[
{
"msg_contents": "Hello all,\n\nI am trying to help the Django project by investigating if there should \nbe some default batch size limits for insert and delete queries. This is \nrealted to a couple of tickets which deal with SQLite's inability to \ndeal with more than 1000 parameters in a single query. That backend \nneeds a limit anyways. It might be possible to implement default limits \nfor other backends at the same time if that seems necessary.\n\nIf I am not mistaken, there are no practical hard limits. So, the \nquestion is if performance is expected to collapse at some point.\n\nLittle can be assumed about the schema or the environment. The inserts \nand deletes are going to be done in one transaction. Foreign keys are \nindexed and they are DEFERRABLE INITIALLY DEFERRED by default. \nPostgreSQL version can be anything from 8.2 on.\n\nThe queries will be of form:\n insert into some_table(col1, col2) values (val1, val2), (val3, \nval4), ...;\nand\n delete from some_table where PK in (list_of_pk_values);\n\nSo, is there some common wisdom about the batch sizes? Or is it better \nto do the inserts and deletes in just one batch? I think the case for \nperformance problems needs to be strong before default limits are \nconsidered for PostgreSQL.\n\nThe tickets in question are:\nhttps://code.djangoproject.com/ticket/17788 and \nhttps://code.djangoproject.com/ticket/16426\n\n - Anssi Kääriäinen\n",
"msg_date": "Wed, 29 Feb 2012 12:20:15 +0200",
"msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large insert and delete batches"
},
{
"msg_contents": "Quoting myself:\n\"\"\"\nSo, is there some common wisdom about the batch sizes? Or is it better\nto do the inserts and deletes in just one batch? I think the case for\nperformance problems needs to be strong before default limits are\nconsidered for PostgreSQL.\n\"\"\"\n\nI did a little test about this. My test was to see if there is any interesting difference\nin performance between doing queries in small batches vs doing them in one go.\n\nThe test setup is simple: one table with an integer primary key containing a million rows.\nThe queries are \"select * from the_table where id = ANY(ARRAY[list_of_numbers])\"\nand the similar delete, too.\n\nFor any sane amount of numbers in the list, the result is that doing the queries in smaller\nbatches might be a little faster, but nothing conclusive found. However, once you go into\nmillions of items in the list, the query will OOM my Postgres server. With million items\nin the list the process uses around 700MB of memory, 2 million items is 1.4GB, and beyond\nthat it is an OOM condition. The problem seems to be the array which takes all the memory.\nSo, you can OOM the server by doing \"SELECT ARRAY[large_enough_list_of_numbers]\".\n\nConclusion: as long as you are not doing anything really stupid it seems that there isn't any important\nperformance reasons to split the bulk queries into smaller batches.\n\nFor inserts the conclusion is similar. A lot of memory is used if you go to the millions of items range,\nbut otherwise it seems it doesn't matter if you do many smaller batches versus one larger batch.\n\n - Anssi",
"msg_date": "Thu, 1 Mar 2012 21:06:40 +0200",
"msg_from": "=?iso-8859-1?Q?K=E4=E4ri=E4inen_Anssi?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large insert and delete batches"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 21:06, Kääriäinen Anssi <[email protected]> wrote:\n> The queries are \"select * from the_table where id = ANY(ARRAY[list_of_numbers])\"\n> and the similar delete, too.\n\n> [...] However, once you go into\n> millions of items in the list, the query will OOM my Postgres server.\n\nThe problem with IN() and ARRAY[] is that the whole list of numbers\nhas to be parsed by the SQL syntax parser, which has significant\nmemory and CPU overhead (it has to accept arbitrary expressions in the\nlist). But there's a shortcut around the parser: you can pass in the\nlist as an array literal string, e.g:\nselect * from the_table where id = ANY('{1,2,3,4,5}')\n\nThe SQL parser considers the value one long string and passes it to\nthe array input function, which is a much simpler routine. This should\nscale up much better.\n\nEven better if you could pass in the array as a query parameter, so\nthe SQL parser doesn't even see the long string -- but I think you\nhave to jump through some hoops to do that in psycopg2.\n\nRegards,\nMarti\n",
"msg_date": "Thu, 1 Mar 2012 22:51:52 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large insert and delete batches"
},
{
"msg_contents": "On 03/01/2012 10:51 PM, Marti Raudsepp wrote:\n> The problem with IN() and ARRAY[] is that the whole list of numbers\n> has to be parsed by the SQL syntax parser, which has significant\n> memory and CPU overhead (it has to accept arbitrary expressions in the\n> list). But there's a shortcut around the parser: you can pass in the\n> list as an array literal string, e.g:\n> select * from the_table where id = ANY('{1,2,3,4,5}')\nOK, that explains the memory usage.\n> The SQL parser considers the value one long string and passes it to\n> the array input function, which is a much simpler routine. This should\n> scale up much better.\n>\n> Even better if you could pass in the array as a query parameter, so\n> the SQL parser doesn't even see the long string -- but I think you\n> have to jump through some hoops to do that in psycopg2.\nLuckily there is no need to do any tricks. The question I was trying to \nseek answer for was should there be some default batch size for inserts \nand deletes in Django, and the answer seems clear: the problems appear \nonly when the batch sizes are enormous, so there doesn't seem to be a \nreason to have default limits. Actually, the batch sizes are so large \nthat it is likely the Python process will OOM before you can trigger \nproblems in the DB.\n\n - Anssi\n",
"msg_date": "Fri, 2 Mar 2012 14:51:53 +0200",
"msg_from": "=?UTF-8?B?QW5zc2kgS8Okw6RyacOkaW5lbg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large insert and delete batches"
}
]
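A short SQL sketch of the two alternatives Marti describes, reusing the the_table/id names from the thread: the array-literal form hands the whole list to the array input function as one string, and a prepared statement goes one step further by shipping the list as a parameter ($1) that the SQL parser never has to read.

-- Array literal: the long list is a single string, decoded by the
-- array input function instead of the SQL grammar
select * from the_table where id = any('{1,2,3,4,5}'::int[]);

-- Prepared statement: the array arrives as parameter $1, so the
-- statement text itself stays tiny
prepare fetch_by_ids(int[]) as
    select * from the_table where id = any($1);
execute fetch_by_ids('{1,2,3,4,5}');
deallocate fetch_by_ids;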
[
{
"msg_contents": "On PostgreSQL 9.1.1, I'm experiencing extremely slow/inefficient min/max queries against a partitioned table, despite the recent improvements made in version 9.1. I haven't seen this issue discussed since 9.1 was released, so I wanted to provide an example of the inefficient execution plan in case this is not a known issue with the new version.\n\nIn my case, the query analyzer chooses the wrong index to scan of the child table when the query is made against the parent table. The tables are partitioned by 'fctid'. The query 'SELECT max(date) FROM table WHERE fctid=301 and sec_id=1' correctly uses the index (sec_id, date) when querying against the child table (0.1ms), but when run against the parent table, the planner chooses to scan the (date, sec_id) primary key instead, resulting in a full table scan in some instances (49 minutes!).\n \nIn my example the parent case is empty and all child tables have non-overlapping check constraints. Below is the schema and execution plans.\n\nLet me know if you need anything else. Thanks, Robert\n\n\nParent table schema:\ntemplate1=# \\d f_data\n Table \"public.f_data\"\n Column | Type | Modifiers\n--------+----------+-----------\n sec_id | integer | not null\n date | date | not null\n fctid | smallint | not null\n value | real | not null\nIndexes:\n \"f_data_pkey\" PRIMARY KEY, btree (fctid, date, sec_id)\nTriggers:\n insert_f_data_trigger BEFORE INSERT ON f_data FOR EACH ROW EXECUTE PROCEDURE f_data_insert_trigger()\nNumber of child tables: 7 (Use \\d+ to list them.)\n\nChild table schema:\ntemplate1=# \\d f_data301\n Table \"public.f_data301\"\n Column | Type | Modifiers\n--------+----------+-----------\n sec_id | integer | not null\n date | date | not null\n fctid | smallint | not null\n value | real | not null\nIndexes:\n \"pk_f_data_rsi2\" PRIMARY KEY, btree (date, sec_id) CLUSTER\n \"f_data_rsi2_idx\" btree (sec_id, date)\nCheck constraints:\n \"f_data_rsi2_fctid_check\" CHECK (fctid = 301)\nInherits: f_data\n\n\ntemplate1=# EXPLAIN ANALYZE SELECT max(date) FROM f_data301 WHERE fctid=301 and sec_id=1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Result (cost=1.84..1.85 rows=1 width=0) (actual time=0.077..0.078 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..1.84 rows=1 width=4) (actual time=0.074..0.074 rows=0 loops=1)\n -> Index Scan Backward using f_data_rsi2_idx on f_data301 (cost=0.00..6370.59 rows=3465 width=4) (a\n Index Cond: ((sec_id = 1) AND (date IS NOT NULL))\n Filter: (fctid = 301)\n Total runtime: 0.132 ms\n(7 rows)\n\n\ntemplate1=# EXPLAIN ANALYZE SELECT max(date) FROM f_data where fctid=301 and sec_id=1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Result (cost=522.10..522.11 rows=1 width=0) (actual time=2921439.560..2921439.561 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.02..522.10 rows=1 width=4) (actual time=2921439.554..2921439.554 rows=0 loops=1)\n -> Merge Append (cost=0.02..1809543.34 rows=3466 width=4) (actual time=2921439.551..2921439.551 row\n Sort Key: public.f_data.date\n -> Sort (cost=0.01..0.02 rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=1)\n Sort Key: public.f_data.date\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on f_data (cost=0.00..0.00 rows=1 width=4) (actual time=0.002..0.002 rows=0\n Filter: ((date IS NOT NULL) AND (fctid = 301) AND (sec_id = 1))\n -> Index Scan Backward using 
pk_f_data_rsi2 on f_data301 f_data (cost=0.00..1809499.99 rows=3\n Index Cond: ((date IS NOT NULL) AND (sec_id = 1))\n Filter: (fctid = 301)\n Total runtime: 2921439.645 ms\n(14 rows)\n\n\ntemplate1=# select version();\n version\n---------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50), 64-bit\n\nRobert McGehee, CFA\nGeode Capital Management, LLC\nOne Post Office Square, 28th Floor | Boston, MA | 02109\nDirect: (617)392-8396\n\nThis e-mail, and any attachments hereto, are intended for use by the addressee(s) only and may contain information that is (i) confidential information of Geode Capital Management, LLC and/or its affiliates, and/or (ii) proprietary information of Geode Capital Management, LLC and/or its affiliates. If you are not the intended recipient of this e-mail, or if you have otherwise received this e-mail in error, please immediately notify me by telephone (you may call collect), or by e-mail, and please permanently delete the original, any print outs and any copies of the foregoing. Any dissemination, distribution or copying of this e-mail is strictly prohibited. \n\n\n",
"msg_date": "Wed, 29 Feb 2012 12:32:32 -0500",
"msg_from": "\"McGehee, Robert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inefficient min/max against partition (ver 9.1.1)"
},
{
"msg_contents": "\"McGehee, Robert\" <[email protected]> writes:\n> On PostgreSQL 9.1.1, I'm experiencing extremely slow/inefficient\n> min/max queries against a partitioned table, despite the recent\n> improvements made in version 9.1.\n\nThanks for the report. I believe this will fix it:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ef03b34550e3577c4be3baa25b70787f5646c57b\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Mar 2012 14:31:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient min/max against partition (ver 9.1.1) "
}
]
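Until a server carries the fix Tom references, a workaround consistent with the plans shown above is to aim the min/max query at the known child partition, where the (sec_id, date) index gets used; the table and column names below are the ones from the report.

-- Through the parent on 9.1.1: can walk the child's (date, sec_id)
-- primary key backward (minutes, per the EXPLAIN ANALYZE above)
SELECT max(date) FROM f_data WHERE fctid = 301 AND sec_id = 1;

-- Directly against the partition: uses f_data_rsi2_idx (sub-millisecond above)
SELECT max(date) FROM f_data301 WHERE fctid = 301 AND sec_id = 1;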
[
{
"msg_contents": "Do you see any performance difference between the following approaches? The\nassumption is that most of the rows in the query will be requested often\nenough.\n\n \n\n1. SQL function.\n\nCREATE OR REPLACE FUNCTION X(IN a_id uuid, IN b_id uuid)\n\n RETURNS int\n\n STABLE\n\nAS $$\n\n SELECT count(1)\n\n FROM A, B\n\n WHERE a_join_id = b_join_id\n\n AND A.a_id = a_id\n\n AND B.b_id = b_id;\n\n$$ LANGUAGE SQL;\n\n \n\nSELECT X(a_id, b_id);\n\n \n\n2. View.\n\nCREATE OR REPLACE VIEW X AS \n\n SELECT a_id, b_id, count(1) cnt\n\n FROM A, B\n\n WHERE a_join_id = b_join_id\n\nGROUP BY (a_id, b_id)\n\n \n\nSELECT cnt FROM X WHERE X.a_id = a_id and X.B_id = b_id;\n\n \n\nThank you, \n\nIgor\n\n\nDo you see any performance difference between the following approaches? The assumption is that most of the rows in the query will be requested often enough. 1. SQL function.CREATE OR REPLACE FUNCTION X(IN a_id uuid, IN b_id uuid) RETURNS int STABLEAS $$ SELECT count(1) FROM A, B WHERE a_join_id = b_join_id AND A.a_id = a_id AND B.b_id = b_id;$$ LANGUAGE SQL; SELECT X(a_id, b_id); 2. View.CREATE OR REPLACE VIEW X AS SELECT a_id, b_id, count(1) cnt FROM A, B WHERE a_join_id = b_join_id GROUP BY (a_id, b_id) SELECT cnt FROM X WHERE X.a_id = a_id and X.B_id = b_id; Thank you, Igor",
"msg_date": "Wed, 29 Feb 2012 12:37:56 -0800",
"msg_from": "\"Igor Schtein\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of SQL Function versus View"
},
{
"msg_contents": "On Wed, Feb 29, 2012 at 3:37 PM, Igor Schtein <[email protected]> wrote:\n> Do you see any performance difference between the following approaches? The\n> assumption is that most of the rows in the query will be requested often\n> enough.\n>\n>\n>\n> 1. SQL function.\n>\n> CREATE OR REPLACE FUNCTION X(IN a_id uuid, IN b_id uuid)\n>\n> RETURNS int\n>\n> STABLE\n>\n> AS $$\n>\n> SELECT count(1)\n>\n> FROM A, B\n>\n> WHERE a_join_id = b_join_id\n>\n> AND A.a_id = a_id\n>\n> AND B.b_id = b_id;\n>\n> $$ LANGUAGE SQL;\n>\n>\n>\n> SELECT X(a_id, b_id);\n>\n>\n>\n> 2. View.\n>\n> CREATE OR REPLACE VIEW X AS\n>\n> SELECT a_id, b_id, count(1) cnt\n>\n> FROM A, B\n>\n> WHERE a_join_id = b_join_id\n>\n> GROUP BY (a_id, b_id)\n>\n>\n>\n> SELECT cnt FROM X WHERE X.a_id = a_id and X.B_id = b_id;\n\nYou should probably test this in your environment, but I'd expect the\nview to be better. Wrapping logic inside PL/pgsql functions\nneedlessly rarely turn outs to be a win.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 3 Apr 2012 10:21:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of SQL Function versus View"
},
{
"msg_contents": "\n\nOn 04/03/2012 10:21 AM, Robert Haas wrote:\n>\n> You should probably test this in your environment, but I'd expect the\n> view to be better. Wrapping logic inside PL/pgsql functions\n> needlessly rarely turn outs to be a win.\n\n\n\nRight, But also note that auto_explain is very useful in getting plans \nand times of queries nested in functions which can't easily be got \notherwise.\n\ncheers\n\nandrew\n",
"msg_date": "Tue, 03 Apr 2012 10:30:00 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of SQL Function versus View"
}
]
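Andrew's suggestion is straightforward to try: auto_explain ships as a contrib module, and the settings below are its standard knobs. This is a sketch of a per-session setup, with an arbitrary zero threshold so every plan is logged.

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement's plan
SET auto_explain.log_nested_statements = on;  -- include queries run inside functions
SET auto_explain.log_analyze = on;            -- actual times and row counts

-- Call the function as usual, e.g. SELECT X(a_id, b_id);
-- the plan of its inner SELECT then shows up in the server log.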
[
{
"msg_contents": "Hello,\nmy example query (and explain) is:\n$ explain SELECT count(*) from (select * from users_profile order by id)\nu_p;\n QUERY PLAN\n---------------------------------------------------------------------------\n Aggregate (cost=1.06..1.07 rows=1 width=0)\n -> Sort (cost=1.03..1.03 rows=2 width=572)\n Sort Key: users_profile.id\n -> Seq Scan on users_profile (cost=0.00..1.02 rows=2 width=572)\n(4 rows)\n\nMeseems \"order by id\" can be ignored by planner. It should speed up\nquery without side effect. I know the query should be fixed but this is\nreal and simplified query from real application.\nDoes postgresql team think ppostgres should be smarter than user and fix\nuser queries? If answer is positive please treat this as \"feature request\".\nThank you and regards,\nMarcin.\n",
"msg_date": "Thu, 01 Mar 2012 12:45:28 +0100",
"msg_from": "=?UTF-8?B?TWFyY2luIE1pcm9zxYJhdw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[planner] Ignore \"order by\" in subselect if parrent do count(*)"
},
{
"msg_contents": "On 1 March 2012 12:45, Marcin Mirosław <[email protected]> wrote:\n\n> Hello,\n> my example query (and explain) is:\n> $ explain SELECT count(*) from (select * from users_profile order by id)\n> u_p;\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> Aggregate (cost=1.06..1.07 rows=1 width=0)\n> -> Sort (cost=1.03..1.03 rows=2 width=572)\n> Sort Key: users_profile.id\n> -> Seq Scan on users_profile (cost=0.00..1.02 rows=2 width=572)\n> (4 rows)\n>\n> Meseems \"order by id\" can be ignored by planner. It should speed up\n> query without side effect. I know the query should be fixed but this is\n> real and simplified query from real application.\n> Does postgresql team think ppostgres should be smarter than user and fix\n> user queries? If answer is positive please treat this as \"feature request\".\n> Thank you and regards,\n> Marcin.\n>\n>\nIf you have only 2 rows in the table, then the plan really doesn't matter\ntoo much. Sorting two rows would be really fast :)\n\nTry to check it with 10k rows.\n\nregards\nSzymon\n\nOn 1 March 2012 12:45, Marcin Mirosław <[email protected]> wrote:\nHello,\nmy example query (and explain) is:\n$ explain SELECT count(*) from (select * from users_profile order by id)\nu_p;\n QUERY PLAN\n---------------------------------------------------------------------------\n Aggregate (cost=1.06..1.07 rows=1 width=0)\n -> Sort (cost=1.03..1.03 rows=2 width=572)\n Sort Key: users_profile.id\n -> Seq Scan on users_profile (cost=0.00..1.02 rows=2 width=572)\n(4 rows)\n\nMeseems \"order by id\" can be ignored by planner. It should speed up\nquery without side effect. I know the query should be fixed but this is\nreal and simplified query from real application.\nDoes postgresql team think ppostgres should be smarter than user and fix\nuser queries? If answer is positive please treat this as \"feature request\".\nThank you and regards,\nMarcin.\nIf you have only 2 rows in the table, then the plan really doesn't matter too much. Sorting two rows would be really fast :)Try to check it with 10k rows.\nregardsSzymon",
"msg_date": "Thu, 1 Mar 2012 12:50:08 +0100",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if parrent do count(*)"
},
{
"msg_contents": "W dniu 01.03.2012 12:50, Szymon Guz pisze:\nHi Szymon,\n> If you have only 2 rows in the table, then the plan really doesn't\n> matter too much. Sorting two rows would be really fast :)\n> \n> Try to check it with 10k rows.\n\nIt doesn't matter (in this case) how many records is in user_profile\ntable. Planner does sorting.\nHere is version with more rows:\n$ explain (analyze,verbose,buffers) SELECT count(*) from (select * from\nusers_profile order by id) u_p;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1593639.92..1593639.93 rows=1 width=0) (actual\ntime=11738.498..11738.498 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=2499 read=41749 written=10595, temp read=17107\nwritten=17107\n -> Sort (cost=1443640.26..1468640.21 rows=9999977 width=4) (actual\ntime=9804.461..10963.911 rows=10000000 loops=1)\n Output: users_profile.id\n Sort Key: users_profile.id\n Sort Method: external sort Disk: 136856kB\n Buffers: shared hit=2499 read=41749 written=10595, temp\nread=17107 written=17107\n -> Seq Scan on public.users_profile (cost=0.00..144247.77\nrows=9999977 width=4) (actual time=0.021..1192.202 rows=10000000 loops=1)\n Output: users_profile.id\n Buffers: shared hit=2499 read=41749 written=10595\n Total runtime: 11768.199 ms\n(12 rows)\n\nAnd without \"order by\":\n$ explain (analyze,verbose,buffers) SELECT count(*) from (select * from\nusers_profile ) u_p;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169247.71..169247.72 rows=1 width=0) (actual\ntime=1757.613..1757.613 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=2522 read=41726\n -> Seq Scan on public.users_profile (cost=0.00..144247.77\nrows=9999977 width=0) (actual time=0.032..946.166 rows=10000000 loops=1)\n Output: users_profile.id\n Buffers: shared hit=2522 read=41726\n Total runtime: 1757.656 ms\n(7 rows)\n",
"msg_date": "Thu, 01 Mar 2012 13:02:24 +0100",
"msg_from": "=?UTF-8?B?TWFyY2luIE1pcm9zxYJhdw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if parrent\n do count(*)"
},
{
"msg_contents": "On 1 March 2012 13:02, Marcin Mirosław <[email protected]> wrote:\n\n> W dniu 01.03.2012 12:50, Szymon Guz pisze:\n> Hi Szymon,\n> > If you have only 2 rows in the table, then the plan really doesn't\n> > matter too much. Sorting two rows would be really fast :)\n> >\n> > Try to check it with 10k rows.\n>\n> It doesn't matter (in this case) how many records is in user_profile\n> table. Planner does sorting.\n> Here is version with more rows:\n> $ explain (analyze,verbose,buffers) SELECT count(*) from (select * from\n> users_profile order by id) u_p;\n> QUERY\n> PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1593639.92..1593639.93 rows=1 width=0) (actual\n> time=11738.498..11738.498 rows=1 loops=1)\n> Output: count(*)\n> Buffers: shared hit=2499 read=41749 written=10595, temp read=17107\n> written=17107\n> -> Sort (cost=1443640.26..1468640.21 rows=9999977 width=4) (actual\n> time=9804.461..10963.911 rows=10000000 loops=1)\n> Output: users_profile.id\n> Sort Key: users_profile.id\n> Sort Method: external sort Disk: 136856kB\n> Buffers: shared hit=2499 read=41749 written=10595, temp\n> read=17107 written=17107\n> -> Seq Scan on public.users_profile (cost=0.00..144247.77\n> rows=9999977 width=4) (actual time=0.021..1192.202 rows=10000000 loops=1)\n> Output: users_profile.id\n> Buffers: shared hit=2499 read=41749 written=10595\n> Total runtime: 11768.199 ms\n> (12 rows)\n>\n> And without \"order by\":\n> $ explain (analyze,verbose,buffers) SELECT count(*) from (select * from\n> users_profile ) u_p;\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=169247.71..169247.72 rows=1 width=0) (actual\n> time=1757.613..1757.613 rows=1 loops=1)\n> Output: count(*)\n> Buffers: shared hit=2522 read=41726\n> -> Seq Scan on public.users_profile (cost=0.00..144247.77\n> rows=9999977 width=0) (actual time=0.032..946.166 rows=10000000 loops=1)\n> Output: users_profile.id\n> Buffers: shared hit=2522 read=41726\n> Total runtime: 1757.656 ms\n> (7 rows)\n>\n\n\nCould you provide the postgres version and the structure of users_profile\ntable (with indexes)?\n\n- Szymon\n\nOn 1 March 2012 13:02, Marcin Mirosław <[email protected]> wrote:\nW dniu 01.03.2012 12:50, Szymon Guz pisze:\nHi Szymon,\n> If you have only 2 rows in the table, then the plan really doesn't\n> matter too much. Sorting two rows would be really fast :)\n>\n> Try to check it with 10k rows.\n\nIt doesn't matter (in this case) how many records is in user_profile\ntable. 
Planner does sorting.\nHere is version with more rows:\n$ explain (analyze,verbose,buffers) SELECT count(*) from (select * from\nusers_profile order by id) u_p;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1593639.92..1593639.93 rows=1 width=0) (actual\ntime=11738.498..11738.498 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=2499 read=41749 written=10595, temp read=17107\nwritten=17107\n -> Sort (cost=1443640.26..1468640.21 rows=9999977 width=4) (actual\ntime=9804.461..10963.911 rows=10000000 loops=1)\n Output: users_profile.id\n Sort Key: users_profile.id\n Sort Method: external sort Disk: 136856kB\n Buffers: shared hit=2499 read=41749 written=10595, temp\nread=17107 written=17107\n -> Seq Scan on public.users_profile (cost=0.00..144247.77\nrows=9999977 width=4) (actual time=0.021..1192.202 rows=10000000 loops=1)\n Output: users_profile.id\n Buffers: shared hit=2499 read=41749 written=10595\n Total runtime: 11768.199 ms\n(12 rows)\n\nAnd without \"order by\":\n$ explain (analyze,verbose,buffers) SELECT count(*) from (select * from\nusers_profile ) u_p;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=169247.71..169247.72 rows=1 width=0) (actual\ntime=1757.613..1757.613 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=2522 read=41726\n -> Seq Scan on public.users_profile (cost=0.00..144247.77\nrows=9999977 width=0) (actual time=0.032..946.166 rows=10000000 loops=1)\n Output: users_profile.id\n Buffers: shared hit=2522 read=41726\n Total runtime: 1757.656 ms\n(7 rows)\nCould you provide the postgres version and the structure of users_profile table (with indexes)?- Szymon",
"msg_date": "Thu, 1 Mar 2012 13:09:17 +0100",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if parrent do count(*)"
},
{
"msg_contents": "W dniu 01.03.2012 13:09, Szymon Guz pisze:\n> Could you provide the postgres version and the structure of\n> users_profile table (with indexes)?\n\nArgh, i forgot about version. It's postgresql-9.1.3.\nI don't think structre of users_profile is important here. Me idea is\nlet planner ignore sorting completly. I don't want to have sort quicker\n(in this case, surely;)), i'd like to skip sorting completly because it\ndoesn't influence for query result.\nTable isn't \"real life\", it only demonstrates than planner sometimes can\nsafely skip some steps.\nRegards,\nMarcin\n",
"msg_date": "Thu, 01 Mar 2012 13:19:03 +0100",
"msg_from": "=?UTF-8?B?TWFyY2luIE1pcm9zxYJhdw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if parrent\n do count(*)"
},
{
"msg_contents": "Marcin Miros*aw<[email protected]> wrote:\n \n> SELECT count(*)\n> from (select * from users_profile order by id) u_p;\n \n> \"order by id\" can be ignored by planner.\n \nThis has been discussed before. Certainly not all ORDER BY clauses\nwithin query steps can be ignored, so there would need to be code to\ndetermine whether it was actually useful, which wouldn't be free,\neither in terms of planning time or code maintenance. It wasn't\njudged to be worth the cost. If you want to avoid the cost of the\nsort, don't specify ORDER BY where it doesn't matter.\n \n-Kevin\n",
"msg_date": "Thu, 01 Mar 2012 10:50:57 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if\n parrent do count(*)"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Marcin Miros*aw<[email protected]> wrote:\n>> SELECT count(*)\n>> from (select * from users_profile order by id) u_p;\n \n>> \"order by id\" can be ignored by planner.\n \n> This has been discussed before. Certainly not all ORDER BY clauses\n> within query steps can be ignored, so there would need to be code to\n> determine whether it was actually useful, which wouldn't be free,\n> either in terms of planning time or code maintenance. It wasn't\n> judged to be worth the cost. If you want to avoid the cost of the\n> sort, don't specify ORDER BY where it doesn't matter.\n\nConsidering that ORDER BY in a subquery isn't even legal per spec,\nthere does not seem to be any tenable argument for supposing that\na user wrote it there \"by accident\". It's much more likely that\nhe had some semantic reason for it (say, an order-sensitive function\nin a higher query level) and that we'd break his results by ignoring\nthe ORDER BY. I doubt that very many of the possible reasons for\nneeding ordered output are reliably detectable by the planner, either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Mar 2012 12:50:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if parrent do count(*) "
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 9:50 AM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Marcin Miros*aw<[email protected]> wrote:\n>>> SELECT count(*)\n>>> from (select * from users_profile order by id) u_p;\n>\n>>> \"order by id\" can be ignored by planner.\n>\n>> This has been discussed before. Certainly not all ORDER BY clauses\n>> within query steps can be ignored, so there would need to be code to\n>> determine whether it was actually useful, which wouldn't be free,\n>> either in terms of planning time or code maintenance. It wasn't\n>> judged to be worth the cost. If you want to avoid the cost of the\n>> sort, don't specify ORDER BY where it doesn't matter.\n>\n> Considering that ORDER BY in a subquery isn't even legal per spec,\n\nThat's surprising ... normally it won't affect the result, but with an\noffset or limit it would. Does the offset or limit change the \"not\neven legal\" part? Something like:\n\n select * from foo where foo_id in (select bar_id from bar order by\nbar_id offset 10 limit 10);\n\nCraig\n\n> there does not seem to be any tenable argument for supposing that\n> a user wrote it there \"by accident\". It's much more likely that\n> he had some semantic reason for it (say, an order-sensitive function\n> in a higher query level) and that we'd break his results by ignoring\n> the ORDER BY. I doubt that very many of the possible reasons for\n> needing ordered output are reliably detectable by the planner, either.\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Mar 2012 10:19:31 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if parrent do count(*)"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> On Thu, Mar 1, 2012 at 9:50 AM, Tom Lane <[email protected]> wrote:\n>> Considering that ORDER BY in a subquery isn't even legal per spec,\n\n> That's surprising ... normally it won't affect the result, but with an\n> offset or limit it would. Does the offset or limit change the \"not\n> even legal\" part?\n\nWell, actually, the SQL standard didn't have anything comparable to\noffset/limit until SQL:2008, either. But I have to take back my\nstatement above. It wasn't legal in SQL99, but evidently they added it\nin the 2003 or 2008 edition, presumably to go with the limit\nfunctionality.\n\nAnyway, the long and the short of it is that people depend on ORDER BY\nin subqueries to be honored, and we're not going to break that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Mar 2012 16:29:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [planner] Ignore \"order by\" in subselect if parrent do count(*) "
}
]
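Tom's point is easy to see with a small contrast on the same users_profile table: under count(*) the inner ORDER BY is dead weight and the fix is simply to omit it, while under an order-sensitive consumer (array_agg with a LIMIT here, as an illustrative stand-in) it changes the result, which is why the planner cannot drop it blindly.

-- Redundant: just leave the ORDER BY out
SELECT count(*) FROM (SELECT * FROM users_profile) u_p;

-- Load-bearing: which rows are aggregated, and in what order,
-- depends on the subquery's ORDER BY
SELECT array_agg(id)
FROM (SELECT id FROM users_profile ORDER BY id DESC LIMIT 10) u;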
[
{
"msg_contents": "Hello,\n\nWe have a table with about 60M records, almost all of which in one of\ntwo statuses ('done', 'failed') and a few of them, usually < 1000, in\ndifferent transient statuses. We also have a partial index indexing\nthe transient items: where status not in ('done', 'failed'). Stats are\nabout right for the status field:\n\n stavalues1 | stanumbers1\n---------------+---------------------\n {done,failed} | {0.541767,0.458233}\n\nTrying to access only the transient items leads to very bad records\nestimations, and the choice of poor query plans, as the estimate is to\nread some 25% of the items table (so joins are often performed against\nfull scans of other large tables instead of using indexes):\n\nexplain analyze select count(*) from items where status not in\n('done', 'failed');\n\n Aggregate (cost=2879903.86..2879903.87 rows=1 width=0) (actual\ntime=0.460..0.461 rows=1 loops=1)\n -> Bitmap Heap Scan on items (cost=3674.23..2843184.08\nrows=14687908 width=0) (actual time=0.393..0.453 rows=20 loops=1)\n Recheck Cond: (((status)::text <> 'done'::text) AND\n((status)::text <> 'failed'::text))\n -> Bitmap Index Scan on i_items_transient_status\n(cost=0.00..2.26 rows=14687908 width=0) (actual time=0.381..0.381\nrows=38 loops=1)\n\nLooking at these estimate of the rows in the table (59164756) and the\nestimate of the filtered rows (14687908), looks like the planner is\ncalculating the probability of the status being neither done nor\nfailed as two events independent events:\n\n=# select 59164555 * (1 - 0.541767) * (1 - 0.458233);\n 14687927.231665933605\n\nwhile it is clear (at least in the original query specification,\nbefore splitting the condition in two ANDed parts) that the two events\nare disjoint, so the probability should be calculated as 1 - (p(done)\n+ p(failed)) instead of (1 - p(done)) * (1 - p(failed)).\n\nWriting the query without the \"not\", listing the other statuses, leads\nto a correct plan instead.\n\nIs this a known planner shortcoming or something unexpected, to be\nescalated to -bugs? Server version is 9.0.1.\n\n-- Daniele\n",
"msg_date": "Thu, 1 Mar 2012 16:40:14 +0000",
"msg_from": "Daniele Varrazzo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad estimation for \"where field not in\""
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 6:40 PM, Daniele Varrazzo\n<[email protected]> wrote:\n> Is this a known planner shortcoming or something unexpected, to be\n> escalated to -bugs? Server version is 9.0.1.\n\nThe relevant code is in scalararraysel() function. It makes the\nassumption that element wise comparisons are completely independent,\nwhile the exact opposite is true. This has been this way since\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=290166f93404d8759f4bf60ef1732c8ba9a52785\nintroduced it to version 8.2.\n\nAt least for equality and inequality ops it would be good to rework\nthe logic to aggregate with\ns1 = s1 + s2 and s1 = s1 + s2 - 1 correspondingly.\n\n--\nAnts Aasma\n",
"msg_date": "Thu, 1 Mar 2012 19:53:07 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimation for \"where field not in\""
},
{
"msg_contents": "Ants Aasma <[email protected]> writes:\n> On Thu, Mar 1, 2012 at 6:40 PM, Daniele Varrazzo\n> <[email protected]> wrote:\n>> Is this a known planner shortcoming or something unexpected, to be\n>> escalated to -bugs? Server version is 9.0.1.\n\n> The relevant code is in scalararraysel() function. It makes the\n> assumption that element wise comparisons are completely independent,\n> while the exact opposite is true. This has been this way since\n> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=290166f93404d8759f4bf60ef1732c8ba9a52785\n> introduced it to version 8.2.\n\n> At least for equality and inequality ops it would be good to rework\n> the logic to aggregate with\n> s1 = s1 + s2 and s1 = s1 + s2 - 1 correspondingly.\n\nYeah, I was about to make a similar proposal. In principle, when\nworking with a constant array, we could de-dup the array elements\nand then arrive at an exact result ... but that seems like expensive\noverkill, and in particular it'd be penalizing intelligently-written\nqueries (which wouldn't have dups in the array to start with) to benefit\nbadly-written ones. So it seems like the right thing is for\nscalararraysel to (1) check if the operator is equality or inequality,\nand if so (2) just assume the array elements are all different and so\nthe probabilities sum directly. If the operator is something else\nit's probably best to stick with the existing logic. We could probably\nalso protect ourselves a bit more by noting if the sum gives an\nimpossible result (probability > 1 or < 0) and falling back to the\nnormal calculation in that case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Mar 2012 16:18:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimation for \"where field not in\" "
}
]
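Daniele's last observation doubles as the practical workaround while scalararraysel behaves this way: state the predicate positively over the rare values (in the spirit of the partial index) instead of negating the two common ones. A sketch; 'queued' and 'running' are invented stand-ins for the actual transient statuses.

-- Misestimated: (1 - p('done')) * (1 - p('failed')) ~ 25% of the table
SELECT count(*) FROM items WHERE status NOT IN ('done', 'failed');

-- Close to reality: the listed values are rare, so the combined
-- estimate stays small and index plans become attractive again
SELECT count(*) FROM items WHERE status IN ('queued', 'running');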
[
{
"msg_contents": "Hi folks,\n\nI have a system that racks up about 40M log lines per day. I'm able to COPY\nthe log files into a PostgreSQL table that looks like this:\n\nCREATE TABLE activity_unlogged\n(\n user_id character(24) NOT NULL,\n client_ip inet,\n hr_timestamp timestamp without time zone,\n locale character varying,\n log_id character(36),\n method character varying(6),\n server_ip inet,\n uri character varying,\n user_agent character varying\n)\n\nNow, I want to reduce that data to get the last activity that was performed\nby each user in any given hour. It should fit into a table like this:\n\nCREATE TABLE hourly_activity\n(\n activity_hour timestamp without time zone NOT NULL,\n user_id character(24) NOT NULL,\n client_ip inet,\n hr_timestamp timestamp without time zone,\n locale character varying,\n log_id character(36),\n method character varying(6),\n server_ip inet,\n uri character varying,\n user_agent character varying,\n CONSTRAINT hourly_activity_pkey PRIMARY KEY (activity_hour , user_id )\n)\n\nwhere activity_hour is date_trunc('hour', hr_timestamp); (N.B. the primary\nkey constraint)\n\nI am attempting to do that with the following:\n\nINSERT INTO hourly_activity\n SELECT DISTINCT date_trunc('hour', hr_timestamp) AS activity_hour,\nactivity_unlogged.user_id,\n client_ip, hr_timestamp, locale, log_id, method,\nserver_ip, uri, user_agent\n FROM activity_unlogged,\n (SELECT user_id, MAX(hr_timestamp) AS last_timestamp\n FROM activity_unlogged GROUP BY user_id, date_trunc('hour',\nhr_timestamp)) AS last_activity\n WHERE activity_unlogged.user_id = last_activity.user_id AND\nactivity_unlogged.hr_timestamp = last_activity.last_timestamp;\n\nI have two problems:\n\n 1. It's incredibly slow (like: hours). I assume this is because I am\n scanning through a huge unindexed table twice. I imagine there is a more\n efficient way to do this, but I can't think of what it is. If I were doing\n this in a procedural programming language, it might look something like:\n for row in activity_unlogged:\n if (date_trunc('hour', hr_timestamp), user_id) in\n hourly_activity[(activity_hour, user_id)]:\n if hr_timestamp > hourly_activity[(date_trunc('hour',\n hr_timestamp), user_id)][hr_timestamp]:\n hourly_activity <- row # UPDATE\n else:\n hourly_activity <- row # INSERT\n I suspect some implementation of this (hopefully my pseudocode is at\n least somewhat comprehensible) would be very slow as well, but at least it\n would only go through activity_unlogged once. (Then again, it would have\n to rescan hourly_activity each time, so it really wouldn't be any faster\n at all, would it?) I feel like there must be a more efficient way to do\n this in SQL though I can't put my finger on it.\n 2. Turns out (hr_timestamp, user_id) is not unique. So selecting WHERE\n activity_unlogged.user_id = last_activity.user_id AND\n activity_unlogged.hr_timestamp = last_activity.last_timestamp leads to\n multiple records leading to a primary key collision. In such cases, I don't\n really care which of the two rows are picked, I just want to make sure that\n no more than one row is inserted per user per hour. In fact, though I would\n prefer to get the last row for each hour, I could probably get much the\n same effect if I just limited it to one per hour. Though I don't know if\n that really helps at all.\n\nHi folks,I have a system that racks up about 40M log lines per day. 
I'm able to COPY the log files into a PostgreSQL table that looks like this:CREATE TABLE activity_unlogged\n( user_id character(24) NOT NULL, client_ip inet,\n hr_timestamp timestamp without time zone, locale character varying, log_id character(36),\n method character varying(6), server_ip inet, uri character varying,\n user_agent character varying)Now, I want to reduce that data to get the last activity that was performed by each user in any given hour. It should fit into a table like this:\nCREATE TABLE hourly_activity( activity_hour timestamp without time zone NOT NULL,\n user_id character(24) NOT NULL, client_ip inet, hr_timestamp timestamp without time zone,\n locale character varying, log_id character(36), method character varying(6),\n server_ip inet, uri character varying, user_agent character varying,\n CONSTRAINT hourly_activity_pkey PRIMARY KEY (activity_hour , user_id ))\nwhere activity_hour is date_trunc('hour', hr_timestamp); (N.B. the primary key constraint) I am attempting to do that with the following:INSERT INTO hourly_activity \n SELECT DISTINCT date_trunc('hour', hr_timestamp) AS activity_hour, activity_unlogged.user_id, client_ip, hr_timestamp, locale, log_id, method, server_ip, uri, user_agent \n FROM activity_unlogged, (SELECT user_id, MAX(hr_timestamp) AS last_timestamp\n FROM activity_unlogged GROUP BY user_id, date_trunc('hour', hr_timestamp)) AS last_activity WHERE activity_unlogged.user_id = last_activity.user_id AND activity_unlogged.hr_timestamp = last_activity.last_timestamp;\nI have two problems:It's incredibly slow (like: hours). I assume this is because I am scanning through a huge unindexed table twice. I imagine there is a more efficient way to do this, but I can't think of what it is. If I were doing this in a procedural programming language, it might look something like:\nfor row in activity_unlogged: if (date_trunc('hour', hr_timestamp), user_id) in hourly_activity[(activity_hour, user_id)]: if hr_timestamp > hourly_activity[(date_trunc('hour', hr_timestamp), user_id)][hr_timestamp]:\n hourly_activity <- row # UPDATE else: hourly_activity <- row # INSERT\nI suspect some implementation of this (hopefully my pseudocode is at least somewhat comprehensible) would be very slow as well, but at least it would only go through activity_unlogged once. (Then again, it would have to rescan hourly_activity each time, so it really wouldn't be any faster at all, would it?) I feel like there must be a more efficient way to do this in SQL though I can't put my finger on it.\nTurns out (hr_timestamp, user_id) is not unique. So selecting WHERE activity_unlogged.user_id = last_activity.user_id AND activity_unlogged.hr_timestamp = last_activity.last_timestamp leads to multiple records leading to a primary key collision. In such cases, I don't really care which of the two rows are picked, I just want to make sure that no more than one row is inserted per user per hour. In fact, though I would prefer to get the last row for each hour, I could probably get much the same effect if I just limited it to one per hour. Though I don't know if that really helps at all.",
"msg_date": "Thu, 1 Mar 2012 10:27:27 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "efficient data reduction (and deduping)"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 3:27 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> INSERT INTO hourly_activity\n> SELECT DISTINCT date_trunc('hour', hr_timestamp) AS activity_hour,\n> activity_unlogged.user_id,\n> client_ip, hr_timestamp, locale, log_id, method,\n> server_ip, uri, user_agent\n> FROM activity_unlogged,\n> (SELECT user_id, MAX(hr_timestamp) AS last_timestamp\n> FROM activity_unlogged GROUP BY user_id, date_trunc('hour',\n> hr_timestamp)) AS last_activity\n> WHERE activity_unlogged.user_id = last_activity.user_id AND\n> activity_unlogged.hr_timestamp = last_activity.last_timestamp;\n\nTry\n\nINSERT INTO hourly_activity\nSELECT ... everything from au1 ...\nFROM activity_unlogged au1\nLEFT JOIN activity_unlogged au2 ON au2.user_id = au1.user_id\n AND\ndate_trunc('hour', au2.hr_timestamp) = date_trunc('hour',\nau1.hr_timestamp)\n AND\nau2.hr_timestamp < au1.hr_timestamp\nWHERE au2.user_id is null;\n",
"msg_date": "Thu, 1 Mar 2012 15:35:26 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 10:27 AM, Alessandro Gagliardi\n<[email protected]> wrote:\n> Hi folks,\n>\n> I have a system that racks up about 40M log lines per day. I'm able to COPY\n> the log files into a PostgreSQL table that looks like this:\n\nSince you're using a COPY command and the table has a simple column\nwith exactly the value you want, why not filter it using grep(1) or\nsomething similar and load the filtered result directly into the\nhourly table?\n\nCraig\n\n>\n> CREATE TABLE activity_unlogged\n> (\n> user_id character(24) NOT NULL,\n> client_ip inet,\n> hr_timestamp timestamp without time zone,\n> locale character varying,\n> log_id character(36),\n> method character varying(6),\n> server_ip inet,\n> uri character varying,\n> user_agent character varying\n> )\n>\n> Now, I want to reduce that data to get the last activity that was performed\n> by each user in any given hour. It should fit into a table like this:\n>\n> CREATE TABLE hourly_activity\n> (\n> activity_hour timestamp without time zone NOT NULL,\n> user_id character(24) NOT NULL,\n> client_ip inet,\n> hr_timestamp timestamp without time zone,\n> locale character varying,\n> log_id character(36),\n> method character varying(6),\n> server_ip inet,\n> uri character varying,\n> user_agent character varying,\n> CONSTRAINT hourly_activity_pkey PRIMARY KEY (activity_hour , user_id )\n> )\n>\n> where activity_hour is date_trunc('hour', hr_timestamp); (N.B. the primary\n> key constraint)\n>\n> I am attempting to do that with the following:\n>\n> INSERT INTO hourly_activity\n> SELECT DISTINCT date_trunc('hour', hr_timestamp) AS activity_hour,\n> activity_unlogged.user_id,\n> client_ip, hr_timestamp, locale, log_id, method,\n> server_ip, uri, user_agent\n> FROM activity_unlogged,\n> (SELECT user_id, MAX(hr_timestamp) AS last_timestamp\n> FROM activity_unlogged GROUP BY user_id, date_trunc('hour',\n> hr_timestamp)) AS last_activity\n> WHERE activity_unlogged.user_id = last_activity.user_id AND\n> activity_unlogged.hr_timestamp = last_activity.last_timestamp;\n>\n> I have two problems:\n>\n> It's incredibly slow (like: hours). I assume this is because I am scanning\n> through a huge unindexed table twice. I imagine there is a more efficient\n> way to do this, but I can't think of what it is. If I were doing this in a\n> procedural programming language, it might look something like:\n> for row in activity_unlogged:\n> if (date_trunc('hour', hr_timestamp), user_id) in\n> hourly_activity[(activity_hour, user_id)]:\n> if hr_timestamp > hourly_activity[(date_trunc('hour',\n> hr_timestamp), user_id)][hr_timestamp]:\n> hourly_activity <- row # UPDATE\n> else:\n> hourly_activity <- row # INSERT\n> I suspect some implementation of this (hopefully my pseudocode is at least\n> somewhat comprehensible) would be very slow as well, but at least it would\n> only go through activity_unlogged once. (Then again, it would have to\n> rescan hourly_activity each time, so it really wouldn't be any faster at\n> all, would it?) I feel like there must be a more efficient way to do this in\n> SQL though I can't put my finger on it.\n> Turns out (hr_timestamp, user_id) is not unique. So selecting WHERE\n> activity_unlogged.user_id = last_activity.user_id AND\n> activity_unlogged.hr_timestamp = last_activity.last_timestamp leads to\n> multiple records leading to a primary key collision. In such cases, I don't\n> really care which of the two rows are picked, I just want to make sure that\n> no more than one row is inserted per user per hour. 
In fact, though I would\n> prefer to get the last row for each hour, I could probably get much the same\n> effect if I just limited it to one per hour. Though I don't know if that\n> really helps at all.\n",
"msg_date": "Thu, 1 Mar 2012 10:35:27 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 10:27 AM, Alessandro Gagliardi\n<[email protected]> wrote:\n> Now, I want to reduce that data to get the last activity that was performed\n> by each user in any given hour. It should fit into a table like this:\n>\n\nHow about:\n\n1) Create an expression based index on date_trunc('hour', hr_timestamp)\n2) Create a view on that showing the last value\n3) If you want to throw away the data use CREATE TABLE AS on the\nresults of the view.\n\nYou may also want to investigate window functions.\n\n-p\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Thu, 1 Mar 2012 10:40:20 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "Alessandro Gagliardi <[email protected]> wrote:\n \n> hr_timestamp timestamp without time zone,\n \nIn addition to the responses which more directly answer your\nquestion, I feel I should point out that this will not represent a\nsingle moment in time. At the end of Daylight Saving Time, the\nvalue will jump backward and you will run through a range of time\nwhich will overlap existing entries. There is almost never a good\nreason to use TIMESTAMP WITHOUT TIME ZONE -- TIMESTAMP WITH TIME\nZONE is required if you want the value to represent a moment in\ntime.\n \n-Kevin\n",
"msg_date": "Thu, 01 Mar 2012 12:51:42 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "I was thinking of adding an index, but thought it would be pointless since\nI would only be using the index once before dropping the table (after its\nloaded into hourly_activity). I assumed it would take longer to create the\nindex and then use it than to just seq scan once or twice. Am I wrong in\nthat assumption?\n\nOn Thu, Mar 1, 2012 at 10:40 AM, Peter van Hardenberg <[email protected]> wrote:\n\n> On Thu, Mar 1, 2012 at 10:27 AM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > Now, I want to reduce that data to get the last activity that was\n> performed\n> > by each user in any given hour. It should fit into a table like this:\n> >\n>\n> How about:\n>\n> 1) Create an expression based index on date_trunc('hour', hr_timestamp)\n> 2) Create a view on that showing the last value\n> 3) If you want to throw away the data use CREATE TABLE AS on the\n> results of the view.\n>\n> You may also want to investigate window functions.\n>\n> -p\n>\n> --\n> Peter van Hardenberg\n> San Francisco, California\n> \"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n>\n\nI was thinking of adding an index, but thought it would be pointless since I would only be using the index once before dropping the table (after its loaded into hourly_activity). I assumed it would take longer to create the index and then use it than to just seq scan once or twice. Am I wrong in that assumption?\nOn Thu, Mar 1, 2012 at 10:40 AM, Peter van Hardenberg <[email protected]> wrote:\nOn Thu, Mar 1, 2012 at 10:27 AM, Alessandro Gagliardi\n<[email protected]> wrote:\n> Now, I want to reduce that data to get the last activity that was performed\n> by each user in any given hour. It should fit into a table like this:\n>\n\nHow about:\n\n1) Create an expression based index on date_trunc('hour', hr_timestamp)\n2) Create a view on that showing the last value\n3) If you want to throw away the data use CREATE TABLE AS on the\nresults of the view.\n\nYou may also want to investigate window functions.\n\n-p\n\n--\nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut",
"msg_date": "Thu, 1 Mar 2012 11:28:32 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "All of our servers run in UTC specifically to avoid this sort of problem.\nIt's kind of annoying actually, because we're a San Francisco company and\nso whenever I have to do daily analytics, I have to shift everything to\nPacific. But in this case it's handy. Thanks for the keen eye though.\n\nOn Thu, Mar 1, 2012 at 10:51 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> Alessandro Gagliardi <[email protected]> wrote:\n>\n> > hr_timestamp timestamp without time zone,\n>\n> In addition to the responses which more directly answer your\n> question, I feel I should point out that this will not represent a\n> single moment in time. At the end of Daylight Saving Time, the\n> value will jump backward and you will run through a range of time\n> which will overlap existing entries. There is almost never a good\n> reason to use TIMESTAMP WITHOUT TIME ZONE -- TIMESTAMP WITH TIME\n> ZONE is required if you want the value to represent a moment in\n> time.\n>\n> -Kevin\n>\n\nAll of our servers run in UTC specifically to avoid this sort of problem. It's kind of annoying actually, because we're a San Francisco company and so whenever I have to do daily analytics, I have to shift everything to Pacific. But in this case it's handy. Thanks for the keen eye though.\nOn Thu, Mar 1, 2012 at 10:51 AM, Kevin Grittner <[email protected]> wrote:\nAlessandro Gagliardi <[email protected]> wrote:\n\n> hr_timestamp timestamp without time zone,\n\nIn addition to the responses which more directly answer your\nquestion, I feel I should point out that this will not represent a\nsingle moment in time. At the end of Daylight Saving Time, the\nvalue will jump backward and you will run through a range of time\nwhich will overlap existing entries. There is almost never a good\nreason to use TIMESTAMP WITHOUT TIME ZONE -- TIMESTAMP WITH TIME\nZONE is required if you want the value to represent a moment in\ntime.\n\n-Kevin",
"msg_date": "Thu, 1 Mar 2012 11:29:00 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "Hah! Yeah, that might would work. Except that I suck at grep. :(\nPerhaps that's a weakness I should remedy.\n\nOn Thu, Mar 1, 2012 at 10:35 AM, Craig James <[email protected]> wrote:\n\n> On Thu, Mar 1, 2012 at 10:27 AM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > Hi folks,\n> >\n> > I have a system that racks up about 40M log lines per day. I'm able to\n> COPY\n> > the log files into a PostgreSQL table that looks like this:\n>\n> Since you're using a COPY command and the table has a simple column\n> with exactly the value you want, why not filter it using grep(1) or\n> something similar and load the filtered result directly into the\n> hourly table?\n>\n> Craig\n>\n> >\n> > CREATE TABLE activity_unlogged\n> > (\n> > user_id character(24) NOT NULL,\n> > client_ip inet,\n> > hr_timestamp timestamp without time zone,\n> > locale character varying,\n> > log_id character(36),\n> > method character varying(6),\n> > server_ip inet,\n> > uri character varying,\n> > user_agent character varying\n> > )\n> >\n> > Now, I want to reduce that data to get the last activity that was\n> performed\n> > by each user in any given hour. It should fit into a table like this:\n> >\n> > CREATE TABLE hourly_activity\n> > (\n> > activity_hour timestamp without time zone NOT NULL,\n> > user_id character(24) NOT NULL,\n> > client_ip inet,\n> > hr_timestamp timestamp without time zone,\n> > locale character varying,\n> > log_id character(36),\n> > method character varying(6),\n> > server_ip inet,\n> > uri character varying,\n> > user_agent character varying,\n> > CONSTRAINT hourly_activity_pkey PRIMARY KEY (activity_hour , user_id )\n> > )\n> >\n> > where activity_hour is date_trunc('hour', hr_timestamp); (N.B. the\n> primary\n> > key constraint)\n> >\n> > I am attempting to do that with the following:\n> >\n> > INSERT INTO hourly_activity\n> > SELECT DISTINCT date_trunc('hour', hr_timestamp) AS activity_hour,\n> > activity_unlogged.user_id,\n> > client_ip, hr_timestamp, locale, log_id, method,\n> > server_ip, uri, user_agent\n> > FROM activity_unlogged,\n> > (SELECT user_id, MAX(hr_timestamp) AS last_timestamp\n> > FROM activity_unlogged GROUP BY user_id,\n> date_trunc('hour',\n> > hr_timestamp)) AS last_activity\n> > WHERE activity_unlogged.user_id = last_activity.user_id AND\n> > activity_unlogged.hr_timestamp = last_activity.last_timestamp;\n> >\n> > I have two problems:\n> >\n> > It's incredibly slow (like: hours). I assume this is because I am\n> scanning\n> > through a huge unindexed table twice. I imagine there is a more efficient\n> > way to do this, but I can't think of what it is. If I were doing this in\n> a\n> > procedural programming language, it might look something like:\n> > for row in activity_unlogged:\n> > if (date_trunc('hour', hr_timestamp), user_id) in\n> > hourly_activity[(activity_hour, user_id)]:\n> > if hr_timestamp > hourly_activity[(date_trunc('hour',\n> > hr_timestamp), user_id)][hr_timestamp]:\n> > hourly_activity <- row # UPDATE\n> > else:\n> > hourly_activity <- row # INSERT\n> > I suspect some implementation of this (hopefully my pseudocode is at\n> least\n> > somewhat comprehensible) would be very slow as well, but at least it\n> would\n> > only go through activity_unlogged once. (Then again, it would have to\n> > rescan hourly_activity each time, so it really wouldn't be any faster at\n> > all, would it?) 
I feel like there must be a more efficient way to do\n> this in\n> SQL though I can't put my finger on it.\n> Turns out (hr_timestamp, user_id) is not unique. So selecting WHERE\n> activity_unlogged.user_id = last_activity.user_id AND\n> activity_unlogged.hr_timestamp = last_activity.last_timestamp leads to\n> multiple records leading to a primary key collision. In such cases, I\n don't\n> really care which of the two rows are picked, I just want to make sure\n that\n> no more than one row is inserted per user per hour. In fact, though I\n would\n> prefer to get the last row for each hour, I could probably get much the\n same\n> effect if I just limited it to one per hour. Though I don't know if that\n> really helps at all.\n>\n",
"msg_date": "Thu, 1 Mar 2012 11:30:06 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "Interesting solution. If I'm not mistaken, this does solve the problem of\nhaving two entries for the same user at the exact same time (which violates\nmy pk constraint) but it does so by leaving both of them out (since there\nis no au1.hr_timestamp > au2.hr_timestamp in that case). Is that right?\n\nOn Thu, Mar 1, 2012 at 10:35 AM, Claudio Freire <[email protected]>wrote:\n>\n> Try\n>\n> INSERT INTO hourly_activity\n> SELECT ... everything from au1 ...\n> FROM activity_unlogged au1\n> LEFT JOIN activity_unlogged au2 ON au2.user_id = au1.user_id\n> AND\n> date_trunc('hour', au2.hr_timestamp) = date_trunc('hour',\n> au1.hr_timestamp)\n> AND\n> au2.hr_timestamp < au1.hr_timestamp\n> WHERE au2.user_id is null;\n>\n\nInteresting solution. If I'm not mistaken, this does solve the problem of having two entries for the same user at the exact same time (which violates my pk constraint) but it does so by leaving both of them out (since there is no au1.hr_timestamp > au2.hr_timestamp in that case). Is that right?\nOn Thu, Mar 1, 2012 at 10:35 AM, Claudio Freire <[email protected]> wrote:\nTry\n\nINSERT INTO hourly_activity\nSELECT ... everything from au1 ...\nFROM activity_unlogged au1\nLEFT JOIN activity_unlogged au2 ON au2.user_id = au1.user_id\n AND\ndate_trunc('hour', au2.hr_timestamp) = date_trunc('hour',\nau1.hr_timestamp)\n AND\nau2.hr_timestamp < au1.hr_timestamp\nWHERE au2.user_id is null;",
"msg_date": "Thu, 1 Mar 2012 11:35:48 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 4:35 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> Interesting solution. If I'm not mistaken, this does solve the problem of\n> having two entries for the same user at the exact same time (which violates\n> my pk constraint) but it does so by leaving both of them out (since there is\n> no au1.hr_timestamp > au2.hr_timestamp in that case). Is that right?\n\nYes, but it would have to be same *exact* time (not same hour).\n\nYou can use more fields to desambiguate too, ie:\n\nau1.hr_timestamp > au2.hr_timestamp or (au1.hr_timestamp ==\nau2.hr_timestamp and au1.some_other_field > au2.some_other_field)\n\nIf you have a sequential id to use in desambiguation, it would be best.\n",
"msg_date": "Thu, 1 Mar 2012 16:39:12 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "Ah, yes, that makes sense. Thank you!\n\nOn Thu, Mar 1, 2012 at 11:39 AM, Claudio Freire <[email protected]>wrote:\n\n> On Thu, Mar 1, 2012 at 4:35 PM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > Interesting solution. If I'm not mistaken, this does solve the problem of\n> > having two entries for the same user at the exact same time (which\n> violates\n> > my pk constraint) but it does so by leaving both of them out (since\n> there is\n> > no au1.hr_timestamp > au2.hr_timestamp in that case). Is that right?\n>\n> Yes, but it would have to be same *exact* time (not same hour).\n>\n> You can use more fields to desambiguate too, ie:\n>\n> au1.hr_timestamp > au2.hr_timestamp or (au1.hr_timestamp ==\n> au2.hr_timestamp and au1.some_other_field > au2.some_other_field)\n>\n> If you have a sequential id to use in desambiguation, it would be best.\n>\n\nAh, yes, that makes sense. Thank you!On Thu, Mar 1, 2012 at 11:39 AM, Claudio Freire <[email protected]> wrote:\nOn Thu, Mar 1, 2012 at 4:35 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> Interesting solution. If I'm not mistaken, this does solve the problem of\n> having two entries for the same user at the exact same time (which violates\n> my pk constraint) but it does so by leaving both of them out (since there is\n> no au1.hr_timestamp > au2.hr_timestamp in that case). Is that right?\n\nYes, but it would have to be same *exact* time (not same hour).\n\nYou can use more fields to desambiguate too, ie:\n\nau1.hr_timestamp > au2.hr_timestamp or (au1.hr_timestamp ==\nau2.hr_timestamp and au1.some_other_field > au2.some_other_field)\n\nIf you have a sequential id to use in desambiguation, it would be best.",
"msg_date": "Thu, 1 Mar 2012 11:43:54 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 4:39 PM, Claudio Freire <[email protected]> wrote:\n>> Interesting solution. If I'm not mistaken, this does solve the problem of\n>> having two entries for the same user at the exact same time (which violates\n>> my pk constraint) but it does so by leaving both of them out (since there is\n>> no au1.hr_timestamp > au2.hr_timestamp in that case). Is that right?\n>\n> Yes, but it would have to be same *exact* time (not same hour).\n>\n> You can use more fields to desambiguate too, ie:\n>\n> au1.hr_timestamp > au2.hr_timestamp or (au1.hr_timestamp ==\n> au2.hr_timestamp and au1.some_other_field > au2.some_other_field)\n>\n> If you have a sequential id to use in desambiguation, it would be best.\n\nSorry for double posting - but you can also *generate* such an identifier:\n\ncreate sequence temp_seq;\n\nwith identified_au as ( select nextval('temp_seq') as id, * from\nhourly_activity )\nINSERT INTO hourly_activity\nSELECT ... everything from au1 ...\nFROM identified_au au1\nLEFT JOIN identified_au au2 ON au2.user_id = au1.user_id\n AND\ndate_trunc('hour', au2.hr_timestamp) = date_trunc('hour',\nau1.hr_timestamp)\n AND\nau2.hr_timestamp < au1.hr_timestamp OR (au2.hr_timestamp =\nau1.hr_timestamp AND au2.id < au1.id)\nWHERE au2.user_id is null;\n\nShould work if you have 9.x\n",
"msg_date": "Thu, 1 Mar 2012 16:44:22 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: efficient data reduction (and deduping)"
},
{
"msg_contents": "Alessandro Gagliardi <[email protected]> wrote:\n \n> All of our servers run in UTC specifically to avoid this sort of\n> problem. It's kind of annoying actually, because we're a San\n> Francisco company and so whenever I have to do daily analytics, I\n> have to shift everything to Pacific. But in this case it's handy.\n \nIf that's working for you, you might just want to leave it alone;\nbut just so you know:\n \nIf you declare the column as TIMESTAMP WITH TIME ZONE, that it\nstores the timestamp as UTC regardless of what your server's\ndefinition of time zone is. (It doesn't actually store the time\nzone -- it just normalizes the time into UTC for storage.) On\nretrieval it shows that UTC moment in the local timezone. So, all\nthat work you're doing to switch the time zone info around would be\npretty automatic if you used the other type.\n \n-Kevin\n",
"msg_date": "Thu, 01 Mar 2012 14:12:02 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: efficient data reduction (and deduping)"
}
] |
[
{
"msg_contents": "Hi,\n(Sorry about double post, I just registered on the performance mailing \nlist, but sent the mail from the wrong account - if anyone responds, \nplease respond to this address)\n\nAnother issue I have encountered :)\n\nQuery optimizer glitch: \"...WHERE TRUE\" condition in union results in \nbad query plan when sorting the union on a column where for each \nunion-member there exists an index.\nFind minimal example below.\n\nCheers,\nClaus\n\nPostgreSQL 9.1.3 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real \n(Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1, 64-bit\n\n\nDROP TABLE a;\nDROP TABLE b;\n\nCREATE TABLE a AS SELECT generate_series id FROM generate_series(1, \n1000000);\nCREATE TABLE b AS SELECT generate_series id FROM generate_series(1, \n1000000);\nCREATE INDEX idx_a ON a(id);\nCREATE INDEX idx_b ON b(id);\n\nQ1: Returns immediately:\nSELECT c.id FROM (SELECT a.id FROM a UNION ALL SELECT b.id FROM b) c \nORDER BY c.id LIMIT 10;\n\nQ2: Takes a while:\nSELECT c.id FROM (SELECT a.id FROM a UNION ALL SELECT b.id FROM b WHERE \nTRUE) c ORDER BY c.id LIMIT 10;\n\n\nGood plan of Q1:\nEXPLAIN SELECT c.id FROM (SELECT a.id FROM a UNION ALL SELECT b.id FROM \nb) c ORDER BY c.id LIMIT 10;\n Limit (cost=0.01..0.57 rows=10 width=4)\n -> Result (cost=0.01..1123362.70 rows=20000000 width=4)\n -> Merge Append (cost=0.01..1123362.70 rows=20000000 width=4)\n Sort Key: a.id\n -> Index Scan using idx_a on a (cost=0.00..436681.35 \nrows=10000000 width=4)\n -> Index Scan using idx_b on b (cost=0.00..436681.35 \nrows=10000000 width=4)\n\nBad plan of Q2: Does sorting although index scan would be sufficient\nEXPLAIN SELECT c.id FROM (SELECT a.id FROM a UNION ALL SELECT b.id FROM \nb WHERE TRUE) c ORDER BY c.id LIMIT 10;\n Limit (cost=460344.41..460344.77 rows=10 width=4)\n -> Result (cost=460344.41..1172025.76 rows=20000000 width=4)\n -> Merge Append (cost=460344.41..1172025.76 rows=20000000 \nwidth=4)\n Sort Key: a.id\n -> Index Scan using idx_a on a (cost=0.00..436681.35 \nrows=10000000 width=4)\n -> Sort (cost=460344.40..485344.40 rows=10000000 width=4)\n Sort Key: b.id\n -> Seq Scan on b (cost=0.00..144248.00 \nrows=10000000 width=4)\n\n\n",
"msg_date": "Sat, 03 Mar 2012 23:43:17 +0100",
"msg_from": "Claus Stadler <[email protected]>",
"msg_from_op": true,
"msg_subject": "...WHERE TRUE\" condition in union results in bad query pla"
},
{
"msg_contents": "Claus Stadler <[email protected]> writes:\n> Query optimizer glitch: \"...WHERE TRUE\" condition in union results in \n> bad query plan ...\n\nYeah, this is because a nonempty WHERE clause defeats simplifying the\nUNION ALL into a simple \"append relation\" (cf is_safe_append_member()).\nThe planner will eventually figure out that WHERE TRUE is a no-op,\nbut that doesn't happen till later (and there are good reasons to do\nthings in that order).\n\nSooner or later I'd like to relax the restriction that appendrel members\ncan't have extra WHERE clauses, but don't hold your breath waiting...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Mar 2012 22:03:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ...WHERE TRUE\" condition in union results in bad query pla "
},
{
"msg_contents": "On Sat, Mar 3, 2012 at 10:03 PM, Tom Lane <[email protected]> wrote:\n> Claus Stadler <[email protected]> writes:\n>> Query optimizer glitch: \"...WHERE TRUE\" condition in union results in\n>> bad query plan ...\n>\n> Yeah, this is because a nonempty WHERE clause defeats simplifying the\n> UNION ALL into a simple \"append relation\" (cf is_safe_append_member()).\n> The planner will eventually figure out that WHERE TRUE is a no-op,\n> but that doesn't happen till later (and there are good reasons to do\n> things in that order).\n>\n> Sooner or later I'd like to relax the restriction that appendrel members\n> can't have extra WHERE clauses, but don't hold your breath waiting...\n\nDoes this comment need updating?\n\n * Note: the data structure assumes that append-rel members are single\n * baserels. This is OK for inheritance, but it prevents us from pulling\n * up a UNION ALL member subquery if it contains a join. While that could\n * be fixed with a more complex data structure, at present there's not much\n * point because no improvement in the plan could result.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 3 Apr 2012 10:57:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ...WHERE TRUE\" condition in union results in bad query pla"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Mar 3, 2012 at 10:03 PM, Tom Lane <[email protected]> wrote:\n>> Sooner or later I'd like to relax the restriction that appendrel members\n>> can't have extra WHERE clauses, but don't hold your breath waiting...\n\n> Does this comment need updating?\n\n> * Note: the data structure assumes that append-rel members are single\n> * baserels. This is OK for inheritance, but it prevents us from pulling\n> * up a UNION ALL member subquery if it contains a join. While that could\n> * be fixed with a more complex data structure, at present there's not much\n> * point because no improvement in the plan could result.\n\nNo, that's a different restriction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Apr 2012 11:35:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ...WHERE TRUE\" condition in union results in bad query pla "
}
] |
[
{
"msg_contents": "I'd be grateful for advice on specifying the new server\n\nWe presently have one main database server which is performing well. As\nour services expand we are thinking of bringing another database server\nto work with it, and back each up via Postgres 9.1 streaming replication\neach to a VM server -- at present we are doing pg_dumps twice a day and\nusing Postgres 8.4. \n\nThe existing server is a 2 x Quad core E5420 Xeon (2.5GHz) with 8GB of\nRAM with an LSI battery-backed RAID 10 array of 4no 10K SCSI disks,\nproviding about 230GB of usable storage, 150GB of which is on an LV\nproviding reconfigurable space for the databases which are served off an\nXFS formatted volume. \n\nWe presently have 90 databases using around 20GB of disk storage.\nHowever the larger databases are approaching 1GB in size, so in a year I\nimagine the disk requirement will have gone up to 40GB for the same\nnumber of databases. The server also serves some web content. \n\nPerformance is generally good, although we have a few slow running\nqueries due to poor plpgsql design. We would get faster performance, I\nbelieve, by providing more RAM. Sorry -- I should have some pg_bench\noutput to share here.\n\nI believe our existing server together with the new server should be\nable to serve 200--300 databases of our existing type, with around 100\ndatabases on our existing server and perhaps 150 on the new one. After\nthat we would be looking to get a third database server.\n\nI'm presently looking at the following kit:\n\n 1U chassis with 8 2.5\" disk bays\n 2x Intel Xeon E5630 Quad-Core / 4x 2.53GHz / 12MB cache\n 8 channel Areca ARC-1880i (PCI Express x8 card)\n presumably with BBU (can't see it listed at present)\n 2 x 300GB SAS 2.5\" disks for operating system\n (Possibly also 300GB SATA VelociRaptor/10K RPM/32MB cache \n RAID 1\n 4 x 300GB SAS 2.5\" storage disks\n RAID 10\n 48.0GB DDR3 1333MHz registered ECC (12x 4.0GB modules) \n\nMy major question about this chassis, which is 1U, is that it only takes\n2.5\" disks, and presently the supplier does not show 15K SAS disk\noptions. Assuming that I can get the BBU for the Areca card, and that\n15K SAS disks are available, I'd be grateful for comments on this\nconfiguration.\n \nRegards\nRory\n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Sun, 4 Mar 2012 09:58:38 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice sought : new database server"
},
{
"msg_contents": "Hey!\n\nOn 04.03.2012 10:58, Rory Campbell-Lange wrote:\n> 1U chassis with 8 2.5\" disk bays\n> 2x Intel Xeon E5630 Quad-Core / 4x 2.53GHz / 12MB cache\n> 8 channel Areca ARC-1880i (PCI Express x8 card)\n> presumably with BBU (can't see it listed at present)\n> 2 x 300GB SAS 2.5\" disks for operating system\n> (Possibly also 300GB SATA VelociRaptor/10K RPM/32MB cache \n> RAID 1\n> 4 x 300GB SAS 2.5\" storage disks\n> RAID 10\n> 48.0GB DDR3 1333MHz registered ECC (12x 4.0GB modules) \n> \n\nSorry, no answer for your question and a bit offtopic.\n\n\nWhy do you take SAS disks for the OS and not much cheaper SATA ones?\n\n\nIm currently trying to get some informations together on this.\n\n\nRegards,\nMichi\n\n",
"msg_date": "Sun, 04 Mar 2012 12:50:09 +0100",
"msg_from": "Michael Friedl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Sun, Mar 4, 2012 at 2:58 AM, Rory Campbell-Lange\n<[email protected]> wrote:\n> I'd be grateful for advice on specifying the new server\n>\n> We presently have one main database server which is performing well. As\n> our services expand we are thinking of bringing another database server\n> to work with it, and back each up via Postgres 9.1 streaming replication\n> each to a VM server -- at present we are doing pg_dumps twice a day and\n> using Postgres 8.4.\n>\n> The existing server is a 2 x Quad core E5420 Xeon (2.5GHz) with 8GB of\n> RAM with an LSI battery-backed RAID 10 array of 4no 10K SCSI disks,\n> providing about 230GB of usable storage, 150GB of which is on an LV\n> providing reconfigurable space for the databases which are served off an\n> XFS formatted volume.\n>\n> We presently have 90 databases using around 20GB of disk storage.\n> However the larger databases are approaching 1GB in size, so in a year I\n> imagine the disk requirement will have gone up to 40GB for the same\n> number of databases. The server also serves some web content.\n>\n> Performance is generally good, although we have a few slow running\n> queries due to poor plpgsql design. We would get faster performance, I\n> believe, by providing more RAM. Sorry -- I should have some pg_bench\n> output to share here.\n\nRAM is always a good thing, and it's cheap enough that you can throw\n32 or 64G at a machine like this pretty cheaply.\n\n> I believe our existing server together with the new server should be\n> able to serve 200--300 databases of our existing type, with around 100\n> databases on our existing server and perhaps 150 on the new one. After\n> that we would be looking to get a third database server.\n>\n> I'm presently looking at the following kit:\n>\n> 1U chassis with 8 2.5\" disk bays\n> 2x Intel Xeon E5630 Quad-Core / 4x 2.53GHz / 12MB cache\n> 8 channel Areca ARC-1880i (PCI Express x8 card)\n> presumably with BBU (can't see it listed at present)\n> 2 x 300GB SAS 2.5\" disks for operating system\n> (Possibly also 300GB SATA VelociRaptor/10K RPM/32MB cache\n> RAID 1\n> 4 x 300GB SAS 2.5\" storage disks\n> RAID 10\n> 48.0GB DDR3 1333MHz registered ECC (12x 4.0GB modules)\n>\n> My major question about this chassis, which is 1U, is that it only takes\n> 2.5\" disks, and presently the supplier does not show 15K SAS disk\n> options. Assuming that I can get the BBU for the Areca card, and that\n> 15K SAS disks are available, I'd be grateful for comments on this\n> configuration.\n\nThe 15k RPM disks aren't that big of a deal unless you're pushing the\nbleeding edge on a transactional system. I'm gonna take a wild guess\nthat you're not doing heavy transactions, in which case, the BBU on\nthe areca is the single most important thing for you to get for good\nperformance. The areca 1880 is a great controller and is much much\neasier to configure than the LSI. Performance wise it's one of the\nfastest DAS controllers made.\n\nIf the guys you're looking at getting this from can't do custom\norders, find a white box dealer who can, like www.aberdeeninc.com. It\nmight not be on their site, but they can build dang near anything you\nwant.\n",
"msg_date": "Sun, 4 Mar 2012 07:19:33 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 04/03/12, Scott Marlowe ([email protected]) wrote:\n> On Sun, Mar 4, 2012 at 2:58 AM, Rory Campbell-Lange\n> <[email protected]> wrote:\n\n> > [About existing server...] We would get faster performance, I\n> > believe, by providing more RAM. Sorry -- I should have some pg_bench\n> > output to share here.\n> \n> RAM is always a good thing, and it's cheap enough that you can throw\n> 32 or 64G at a machine like this pretty cheaply.\n\nThanks for your note.\n\n> > � �1U chassis with 8 2.5\" disk bays\n> > � �2x Intel Xeon E5630 Quad-Core / 4x 2.53GHz / 12MB cache\n> > � �8 channel Areca ARC-1880i (PCI Express x8 card)\n> > � � �presumably with BBU (can't see it listed at present)\n> > � �2 x 300GB SAS �2.5\" disks for operating system\n> > � � �(Possibly also 300GB SATA VelociRaptor/10K RPM/32MB cache\n> > � � �RAID 1\n> > � �4 x 300GB SAS �2.5\" storage disks\n> > � � �RAID 10\n> > � �48.0GB DDR3 1333MHz registered ECC (12x 4.0GB modules)\n> >\n> > My major question about this chassis, which is 1U, is that it only takes\n> > 2.5\" disks, and presently the supplier does not show 15K SAS disk\n> > options. Assuming that I can get the BBU for the Areca card, and that\n> > 15K SAS disks are available, I'd be grateful for comments on this\n> > configuration.\n> \n> The 15k RPM disks aren't that big of a deal unless you're pushing the\n> bleeding edge on a transactional system. I'm gonna take a wild guess\n> that you're not doing heavy transactions, in which case, the BBU on\n> the areca is the single most important thing for you to get for good\n> performance. The areca 1880 is a great controller and is much much\n> easier to configure than the LSI. Performance wise it's one of the\n> fastest DAS controllers made.\n\nWe do have complex transactions, but I haven't benchmarked the\nperformance so I can't describe it. Few of the databases are at the many\nmillion row size at the moment, and we are moving to an agressive scheme\nof archiving old data, so we hope to keep things fast.\n\nHowever I thought 15k disks were a pre-requisite for a fast database\nsystem, if one can afford them? I assume if all else is equal the 1880\ncontroller will run 20-40% faster with 15k disks in a write-heavy\napplication. Also I would be grateful to learn if there is a good reason\nnot to use 2.5\" SATA disks.\n\n> If the guys you're looking at getting this from can't do custom\n> orders, find a white box dealer who can, like www.aberdeeninc.com. It\n> might not be on their site, but they can build dang near anything you\n> want.\n\nThanks for the note about Aberdeen. I've seen the advertisements, but\nnot tried them yet.\n\nThanks for your comments\nRory\n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Sun, 4 Mar 2012 18:36:30 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 03/04/2012 03:58 AM, Rory Campbell-Lange wrote:\n> I'd be grateful for advice on specifying the new server\n>\n> providing about 230GB of usable storage, 150GB of which is on an LV\n> providing reconfigurable space for the databases which are served off an\n> XFS formatted volume.\n>\n\nDo you mean LVM? I've heard that LVM limits IO, so if you want full speed you might wanna drop LVM. (And XFS supports increasing fs size, and when are you ever really gonna want to decrease fs size?).\n\n-Andy\n",
"msg_date": "Sun, 04 Mar 2012 13:45:38 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Sun, Mar 4, 2012 at 11:36 AM, Rory Campbell-Lange\n<[email protected]> wrote:\n> On 04/03/12, Scott Marlowe ([email protected]) wrote:\n\n>> The 15k RPM disks aren't that big of a deal unless you're pushing the\n>> bleeding edge on a transactional system. I'm gonna take a wild guess\n>> that you're not doing heavy transactions, in which case, the BBU on\n>> the areca is the single most important thing for you to get for good\n>> performance. The areca 1880 is a great controller and is much much\n>> easier to configure than the LSI. Performance wise it's one of the\n>> fastest DAS controllers made.\n>\n> We do have complex transactions, but I haven't benchmarked the\n> performance so I can't describe it.\n\nYeah try to get a measurement of how many transactions per second\nyou're running at peak load, and if you're currently IO bound or CPU\nbound.\n\n> Few of the databases are at the many\n> million row size at the moment, and we are moving to an agressive scheme\n> of archiving old data, so we hope to keep things fast.\n\nThe key here is that your whole db can fit into memory. 48G is\ncutting it close if you're figuring on being at 40G in a year. I'd\nspec it out with 96G to start. That way if you want to set work_mem\nto 8 or 16M you can without worrying about running the machine out of\nmemory / scramming your OS file system cache with a few large queries\netc.\n\n> However I thought 15k disks were a pre-requisite for a fast database\n> system, if one can afford them?\n\nThe heads have to seek, settle and THEN you ahve to wait for the\nplatters to rotate under the head i.e. latency.\n\n> I assume if all else is equal the 1880\n> controller will run 20-40% faster with 15k disks in a write-heavy\n> application.\n> Also I would be grateful to learn if there is a good reason\n> not to use 2.5\" SATA disks.\n\nThe 10k 2.5 seagate drives have combined seek and latency figures of\nabout 7ms, while the15k 2.5 seagate drives have a combined time of\nabout 5ms. Even the fastest 3.5\" seagates average 6ms average seek\ntime, but with short stroking can get down to 4 or 5.\n\nNow all of this becomes moot if you compare them to SSDs, where the\nseek settle time is measured in microseconds or lower. The fastest\nspinning drive will look like a garbage truck next to the formula one\ncar that is the SSD. Until recently incompatabilites with RAID\ncontrollers and firmware bugs kept most SSDs out of the hosting\ncenter, or made the ones you could get horrifically expensive. The\nnewest generations of SSDs though seem to be working pretty well.\n\n>> If the guys you're looking at getting this from can't do custom\n>> orders, find a white box dealer who can, like www.aberdeeninc.com. It\n>> might not be on their site, but they can build dang near anything you\n>> want.\n>\n> Thanks for the note about Aberdeen. I've seen the advertisements, but\n> not tried them yet.\n\nThere's lots of others to choose from. In the past I've gotten\nfantastic customer service from aberdeen, and they've never steered me\nwrong. I've had my sales guy simply refuse to sell me a particular\ndrive because the failure rate was too high in the field, etc. They\ncross ship RAID cards overnight, and can build truly huge DAS servers\nif you need them. Like a lot of white box guys they specialize more\nin large storage arrays and virualization hardware, but there's a lot\nof similarity between that class of machine and a db server.\n",
"msg_date": "Sun, 4 Mar 2012 12:49:57 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Sun, Mar 4, 2012 at 12:45 PM, Andy Colson <[email protected]> wrote:\n> On 03/04/2012 03:58 AM, Rory Campbell-Lange wrote:\n>>\n>> I'd be grateful for advice on specifying the new server\n>>\n>> providing about 230GB of usable storage, 150GB of which is on an LV\n>> providing reconfigurable space for the databases which are served off an\n>> XFS formatted volume.\n>>\n>\n> Do you mean LVM? I've heard that LVM limits IO, so if you want full speed\n> you might wanna drop LVM. (And XFS supports increasing fs size, and when\n> are you ever really gonna want to decrease fs size?).\n\nIt certainly did in the past, I don't know if anyone's done any\nconclusive testing on in recently, but circa 2005 to 2008 we were\nrunning RHEL 4 and LVM limited the machine by quite a bit, with max\nsequential throughput dropping off by 50% or more on bigger ios\nsubsystems. I.e. a 600MB/s system would be lucky to hit 300MB/s with\na LV on top.\n",
"msg_date": "Sun, 4 Mar 2012 12:51:41 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 04/03/12, Scott Marlowe ([email protected]) wrote:\n> On Sun, Mar 4, 2012 at 11:36 AM, Rory Campbell-Lange\n> <[email protected]> wrote:\n> > On 04/03/12, Scott Marlowe ([email protected]) wrote:\n...\n[Description of system with 2 * 4 core Xeons, 8GB RAM, LSI card with\n4*15K SCSI drives in R10. We are looking for a new server to partner\nwith this one.]\n...\n\n> > We do have complex transactions, but I haven't benchmarked the\n> > performance so I can't describe it.\n> \n> Yeah try to get a measurement of how many transactions per second\n> you're running at peak load, and if you're currently IO bound or CPU\n> bound.\n\nOur existing server rarely goes above 7% sustained IO according to SAR.\nSimilarly, CPU at peak times is at 5-7% on the SAR average (across all 8\ncores). I'm not clear on how to read the memory stats, but the average\nkbcommit value for this morning's work is 12420282 which (assuming it\nmeans about 12GB memory) is 4GB more than physical RAM. However the\nsystem never swaps, probably due to our rather parsimonious postgres\nmemory settings.\n\n> > Few of the databases are at the many\n> > million row size at the moment, and we are moving to an agressive scheme\n> > of archiving old data, so we hope to keep things fast.\n> \n> The key here is that your whole db can fit into memory. 48G is\n> cutting it close if you're figuring on being at 40G in a year. I'd\n> spec it out with 96G to start. That way if you want to set work_mem\n> to 8 or 16M you can without worrying about running the machine out of\n> memory / scramming your OS file system cache with a few large queries\n> etc.\n\nThanks for this excellent point.\n\n> > However I thought 15k disks were a pre-requisite for a fast database\n> > system, if one can afford them?\n> \n> The heads have to seek, settle and THEN you ahve to wait for the\n> platters to rotate under the head i.e. latency.\n> \n> > I assume if all else is equal the 1880\n> > controller will run 20-40% faster with 15k disks in a write-heavy\n> > application.\n> > Also I would be grateful to learn if there is a good reason\n> > not to use 2.5\" SATA disks.\n> \n> The 10k 2.5 seagate drives have combined seek and latency figures of\n> about 7ms, while the15k 2.5 seagate drives have a combined time of\n> about 5ms. Even the fastest 3.5\" seagates average 6ms average seek\n> time, but with short stroking can get down to 4 or 5.\n> \n> Now all of this becomes moot if you compare them to SSDs, where the\n> seek settle time is measured in microseconds or lower. The fastest\n> spinning drive will look like a garbage truck next to the formula one\n> car that is the SSD. Until recently incompatabilites with RAID\n> controllers and firmware bugs kept most SSDs out of the hosting\n> center, or made the ones you could get horrifically expensive. The\n> newest generations of SSDs though seem to be working pretty well.\n\n From your comments it appears there are 3 options:\n\n 1. Card + BBU + SAS disks (10K/15K doesnt matter) + lots of RAM\n 2. Card + BBU + Raptors + lots of RAM\n 3. SSDs + lots of RAM\n\nIs this correct? If my databases are unlikely to be IO bound might it not\nbe better to go for cheaper drive subsystems (i.e. option 2) + lots of\nRAM, or alternatively SSDs based on the fact that we don't require much\nstorage space? 
I am unclear of what the options are on the\nhighly-reliable SSD front, and how to RAID SSD systems.\n\nAn ancillary point is that our systems are designed to have more rather\nthan fewer databases so that we can scale easily horizontally.\n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Mon, 5 Mar 2012 12:26:12 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange <\[email protected]> wrote:\n\n> We do have complex transactions, but I haven't benchmarked the\n> performance so I can't describe it. Few of the databases are at the many\n> million row size at the moment, and we are moving to an agressive scheme\n> of archiving old data, so we hope to keep things fast.\n>\n> However I thought 15k disks were a pre-requisite for a fast database\n> system, if one can afford them? I assume if all else is equal the 1880\n> controller will run 20-40% faster with 15k disks in a write-heavy\n> application. Also I would be grateful to learn if there is a good reason\n> not to use 2.5\" SATA disks.\n>\n\nWithout those benchmarks, you can't really say what \"fast\" means. There\nare many bottlenecks that will limit your database's performance; the\ndisk's spinning rate is just one of them. Memory size, memory bandwidth,\nCPU, CPU cache size and speed, the disk I/O bandwidth in and out, the disk\nRPM, the presence of a BBU controller ... any of these can be the\nbottleneck. If you focus on the disk's RPM, you may be fixing a bottleneck\nthat you'll never reach.\n\nWe 12 inexpensive 7K SATA drives with an LSI/3Ware 9650SE and a BBU, and\nhave been very impressed by the performance. 8 drives in RAID10, two in\nRAID1 for the WAL, one for Linux and one spare. This is on an 8-core\nsystem with 12 GB memory:\n\npgbench -i -s 100 -U test\npgbench -U test -c ... -t ...\n\n-c -t TPS\n5 20000 3777\n10 10000 2622\n20 5000 3759\n30 3333 5712\n40 2500 5953\n50 2000 6141\n\nCraig\n\nOn Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange <[email protected]> wrote:\n\nWe do have complex transactions, but I haven't benchmarked the\nperformance so I can't describe it. Few of the databases are at the many\nmillion row size at the moment, and we are moving to an agressive scheme\nof archiving old data, so we hope to keep things fast.\n\nHowever I thought 15k disks were a pre-requisite for a fast database\nsystem, if one can afford them? I assume if all else is equal the 1880\ncontroller will run 20-40% faster with 15k disks in a write-heavy\napplication. Also I would be grateful to learn if there is a good reason\nnot to use 2.5\" SATA disks.Without those benchmarks, you can't really say what \"fast\" means. There are many bottlenecks that will limit your database's performance; the disk's spinning rate is just one of them. Memory size, memory bandwidth, CPU, CPU cache size and speed, the disk I/O bandwidth in and out, the disk RPM, the presence of a BBU controller ... any of these can be the bottleneck. If you focus on the disk's RPM, you may be fixing a bottleneck that you'll never reach.\nWe 12 inexpensive 7K SATA drives with an LSI/3Ware 9650SE and a BBU, and have been very impressed by the performance. 8 drives in RAID10, two in RAID1 for the WAL, one for Linux and one spare. This is on an 8-core system with 12 GB memory:\npgbench -i -s 100 -U testpgbench -U test -c ... -t ...\n-c -t TPS5 20000 3777\n10 10000 262220 5000 3759\n30 3333 571240 2500 5953\n50 2000 6141Craig",
"msg_date": "Mon, 5 Mar 2012 08:56:08 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 05/03/12, Craig James ([email protected]) wrote:\n> On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange <\n> [email protected]> wrote:\n> \n> > We do have complex transactions, but I haven't benchmarked the\n> > performance so I can't describe it. Few of the databases are at the many\n> > million row size at the moment, and we are moving to an agressive scheme\n> > of archiving old data, so we hope to keep things fast.\n> >\n> > However I thought 15k disks were a pre-requisite for a fast database\n> > system, if one can afford them? I assume if all else is equal the 1880\n> > controller will run 20-40% faster with 15k disks in a write-heavy\n> > application. Also I would be grateful to learn if there is a good reason\n> > not to use 2.5\" SATA disks.\n> \n> Without those benchmarks, you can't really say what \"fast\" means. There\n> are many bottlenecks that will limit your database's performance; the\n> disk's spinning rate is just one of them. Memory size, memory bandwidth,\n> CPU, CPU cache size and speed, the disk I/O bandwidth in and out, the disk\n> RPM, the presence of a BBU controller ... any of these can be the\n> bottleneck. If you focus on the disk's RPM, you may be fixing a bottleneck\n> that you'll never reach.\n> \n> We 12 inexpensive 7K SATA drives with an LSI/3Ware 9650SE and a BBU, and\n> have been very impressed by the performance. 8 drives in RAID10, two in\n> RAID1 for the WAL, one for Linux and one spare. This is on an 8-core\n> system with 12 GB memory:\n> \n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... -t ...\n> \n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n\nThanks for this quick guide to using pgbenc. My 4-year old SCSI server\nwith 4 RAID10 disks behind an LSI card achieved the following on a\ncontended system:\n\n-c -t TPS\n5 20000 446\n10 10000 542\n20 5000 601\n30 3333 647\n\nThese results seem pretty lousy in comparison to yours. Interesting. \n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Mon, 5 Mar 2012 21:59:00 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 04/03/12, Rory Campbell-Lange ([email protected]) wrote:\n> I'd be grateful for advice on specifying a new server\n> \n...\n\n> The existing server is a 2 x Quad core E5420 Xeon (2.5GHz) with 8GB of\n> RAM with an LSI battery-backed RAID 10 array of 4no 10K SCSI disks,\n> providing about 230GB of usable storage, 150GB of which is on an LV\n> providing reconfigurable space for the databases which are served off an\n> XFS formatted volume. \n\nIn conversation on the list I've established that our current server\n(while fine for our needs) isn't performing terribly well. It could do\nwith more RAM and the disk IO seems slow. \n\nThat said, I'm keen to buy a new server to improve on the current\nperformance, so I've taken the liberty of replying here to my initial\nmail to ask specifically about new server recommendations. The initial\nplan was to share some of the load between the current and new server,\nand to buy something along the following lines:\n\n> 1U chassis with 8 2.5\" disk bays\n> 2x Intel Xeon E5630 Quad-Core / 4x 2.53GHz / 12MB cache\n> 8 channel Areca ARC-1880i (PCI Express x8 card)\n> presumably with BBU (can't see it listed at present)\n> 2 x 300GB SAS 2.5\" disks for operating system\n> (Possibly also 300GB SATA VelociRaptor/10K RPM/32MB cache \n> RAID 1\n> 4 x 300GB SAS 2.5\" storage disks\n> RAID 10\n> 48.0GB DDR3 1333MHz registered ECC (12x 4.0GB modules) \n\nHowever, after comments on the list, I realise I could get two servers\nwith the following specs for the same price as the above:\n\n 2x Intel Xeon E5620 Quad-Core / 4x 2.40GHz / 12MB cache\n 48.0GB DDR3 1066MHz registered ECC\n 4 channel Areca ARC-1212 (PCI Express x4 card) + BBU\n 4 x WD Raptors in RAID 10 (in 3.5\" adapters)\n\nIn other words, for GBP 5k I can get two servers that may better meet\nbetween them my requirements (lots of memory, reasonably fast disks)\nthan a single server. A salient point is that individual databases are\ncurrently less than 1GB in size but will grow perhaps to be 2GB over the\ncoming 18 months. The aim would be to contain all of the databases in\nmemory on each server. \n\nI'd be very grateful for comments on this strategy.\n\nRory\n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Tue, 6 Mar 2012 11:12:19 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 03/04/2012 03:50 AM, Michael Friedl wrote:\n> Hey!\n>\n> On 04.03.2012 10:58, Rory Campbell-Lange wrote:\n>> 1U chassis with 8 2.5\" disk bays\n>> 2x Intel Xeon E5630 Quad-Core / 4x 2.53GHz / 12MB cache\n>> 8 channel Areca ARC-1880i (PCI Express x8 card)\n>> presumably with BBU (can't see it listed at present)\n>> 2 x 300GB SAS 2.5\" disks for operating system\n>> (Possibly also 300GB SATA VelociRaptor/10K RPM/32MB cache\n>> RAID 1\n>> 4 x 300GB SAS 2.5\" storage disks\n>> RAID 10\n>> 48.0GB DDR3 1333MHz registered ECC (12x 4.0GB modules)\n>>\n> Sorry, no answer for your question and a bit offtopic.\n>\n>\n> Why do you take SAS disks for the OS and not much cheaper SATA ones?\n>\n>\n>\n\nHere's Intel's (very general) take. Your OS disks may not justify SAS on \nperformance alone but other aspects may sway you.\nhttp://www.intel.com/support/motherboards/server/sb/CS-031831.htm\n\nCheers,\nSteve\n",
"msg_date": "Tue, 06 Mar 2012 16:32:30 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Mon, Mar 5, 2012 at 10:56 AM, Craig James <[email protected]> wrote:\n> On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange\n> <[email protected]> wrote:\n>>\n>> We do have complex transactions, but I haven't benchmarked the\n>> performance so I can't describe it. Few of the databases are at the many\n>> million row size at the moment, and we are moving to an agressive scheme\n>> of archiving old data, so we hope to keep things fast.\n>>\n>> However I thought 15k disks were a pre-requisite for a fast database\n>> system, if one can afford them? I assume if all else is equal the 1880\n>> controller will run 20-40% faster with 15k disks in a write-heavy\n>> application. Also I would be grateful to learn if there is a good reason\n>> not to use 2.5\" SATA disks.\n>\n>\n> Without those benchmarks, you can't really say what \"fast\" means. There are\n> many bottlenecks that will limit your database's performance; the disk's\n> spinning rate is just one of them. Memory size, memory bandwidth, CPU, CPU\n> cache size and speed, the disk I/O bandwidth in and out, the disk RPM, the\n> presence of a BBU controller ... any of these can be the bottleneck. If you\n> focus on the disk's RPM, you may be fixing a bottleneck that you'll never\n> reach.\n>\n> We 12 inexpensive 7K SATA drives with an LSI/3Ware 9650SE and a BBU, and\n> have been very impressed by the performance. 8 drives in RAID10, two in\n> RAID1 for the WAL, one for Linux and one spare. This is on an 8-core system\n> with 12 GB memory:\n>\n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... -t ...\n>\n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n\n\nthose numbers are stupendous for 8 drive sata. how much shared\nbuffers do you have?\n\nmerlin\n",
"msg_date": "Wed, 7 Mar 2012 14:18:23 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Mon, Mar 5, 2012 at 9:56 AM, Craig James <[email protected]> wrote:\n> On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange\n> <[email protected]> wrote:\n>>\n>> We do have complex transactions, but I haven't benchmarked the\n>> performance so I can't describe it. Few of the databases are at the many\n>> million row size at the moment, and we are moving to an agressive scheme\n>> of archiving old data, so we hope to keep things fast.\n>>\n>> However I thought 15k disks were a pre-requisite for a fast database\n>> system, if one can afford them? I assume if all else is equal the 1880\n>> controller will run 20-40% faster with 15k disks in a write-heavy\n>> application. Also I would be grateful to learn if there is a good reason\n>> not to use 2.5\" SATA disks.\n>\n>\n> Without those benchmarks, you can't really say what \"fast\" means. There are\n> many bottlenecks that will limit your database's performance; the disk's\n> spinning rate is just one of them. Memory size, memory bandwidth, CPU, CPU\n> cache size and speed, the disk I/O bandwidth in and out, the disk RPM, the\n> presence of a BBU controller ... any of these can be the bottleneck. If you\n> focus on the disk's RPM, you may be fixing a bottleneck that you'll never\n> reach.\n>\n> We 12 inexpensive 7K SATA drives with an LSI/3Ware 9650SE and a BBU, and\n> have been very impressed by the performance. 8 drives in RAID10, two in\n> RAID1 for the WAL, one for Linux and one spare. This is on an 8-core system\n> with 12 GB memory:\n>\n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... -t ...\n>\n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n\nJust wondering what your -c -t etc settings were, if the tests were\nlong enough to fill up your RAID controllers write cache or not.\n",
"msg_date": "Wed, 7 Mar 2012 13:45:45 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Wed, Mar 7, 2012 at 12:18 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Mar 5, 2012 at 10:56 AM, Craig James <[email protected]>\n> wrote:\n> > On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange\n> > <[email protected]> wrote:\n> >>\n> >> We do have complex transactions, but I haven't benchmarked the\n> >> performance so I can't describe it. Few of the databases are at the many\n> >> million row size at the moment, and we are moving to an agressive scheme\n> >> of archiving old data, so we hope to keep things fast.\n> >>\n> >> However I thought 15k disks were a pre-requisite for a fast database\n> >> system, if one can afford them? I assume if all else is equal the 1880\n> >> controller will run 20-40% faster with 15k disks in a write-heavy\n> >> application. Also I would be grateful to learn if there is a good reason\n> >> not to use 2.5\" SATA disks.\n> >\n> >\n> > Without those benchmarks, you can't really say what \"fast\" means. There\n> are\n> > many bottlenecks that will limit your database's performance; the disk's\n> > spinning rate is just one of them. Memory size, memory bandwidth, CPU,\n> CPU\n> > cache size and speed, the disk I/O bandwidth in and out, the disk RPM,\n> the\n> > presence of a BBU controller ... any of these can be the bottleneck. If\n> you\n> > focus on the disk's RPM, you may be fixing a bottleneck that you'll never\n> > reach.\n> >\n> > We 12 inexpensive 7K SATA drives with an LSI/3Ware 9650SE and a BBU, and\n> > have been very impressed by the performance. 8 drives in RAID10, two in\n> > RAID1 for the WAL, one for Linux and one spare. This is on an 8-core\n> system\n> > with 12 GB memory:\n> >\n> > pgbench -i -s 100 -U test\n> > pgbench -U test -c ... -t ...\n> >\n> > -c -t TPS\n> > 5 20000 3777\n> > 10 10000 2622\n> > 20 5000 3759\n> > 30 3333 5712\n> > 40 2500 5953\n> > 50 2000 6141\n>\n>\n> those numbers are stupendous for 8 drive sata. how much shared\n> buffers do you have?\n>\n\nIt's actually 10 disks when you include the RAID1 for the WAL. Here are\nthe non-default performance parameters that might be of interest.\n\nshared_buffers = 1000MB\nwork_mem = 128MB\nsynchronous_commit = off\nfull_page_writes = off\nwal_buffers = 256kB\ncheckpoint_segments = 30\n\nI also do this at boot time (on a 12GB system):\n\necho 4294967296 >/proc/sys/kernel/shmmax # 4 GB shared memory\necho 4096 >/proc/sys/kernel/shmmni\necho 1572864 >/proc/sys/kernel/shmall # 6 GB max shared mem\n(block size is 4096 bytes)\n\nWe have two of these machines, and their performance is almost identical.\nOne isn't doing much yet, so if you're interested in other benchmarks (that\ndon't take me too long to run), let me know.\n\nCraig\n\nOn Wed, Mar 7, 2012 at 12:18 PM, Merlin Moncure <[email protected]> wrote:\nOn Mon, Mar 5, 2012 at 10:56 AM, Craig James <[email protected]> wrote:\n> On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange\n> <[email protected]> wrote:\n>>\n>> We do have complex transactions, but I haven't benchmarked the\n>> performance so I can't describe it. Few of the databases are at the many\n>> million row size at the moment, and we are moving to an agressive scheme\n>> of archiving old data, so we hope to keep things fast.\n>>\n>> However I thought 15k disks were a pre-requisite for a fast database\n>> system, if one can afford them? I assume if all else is equal the 1880\n>> controller will run 20-40% faster with 15k disks in a write-heavy\n>> application. 
Also I would be grateful to learn if there is a good reason\n>> not to use 2.5\" SATA disks.\n>\n>\n> Without those benchmarks, you can't really say what \"fast\" means. There are\n> many bottlenecks that will limit your database's performance; the disk's\n> spinning rate is just one of them. Memory size, memory bandwidth, CPU, CPU\n> cache size and speed, the disk I/O bandwidth in and out, the disk RPM, the\n> presence of a BBU controller ... any of these can be the bottleneck. If you\n> focus on the disk's RPM, you may be fixing a bottleneck that you'll never\n> reach.\n>\n> We 12 inexpensive 7K SATA drives with an LSI/3Ware 9650SE and a BBU, and\n> have been very impressed by the performance. 8 drives in RAID10, two in\n> RAID1 for the WAL, one for Linux and one spare. This is on an 8-core system\n> with 12 GB memory:\n>\n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... -t ...\n>\n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n\n\nthose numbers are stupendous for 8 drive sata. how much shared\nbuffers do you have?It's actually 10 disks when you include the RAID1 for the WAL. Here are the non-default performance parameters that might be of interest.shared_buffers = 1000MB\nwork_mem = 128MBsynchronous_commit = off full_page_writes = offwal_buffers = 256kBcheckpoint_segments = 30I also do this at boot time (on a 12GB system):echo 4294967296 >/proc/sys/kernel/shmmax # 4 GB shared memory\necho 4096 >/proc/sys/kernel/shmmniecho 1572864 >/proc/sys/kernel/shmall # 6 GB max shared mem (block size is 4096 bytes)We have two of these machines, and their performance is almost identical. One isn't doing much yet, so if you're interested in other benchmarks (that don't take me too long to run), let me know.\nCraig",
"msg_date": "Wed, 7 Mar 2012 13:07:41 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 03/07/2012 03:07 PM, Craig James wrote:\n\n> echo 4294967296 >/proc/sys/kernel/shmmax # 4 GB shared memory\n> echo 4096 >/proc/sys/kernel/shmmni\n> echo 1572864 >/proc/sys/kernel/shmall # 6 GB max shared mem (block size\n> is 4096 bytes)\n\nFor what it's worth, you can just make these entries in your \n/etc/sysctl.conf file and it'll do the same thing a little more cleanly:\n\nvm.shmmax = 4294967296\nvm.shmmni = 4096\nvm.shmall = 1572864\n\nTo commit changes made this way:\n\nsysctl -p\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 7 Mar 2012 16:24:25 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Wed, Mar 7, 2012 at 10:18 PM, Merlin Moncure <[email protected]> wrote:\n> those numbers are stupendous for 8 drive sata. how much shared\n> buffers do you have?\n\nCouple of things to notice:\n1) The benchmark can run fully in memory, although not 100% in shared_buffers.\n2) These are 100k transaction runs, meaning that probably no\ncheckpointing was going on.\n3) Given the amount of memory in the server, with dirty flush\nsettings the OS will do mostly sequential writes.\n\nJust ran a quick test. With synchronous_commit=off to simulate a BBU I\nhave no trouble hitting 11k tps on a single SATA disk. Seems to be\nmostly CPU bound on my workstation (Intel i5 2500K @ 3.9GHz, 16GB\nmemory), dirty writes stay in OS buffers, about 220tps/6MBps of\ntraffic to the xlog's, checkpoint dumps everything to OS cache which\nis then flushed at about 170MB/s (which probably would do nasty things\nto latency in real world cases). Unlogged tables are give me about 12k\ntps which seems to confirm mostly CPU bound.\n\nSo regardless if the benchmark is a good representation of the target\nworkload or not, it definitely isn't benchmarking the IO system.\n\nAnts Aasma\n",
"msg_date": "Thu, 8 Mar 2012 14:43:10 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 08/03/12, Ants Aasma ([email protected]) wrote:\n\n> So regardless if the benchmark is a good representation of the target\n> workload or not, it definitely isn't benchmarking the IO system.\n\nAt the risk of hijacking the thread I started, I'd be grateful for\ncomments on the following system IO results. Rather than using pgbench\n(which Ants responded about above), this uses fio. Our workload is\nseveral small databases totalling less than 40GB of disk space. The\nproposed system has 48GB RAM, 2 * quad core E5620 @ 2.40GHz and 4 WD\nRaptors behind an LSI SAS card. Is this IO respectable?\n\nLSI MegaRAID SAS 9260-8i\nFirmware: 12.12.0-0090\nKernel: 2.6.39.4\nHard disks: 4x WD6000BLHX\nTest done on 256GB volume\nBS = blocksize in bytes\n\n\nRAID 10\n--------------------------------------\nRead sequential\n\n BS MB/s IOPs\n 512 0129.26 264730.80\n 1024 0229.75 235273.40\n 4096 0363.14 092965.50\n 16384 0475.02 030401.50\n 65536 0472.79 007564.65\n131072 0428.15 003425.20\n--------------------------------------\nWrite sequential\n\n BS MB/s IOPs\n 512 0036.08 073908.00\n 1024 0065.61 067192.60\n 4096 0170.15 043560.40\n 16384 0219.80 014067.57\n 65536 0240.05 003840.91\n131072 0243.96 001951.74\n--------------------------------------\nRandom read\n\n BS MB/s IOPs\n 512 0001.50 003077.20\n 1024 0002.91 002981.40\n 4096 0011.59 002968.30\n 16384 0044.50 002848.28\n 65536 0156.96 002511.41\n131072 0170.65 001365.25\n--------------------------------------\nRandom write\n\n BS MB/s IOPs\n 512 0000.53 001103.60\n 1024 0001.15 001179.20\n 4096 0004.43 001135.30\n 16384 0017.61 001127.56\n 65536 0061.39 000982.39\n131072 0079.27 000634.16\n--------------------------------------\n\n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Thu, 8 Mar 2012 13:27:37 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On Thu, Mar 8, 2012 at 4:43 AM, Ants Aasma <[email protected]> wrote:\n> On Wed, Mar 7, 2012 at 10:18 PM, Merlin Moncure <[email protected]> wrote:\n>> those numbers are stupendous for 8 drive sata. how much shared\n>> buffers do you have?\n>\n> Couple of things to notice:\n> 1) The benchmark can run fully in memory, although not 100% in shared_buffers.\n> 2) These are 100k transaction runs, meaning that probably no\n> checkpointing was going on.\n> 3) Given the amount of memory in the server, with dirty flush\n> settings the OS will do mostly sequential writes.\n>\n> Just ran a quick test. With synchronous_commit=off to simulate a BBU I\n> have no trouble hitting 11k tps on a single SATA disk.\n\nfsync=off might be a better way to simulate a BBU.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 8 Mar 2012 08:03:42 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "Wednesday, March 7, 2012, 11:24:25 PM you wrote:\n\n> On 03/07/2012 03:07 PM, Craig James wrote:\n\n>> echo 4294967296 >/proc/sys/kernel/shmmax # 4 GB shared memory\n>> echo 4096 >/proc/sys/kernel/shmmni\n>> echo 1572864 >/proc/sys/kernel/shmall # 6 GB max shared mem (block size\n>> is 4096 bytes)\n\n> For what it's worth, you can just make these entries in your \n> /etc/sysctl.conf file and it'll do the same thing a little more cleanly:\n\n> vm.shmmax = 4294967296\n> vm.shmmni = 4096\n> vm.shmall = 1572864\n\nShouldn't that be:\n\nkernel.shmmax = 4294967296\nkernel.shmmni = 4096\nkernel.shmall = 1572864\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Thu, 8 Mar 2012 17:15:58 +0100",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
},
{
"msg_contents": "On 03/08/2012 10:15 AM, Jochen Erwied wrote:\n\n> Shouldn't that be:\n>\n> kernel.shmmax = 4294967296\n> kernel.shmmni = 4096\n> kernel.shmall = 1572864\n\nOops! Yes. That's definitely it. I'm too accustomed to having those set \nautomatically, and then setting these:\n\nvm.swappiness = 0\nvm.dirty_background_ratio = 1\nvm.dirty_ratio = 10\n\nSorry about that!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 8 Mar 2012 10:18:30 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice sought : new database server"
}
] |
[
{
"msg_contents": "We have an optimizer problem regarding partitioned tables on 8.4.11.\n\nWe started partitioning a large table containing approx. 1 billion records.\n\nSo far, there is only the master table, called edifactmsgpart (which is empty) and 1 partition,\ncalled edifactmsgpart_pact.\nThere is a bigint column called emg_id with a btree-index on it.\n\n\\d edifactmsgpart_pact\n...\n... \"emp_emg_ept_i_pact\" btree (emg_id, ept_id)\n...\n\ngdw=> select relname, reltuples from pg_class where relname in( 'edifactmsgpart',\n'edifactmsgpart_pact' );\n relname | reltuples\n---------------------+-------------\n edifactmsgpart_pact | 1.03102e+09\n edifactmsgpart | 0\n\n\na select on the big partition yields a decent plan and performs as expected, lasting only a fraction\nof a second.\n\ngdw=> explain select min( emg_id ) from edifactmsgpart_pact;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Result (cost=2.05..2.06 rows=1 width=0)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..2.05 rows=1 width=8)\n -> Index Scan using emp_emg_ept_i_pact on edifactmsgpart_pact (cost=0.00..2109171123.79\nrows=1031020672 width=8)\n Filter: (emg_id IS NOT NULL)\n\n\ngdw=> select min( emg_id ) from edifactmsgpart_pact;\n min\n-----------\n 500008178\n\n=>>> very fast.\n\n\na select on the partitioned table, however, yields a... shall we call it \"sub-optimal\" plan:\n\ngdw=> explain select min( emg_id ) from edifactmsgpart;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Aggregate (cost=23521692.03..23521692.04 rows=1 width=8)\n -> Append (cost=0.00..20944139.42 rows=1031021042 width=8)\n -> Seq Scan on edifactmsgpart (cost=0.00..13.70 rows=370 width=8)\n -> Seq Scan on edifactmsgpart_pact edifactmsgpart (cost=0.00..20944125.72 rows=1031020672\nwidth=8)\n\nI would expect this to run half an hour or so, completely overloading the server...\n\nAny Ideas?\n\nKind regards\n Marc\n\n",
"msg_date": "Mon, 05 Mar 2012 16:11:01 +0100",
"msg_from": "Marc Schablewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning / Strange optimizer behaviour"
},
{
"msg_contents": "On 5 Březen 2012, 16:11, Marc Schablewski wrote:\n> We have an optimizer problem regarding partitioned tables on 8.4.11.\n...\n> gdw=> explain select min( emg_id ) from edifactmsgpart;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=23521692.03..23521692.04 rows=1 width=8)\n> -> Append (cost=0.00..20944139.42 rows=1031021042 width=8)\n> -> Seq Scan on edifactmsgpart (cost=0.00..13.70 rows=370\n> width=8)\n> -> Seq Scan on edifactmsgpart_pact edifactmsgpart\n> (cost=0.00..20944125.72 rows=1031020672\n> width=8)\n>\n> I would expect this to run half an hour or so, completely overloading the\n> server...\n>\n> Any Ideas?\n\nThis is a well known \"feature\" of pre-9.1 releases - it simply does not\nhandle min/max on partitioned tables well. There's even an example of a\nworkaround on the wiki:\nhttps://wiki.postgresql.org/wiki/Efficient_min/max_over_partitioned_table\n\nAnother option is to upgrade to 9.1 which handles this fine.\n\nTomas\n\n",
"msg_date": "Mon, 5 Mar 2012 16:20:22 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Strange optimizer behaviour"
},
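For reference, a hand-unrolled sketch of the idea behind that wiki workaround, written against the two tables named in this thread (the wiki page implements the same thing more generally in a function). Each scalar subquery hits one table directly, so 8.4 can use the fast index-backed min() plan on the big partition:

    SELECT min(emg_id) FROM (
        SELECT (SELECT min(emg_id) FROM ONLY edifactmsgpart)  AS emg_id
        UNION ALL
        SELECT (SELECT min(emg_id) FROM edifactmsgpart_pact)  AS emg_id
    ) AS per_partition;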
{
"msg_contents": "Thanks for pointing me to that article. I totally forgot that the postgres wiki existed.\n\nUpdating is not an option at the moment, but we'll probably do so in the future. Until then I can\nlive with the workaround.\n\nKind regards,\n Marc\n\n",
"msg_date": "Mon, 05 Mar 2012 16:44:33 +0100",
"msg_from": "Marc Schablewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Strange optimizer behaviour"
}
] |
[
{
"msg_contents": "Where I work we are starting to look at using SSDs for database server \nstorage. Despite the higher per unit cost it is quite attractive to \nreplace 6-8 SAS drives in RAID 10 by a pair of SSD in RAID 1 that will \nprobably perform better and use less power.\n\nWhich brings up the question of should it be a pair in RAID 1 or just a \nsinge drive? Traditionally this would have been a no brainer \"Of course \nyou want RAID 1 or RAID 10\"! However our experience with SSD failure \nmodes points to firmware bugs as primary source of trouble - and these \nare likely to impact both drives (nearly) simultaneously in a RAID 1 \nconfiguration. Also the other major issue to watch - flash write limit \nexhaustion - is also likely to hit at the same time for a pair of drives \nin RAID 1.\n\nOne option to get around the simultaneous firmware failure is to be to \nget 2 *similar* drives from different manufactures (e.g OCZ Vertex 32 \nand Intel 520 - both Sandforce but different firmware setup). However \nusing different manufacturers drives is a pest (e.g different smart \ncodes maintained and/or different meanings for the same codes)\n\nWhat are other folks who are using SSDs doing?\n\nCheers\n\nMark\n",
"msg_date": "Tue, 06 Mar 2012 11:37:25 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSD and RAID"
},
{
"msg_contents": "Hi, a few personal opinions (your mileage may wary ...)\n\nOn 5.3.2012 23:37, Mark Kirkwood wrote:\n> Where I work we are starting to look at using SSDs for database server\n> storage. Despite the higher per unit cost it is quite attractive to\n> replace 6-8 SAS drives in RAID 10 by a pair of SSD in RAID 1 that will\n> probably perform better and use less power.\n\nProbably -> depends on the workload. Have you performed any tests to\ncheck it will actually improve the performance? If large portion of your\nworkload is sequential (e.g. seq. scans of large tables in DSS\nenvironment etc.) then SSDs are not worth the money. For OLTP workloads\nit's a clear winner.\n\nAnyway, don't get rid of the SAS drives completely - use them for WAL.\nThis is written in sequential manner and if you use PITR then WAL is the\nmost valuable piece of data (along with the base backup), so it's\nexactly the thing you want to place on reliable devices. And if you use\na decent controller with a BBWC to absorb the fsync, then it can give as\ngood performance as SSDs ...\n\n> Which brings up the question of should it be a pair in RAID 1 or just a\n> singe drive? Traditionally this would have been a no brainer \"Of course\n> you want RAID 1 or RAID 10\"! However our experience with SSD failure\n> modes points to firmware bugs as primary source of trouble - and these\n> are likely to impact both drives (nearly) simultaneously in a RAID 1\n> configuration. Also the other major issue to watch - flash write limit\n> exhaustion - is also likely to hit at the same time for a pair of drives\n> in RAID 1.\n\nYeah, matches my experience. Generally the same rules are valid for\nspinners too (use different batches / brands to build an array), but the\nfirmware bugs are quite annoying.\n\nUsing the SAS drives for WAL may actually help you here - do a base\nbackup regularly and keep the WAL files so that you can do a recovery if\nthe SSDs fail. You won't loose any data but it takes time to do the\nrecovery.\n\nIf you can't afford the downtime, you should setup a failover machine\nanyway. And AFAIK a standby does less writes than the master. At least\nthat's what I'd do.\n\nBut those are my personal oppinions - I suppose others may disagree.\n\nkind regards\nTomas\n",
"msg_date": "Tue, 06 Mar 2012 00:03:58 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD and RAID"
},
{
"msg_contents": "On 2012-03-05 23:37, Mark Kirkwood wrote:\n> Which brings up the question of should it be a pair in RAID 1 or just \n> a singe drive? Traditionally this would have been a no brainer \"Of \n> course you want RAID 1 or RAID 10\"! However our experience with SSD \n> failure modes points to firmware bugs as primary source of trouble - \n> and these are likely to impact both drives (nearly) simultaneously in \n> a RAID 1 configuration. Also the other major issue to watch - flash \n> write limit exhaustion - is also likely to hit at the same time for a \n> pair of drives in RAID 1.\n>\n> What are other folks who are using SSDs doing?\n\nThis is exactly the reason why in a set of new hardware I'm currently \nevaluating two different brands of manufacturers for the spindles \n(behind bbwc for wal, os, archives etc) and ssds (on mb sata ports). For \nthe ssd's we've chosen the Intel 710 and OCZ Vertex 2 PRO, however that \nlast one was EOL and OCZ offered to replace it by the Deneva 2 \n(http://www.oczenterprise.com/downloads/solutions/ocz-deneva2-r-mlc-2.5in_Product_Brief.pdf). \nStill waiting for a test Deneva though.\n\nOne thing to note is that linux software raid with md doesn't support \ndiscard, which might shorten the drive's expected lifetime. To get some \nnumbers I tested the raid 1 of ssd's setup for mediawear under a \nPostgreSQL load earlier, see \nhttp://archives.postgresql.org/pgsql-general/2011-11/msg00141.php\n\n<Greg Smith imitation mode on>I would recommended that for every ssd \nconsidered for production use, test the ssd with diskchecker.pl on a \nfilesystem that's mounted the same as you would with your data (e.g. \nwith xfs or ext4 with nobarrier), and also do a mediawear test like the \none described in the linked pgsql-general threar above, especially if \nyou're chosing to run on non-enterprise marketed ssds.</>\n\nregards,\nYeb\n\nPS: we applied the same philosophy (different brands) also to \nmotherboards, io controllers and memory, but after testing, we liked one \nIO controllers software so much more than the other so we chose to have \nonly one. Also stream memory performance of one motherboard showed a \nsignificant performance regression in the higher thread counts that we \ndecided to go for the other brand for all servers.\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n\n",
"msg_date": "Tue, 06 Mar 2012 09:17:06 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD and RAID"
},
{
"msg_contents": "On 03/06/2012 09:17 AM, Yeb Havinga wrote:\n> On 2012-03-05 23:37, Mark Kirkwood wrote:\n>> Which brings up the question of should it be a pair in RAID 1 or just a singe drive? Traditionally this would have been a no brainer \"Of course you want RAID 1 or RAID 10\"! However our experience with SSD failure modes points to firmware bugs as primary source of trouble - and these are likely to\n>> impact both drives (nearly) simultaneously in a RAID 1 configuration. Also the other major issue to watch - flash write limit exhaustion - is also likely to hit at the same time for a pair of drives in RAID 1.\n>>\n>> What are other folks who are using SSDs doing?\n>\n> This is exactly the reason why in a set of new hardware I'm currently evaluating two different brands of manufacturers for the spindles (behind bbwc for wal, os, archives etc) and ssds (on mb sata ports). For the ssd's we've chosen the Intel 710 and OCZ Vertex 2 PRO, however that last one was EOL\n> and OCZ offered to replace it by the Deneva 2 (http://www.oczenterprise.com/downloads/solutions/ocz-deneva2-r-mlc-2.5in_Product_Brief.pdf). Still waiting for a test Deneva though.\n>\n> One thing to note is that linux software raid with md doesn't support discard, which might shorten the drive's expected lifetime. To get some numbers I tested the raid 1 of ssd's setup for mediawear under a PostgreSQL load earlier, see http://archives.postgresql.org/pgsql-general/2011-11/msg00141.php\n>\n> <Greg Smith imitation mode on>I would recommended that for every ssd considered for production use, test the ssd with diskchecker.pl on a filesystem that's mounted the same as you would with your data (e.g. with xfs or ext4 with nobarrier), and also do a mediawear test like the one described in the\n> linked pgsql-general threar above, especially if you're chosing to run on non-enterprise marketed ssds.</>\n>\n> regards,\n> Yeb\n>\n> PS: we applied the same philosophy (different brands) also to motherboards, io controllers and memory, but after testing, we liked one IO controllers software so much more than the other so we chose to have only one. Also stream memory performance of one motherboard showed a significant performance\n> regression in the higher thread counts that we decided to go for the other brand for all servers.\n>\n\ncare to share motherboard winning model?\n\nthanks\nAndrea\n\n",
"msg_date": "Tue, 06 Mar 2012 09:34:15 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD and RAID"
},
{
"msg_contents": "On 2012-03-06 09:34, Andrea Suisani wrote:\n> On 03/06/2012 09:17 AM, Yeb Havinga wrote:\n>>\n>> PS: we applied the same philosophy (different brands) also to \n>> motherboards, io controllers and memory, but after testing, we liked \n>> one IO controllers software so much more than the other so we chose \n>> to have only one. Also stream memory performance of one motherboard \n>> showed a significant performance\n>> regression in the higher thread counts that we decided to go for the \n>> other brand for all servers.\n>>\n>\n> care to share motherboard winning model?\n>\n> thanks\n> Andrea\n>\n\nOn http://i.imgur.com/vfmvu.png is a graph of three systems, made with \nthe multi stream scaling (average of 10 tests if I remember correctly) test.\n\nThe red and blue are 2 X 12 core opteron 6168 systems with 64 GB DDR3 \n1333MHz in 8GB dimms\n\nRed is a Tyan S8230\nBlue is a Supermicro H8DGI-G\n\nWe tried a lot of things to rule out motherboards, such as swap memory \nof both systems, ensure BIOS settings are similar (e.g. ECC mode), \nupdate to latest BIOS where possible, but none of those settings \nimproved the memory performance drop. Both systems were installed with \nkickstarted Centos 6.2, so also no kernel setting differences there..\n\nregards,\nYeb\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 06 Mar 2012 10:34:48 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD and RAID"
},
{
"msg_contents": "On 03/06/2012 10:34 AM, Yeb Havinga wrote:\n> On 2012-03-06 09:34, Andrea Suisani wrote:\n>> On 03/06/2012 09:17 AM, Yeb Havinga wrote:\n>>>\n>>> PS: we applied the same philosophy (different brands) also to motherboards, io controllers and memory, but after testing, we liked one IO controllers software so much more than the other so we chose to have only one. Also stream memory performance of one motherboard showed a significant performance\n>>> regression in the higher thread counts that we decided to go for the other brand for all servers.\n>>>\n>>\n>> care to share motherboard winning model?\n>>\n>> thanks\n>> Andrea\n>>\n>\n> On http://i.imgur.com/vfmvu.png is a graph of three systems, made with the multi stream scaling (average of 10 tests if I remember correctly) test.\n>\n> The red and blue are 2 X 12 core opteron 6168 systems with 64 GB DDR3 1333MHz in 8GB dimms\n>\n> Red is a Tyan S8230\n> Blue is a Supermicro H8DGI-G\n>\n> We tried a lot of things to rule out motherboards, such as swap memory of both systems, ensure BIOS settings are similar (e.g. ECC mode), update to latest BIOS where possible, but none of those settings improved the memory performance drop. Both systems were installed with kickstarted Centos 6.2, so\n> also no kernel setting differences there..\n>\n> regards,\n> Yeb\n\n\nthanks for sharing those infos\n\nAndrea\n",
"msg_date": "Tue, 06 Mar 2012 10:56:22 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD and RAID"
},
{
"msg_contents": "On 06/03/12 21:17, Yeb Havinga wrote:\n>\n>\n> One thing to note is that linux software raid with md doesn't support \n> discard, which might shorten the drive's expected lifetime. To get \n> some numbers I tested the raid 1 of ssd's setup for mediawear under a \n> PostgreSQL load earlier, see \n> http://archives.postgresql.org/pgsql-general/2011-11/msg00141.php\n>\n>\n\nRight, which is a bit of a pain - we are considering either formatting \nthe drive with less capacity and using md RAID 1 or else doing the \nmirror in LVM to enable a working discard/trim.\n\nRegards\n\nMark\n\n\n\n",
"msg_date": "Wed, 07 Mar 2012 13:36:53 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSD and RAID"
},
{
"msg_contents": "On 2012-03-07 01:36, Mark Kirkwood wrote:\n> On 06/03/12 21:17, Yeb Havinga wrote:\n>>\n>>\n>> One thing to note is that linux software raid with md doesn't support \n>> discard, which might shorten the drive's expected lifetime. To get \n>> some numbers I tested the raid 1 of ssd's setup for mediawear under a \n>> PostgreSQL load earlier, see \n>> http://archives.postgresql.org/pgsql-general/2011-11/msg00141.php\n>>\n>>\n>\n> Right, which is a bit of a pain - we are considering either formatting \n> the drive with less capacity and using md RAID 1 or else doing the \n> mirror in LVM to enable a working discard/trim.\n\nWhen I measured the write durability without discard on the enterprise \ndisks, I got numbers that in normal production use would outlive the \nlifetime of the servers. It would be interesting to see durability \nnumbers for the desktop SSDs, even when partitioned to a part of the disk.\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Wed, 07 Mar 2012 10:45:09 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD and RAID"
}
] |
[
{
"msg_contents": "I've complained many times that\nselect (f()).*;\n\nwill execute f() once for each returned field of f() since the server\nessentially expands that into:\n\nselect f().a, f().b;\n\ntry it yourself, see:\ncreate function f(a out text, b out text) returns record as $$\nbegin\n perform pg_sleep(1);\n a := 'a'; b := 'b'; end;\n$$ language plpgsql immutable;\n\nIf f returns a,b etc. This is true if the function f() is marked\nstable or immutable. That it does this for immutable functions is\npretty awful but it's the stable case that I find much more\ninteresting -- most non-trivial functions that read from the database\nare stable. Shouldn't the server be able to detect that function only\nneeds to be run once? By the way, this isn't just happening with\nfunction calls. I've noticed the same behavior in queries like this:\n\ncreate view v as\n select\n (select foo from foo where ...) as foo_1,\n (select foo from foo where ...) as foo_2,\n from complicated_query;\n\nthat when you query from v, you can sometimes see exploding subplans\nsuch that when you pull a field from foo_1, it reruns the lookup on\nfoo.\n\nSo my question is this:\nCan stable functions and other similar query expressions be optimized\nso that they are not repeat evaluated like that without breaking\nanything?\n\nmerlin\n",
"msg_date": "Mon, 5 Mar 2012 17:15:39 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Repeat execution of stable expressions"
},
{
"msg_contents": "On Mon, Mar 5, 2012 at 3:15 PM, Merlin Moncure <[email protected]> wrote:\n> I've complained many times that\n> select (f()).*;\n>\n> will execute f() once for each returned field of f() since the server\n> essentially expands that into:\n>\n> select f().a, f().b;\n>\n\noh, this is why we expand rows inside a WITH statement.\n\nit should probably be fixed, but you should find something like\n\nWITH fn AS SELECT f(),\nSELECT (fn).a, (fn).b\n\nwill make your life better\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Mon, 5 Mar 2012 16:41:39 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Repeat execution of stable expressions"
},
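Spelled out a little more fully, the CTE workaround needs parentheses and a column alias; here is a minimal sketch using the f() from the first message (and, as the next message points out, the CTE also acts as an optimization fence on these versions):

    WITH fn AS (
        SELECT f() AS r        -- f() is evaluated only once here
    )
    SELECT (r).a, (r).b
    FROM fn;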
{
"msg_contents": "On Mon, Mar 5, 2012 at 6:41 PM, Peter van Hardenberg <[email protected]> wrote:\n> On Mon, Mar 5, 2012 at 3:15 PM, Merlin Moncure <[email protected]> wrote:\n>> I've complained many times that\n>> select (f()).*;\n>>\n>> will execute f() once for each returned field of f() since the server\n>> essentially expands that into:\n>>\n>> select f().a, f().b;\n>>\n>\n> oh, this is why we expand rows inside a WITH statement.\n>\n> it should probably be fixed, but you should find something like\n>\n> WITH fn AS SELECT f(),\n> SELECT (fn).a, (fn).b\n>\n> will make your life better\n\nsure, but WITH is an optimization fence. I use a lot of views, and if\nyou wrap your view with WITH, then your quals won't get pushed\nthrough. ditto if you use the 'OFFSET 0' hack to keep the subquery\nfrom being flattened out.\n\nmerlin\n",
"msg_date": "Tue, 6 Mar 2012 09:00:50 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Repeat execution of stable expressions"
},
{
"msg_contents": "hi,\n\n> I've complained many times that\n> select (f()).*;\n> \n> will execute f() once for each returned field of f() since the server\n> essentially expands that into:\n> \n> select f().a, f().b;\n> \n> try it yourself, see:\n> create function f(a out text, b out text) returns record as $$\n> begin\n> perform pg_sleep(1);\n> a := 'a'; b := 'b'; end;\n> $$ language plpgsql immutable;\n\n\ni ran into this regularly too. when f() is expensive then i try to rewrite the query so that the\nfunction only get called once per row.\n\n# explain analyze select (f()).*;\n QUERY PLAN \n------------------------------------------------------------------------------------------\n Result (cost=0.00..0.51 rows=1 width=0) (actual time=2001.116..2001.117 rows=1 loops=1)\n Total runtime: 2001.123 ms\n\n# explain analyze select f.* from f() as f;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------\n Function Scan on f (cost=0.25..0.26 rows=1 width=64) (actual time=1000.928..1000.928 rows=1 loops=1)\n Total runtime: 1000.937 ms\n\nregards, jan\n",
"msg_date": "Tue, 06 Mar 2012 17:21:45 +0100",
"msg_from": "Jan Otto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Repeat execution of stable expressions"
},
{
"msg_contents": "On Tue, Mar 6, 2012 at 10:21 AM, Jan Otto <[email protected]> wrote:\n> hi,\n>\n>> I've complained many times that\n>> select (f()).*;\n>>\n>> will execute f() once for each returned field of f() since the server\n>> essentially expands that into:\n>>\n>> select f().a, f().b;\n>>\n>> try it yourself, see:\n>> create function f(a out text, b out text) returns record as $$\n>> begin\n>> perform pg_sleep(1);\n>> a := 'a'; b := 'b'; end;\n>> $$ language plpgsql immutable;\n>\n>\n> i ran into this regularly too. when f() is expensive then i try to rewrite the query so that the\n> function only get called once per row.\n>\n> # explain analyze select (f()).*;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> Result (cost=0.00..0.51 rows=1 width=0) (actual time=2001.116..2001.117 rows=1 loops=1)\n> Total runtime: 2001.123 ms\n>\n> # explain analyze select f.* from f() as f;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------\n> Function Scan on f (cost=0.25..0.26 rows=1 width=64) (actual time=1000.928..1000.928 rows=1 loops=1)\n> Total runtime: 1000.937 ms\n\nyeah -- that's pretty neat, but doesn't seem fit a lot of the cases I\nbump into. In particular, when stuffing composite types in the field\nlist. You need the type to come back as a scalar so you can expand it\na wrapper (especially when layering views).\n\nmerlin\n",
"msg_date": "Tue, 6 Mar 2012 13:29:16 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Repeat execution of stable expressions"
}
] |
[
{
"msg_contents": "We are looking for advice on the RAID controller cards with battery back cache for 10K SAS disks with RAID 10. Any suggestion for the best card and settings to use for heavy write transactions.\n\nSee Sing\nWhitepages.com\n\n\n We are looking for advice on the RAID controller cards with battery back cache for 10K SAS disks with RAID 10. Any suggestion for the best card and settings to use for heavy write transactions. See SingWhitepages.com",
"msg_date": "Wed, 7 Mar 2012 09:49:23 -0800",
"msg_from": "See Sing Lau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice on Controller card for SAS disks"
},
{
"msg_contents": "On 7 Březen 2012, 18:49, See Sing Lau wrote:\n>\n> We are looking for advice on the RAID controller cards with battery back\n> cache for 10K SAS disks with RAID 10. Any suggestion for the best card\n> and settings to use for heavy write transactions.\n\nIt's hard to give you advices when we know nothing about your environment\n(e.g. what OS) and other requirements (how many drives, what does 'heavy\nwrite transactions' means etc.).\n\nI'd personally vote for LSI/3Ware, Areca or HighPoint controllers.\n\nTomas\n\n",
"msg_date": "Mon, 12 Mar 2012 17:10:28 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on Controller card for SAS disks"
},
{
"msg_contents": "On Mon, Mar 12, 2012 at 10:10 AM, Tomas Vondra <[email protected]> wrote:\n> On 7 Březen 2012, 18:49, See Sing Lau wrote:\n>>\n>> We are looking for advice on the RAID controller cards with battery back\n>> cache for 10K SAS disks with RAID 10. Any suggestion for the best card\n>> and settings to use for heavy write transactions.\n>\n> It's hard to give you advices when we know nothing about your environment\n> (e.g. what OS) and other requirements (how many drives, what does 'heavy\n> write transactions' means etc.).\n>\n> I'd personally vote for LSI/3Ware, Areca or HighPoint controllers.\n\nNever used highpoint contollers for RAID, just for simple HBAs in home\nmachines. The LSI/3Ware and Arecas are pretty much the best battery\nbacked caching controllers. Apparently Adaptec and HP have done some\ncatching up in the last few years.\n\nI prefer the 16xx and 18xx series Arecas because they have their own\nnetwork interfaces and can send alerts separate from outside\nmonitoring, and you can configure them remotely via web interface. On\nlinux boxes this is a big step up from the horrific megacli command\nline util on LSIs. The GUI interface on the LSIs are pretty crappy\ntoo. 3Wares use the tw_cli utility which is easy to use and self\ndocumenting.\n",
"msg_date": "Mon, 12 Mar 2012 10:43:32 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on Controller card for SAS disks"
}
] |
[
{
"msg_contents": "I've taken the liberty of reposting this message as my addendum to a\nlong thread that I started on the subject of adding a new db server to\nour existing 4-year old workhorse got lost in discussion.\n\nOur workload is several small databases totalling less than 40GB of disk\nspace. The proposed system has 48GB RAM, 2 * quad core E5620 @ 2.40GHz\nand 4 WD Raptors behind an LSI SAS card. Our supplier has just run a set\nof tests on the machine we intend to buy. The test rig had the following\nsetup:\n\nLSI MegaRAID SAS 9260-8i\nFirmware: 12.12.0-0090\nKernel: 2.6.39.4\nHard disks: 4x WD6000BLHX\nTest done on 256GB volume\nBS = blocksize in bytes\n\nThe test tool is fio. I'd be grateful to know if the results below are\nconsidered acceptable. An ancillary question is whether a 4096 block\nsize is a good idea. I suppose we will be using XFS which I understand\nhas a default block size of 4096 bytes. \n\nRAID 10\n--------------------------------------\nRead sequential\n\n BS MB/s IOPs\n 512 0129.26 264730.80\n 1024 0229.75 235273.40\n 4096 0363.14 092965.50\n 16384 0475.02 030401.50\n 65536 0472.79 007564.65\n131072 0428.15 003425.20\n--------------------------------------\nWrite sequential\n\n BS MB/s IOPs\n 512 0036.08 073908.00\n 1024 0065.61 067192.60\n 4096 0170.15 043560.40\n 16384 0219.80 014067.57\n 65536 0240.05 003840.91\n131072 0243.96 001951.74\n--------------------------------------\nRandom read\n\n BS MB/s IOPs\n 512 0001.50 003077.20\n 1024 0002.91 002981.40\n 4096 0011.59 002968.30\n 16384 0044.50 002848.28\n 65536 0156.96 002511.41\n131072 0170.65 001365.25\n--------------------------------------\nRandom write\n\n BS MB/s IOPs\n 512 0000.53 001103.60\n 1024 0001.15 001179.20\n 4096 0004.43 001135.30\n 16384 0017.61 001127.56\n 65536 0061.39 000982.39\n131072 0079.27 000634.16\n--------------------------------------\n\n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Fri, 9 Mar 2012 11:15:58 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Comments requested on IO performance : new db server"
},
{
"msg_contents": "On Fri, Mar 9, 2012 at 5:15 AM, Rory Campbell-Lange\n<[email protected]> wrote:\n> I've taken the liberty of reposting this message as my addendum to a\n> long thread that I started on the subject of adding a new db server to\n> our existing 4-year old workhorse got lost in discussion.\n>\n> Our workload is several small databases totalling less than 40GB of disk\n> space. The proposed system has 48GB RAM, 2 * quad core E5620 @ 2.40GHz\n> and 4 WD Raptors behind an LSI SAS card. Our supplier has just run a set\n> of tests on the machine we intend to buy. The test rig had the following\n> setup:\n>\n> LSI MegaRAID SAS 9260-8i\n> Firmware: 12.12.0-0090\n> Kernel: 2.6.39.4\n> Hard disks: 4x WD6000BLHX\n> Test done on 256GB volume\n> BS = blocksize in bytes\n>\n> The test tool is fio. I'd be grateful to know if the results below are\n> considered acceptable. An ancillary question is whether a 4096 block\n> size is a good idea. I suppose we will be using XFS which I understand\n> has a default block size of 4096 bytes.\n>\n> RAID 10\n> --------------------------------------\n> Read sequential\n>\n> BS MB/s IOPs\n> 512 0129.26 264730.80\n> 1024 0229.75 235273.40\n> 4096 0363.14 092965.50\n> 16384 0475.02 030401.50\n> 65536 0472.79 007564.65\n> 131072 0428.15 003425.20\n> --------------------------------------\n> Write sequential\n>\n> BS MB/s IOPs\n> 512 0036.08 073908.00\n> 1024 0065.61 067192.60\n> 4096 0170.15 043560.40\n> 16384 0219.80 014067.57\n> 65536 0240.05 003840.91\n> 131072 0243.96 001951.74\n> --------------------------------------\n> Random read\n>\n> BS MB/s IOPs\n> 512 0001.50 003077.20\n> 1024 0002.91 002981.40\n> 4096 0011.59 002968.30\n> 16384 0044.50 002848.28\n> 65536 0156.96 002511.41\n> 131072 0170.65 001365.25\n> --------------------------------------\n> Random write\n>\n> BS MB/s IOPs\n> 512 0000.53 001103.60\n> 1024 0001.15 001179.20\n> 4096 0004.43 001135.30\n> 16384 0017.61 001127.56\n> 65536 0061.39 000982.39\n> 131072 0079.27 000634.16\n> --------------------------------------\n\nsince your RAM is larger than the database size, read performance is\nessentially a non-issue. your major gating factors are going to be\ncpu bound queries and random writes -- 1000 IOPS essentially puts an\nupper bound on your write TPS, especially if your writes are frequent\nand randomly distributed, the case that is more or less simulated by\npgbench with large scaling factors.\n\nNow, 1000 write tps is quite alot (3.6 mil transactions/hour) and\nyour workload will drive the hardware consideration.\n\nmerlin\n",
"msg_date": "Fri, 9 Mar 2012 10:11:52 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comments requested on IO performance : new db server"
},
{
"msg_contents": "On 09/03/12, Merlin Moncure ([email protected]) wrote:\n> On Fri, Mar 9, 2012 at 5:15 AM, Rory Campbell-Lange\n> <[email protected]> wrote:\n> > I've taken the liberty of reposting this message as my addendum to a\n> > long thread that I started on the subject of adding a new db server to\n> > our existing 4-year old workhorse got lost in discussion.\n> >\n> > Our workload is several small databases totalling less than 40GB of disk\n> > space. The proposed system has 48GB RAM, 2 * quad core E5620 @ 2.40GHz\n> > and 4 WD Raptors behind an LSI SAS card. Our supplier has just run a set\n> > of tests on the machine we intend to buy. The test rig had the following\n> > setup:\n> >\n> > LSI MegaRAID SAS 9260-8i\n> > Firmware: 12.12.0-0090\n> > Kernel: 2.6.39.4\n> > Hard disks: 4x WD6000BLHX\n> > Test done on 256GB volume\n> > BS = blocksize in bytes\n> >\n> > The test tool is fio. I'd be grateful to know if the results below are\n> > considered acceptable. An ancillary question is whether a 4096 block\n> > size is a good idea. I suppose we will be using XFS which I understand\n> > has a default block size of 4096 bytes.\n> >\n> > RAID 10\n> > --------------------------------------\n...\n> > --------------------------------------\n> > Random write\n> >\n> > � �BS � � � � � MB/s � � � � � � IOPs\n> > � 512 � � � �0000.53 � � � �001103.60\n> > �1024 � � � �0001.15 � � � �001179.20\n> > �4096 � � � �0004.43 � � � �001135.30\n> > �16384 � � � �0017.61 � � � �001127.56\n> > �65536 � � � �0061.39 � � � �000982.39\n> > 131072 � � � �0079.27 � � � �000634.16\n> > --------------------------------------\n> \n> since your RAM is larger than the database size, read performance is\n> essentially a non-issue. your major gating factors are going to be\n> cpu bound queries and random writes -- 1000 IOPS essentially puts an\n> upper bound on your write TPS, especially if your writes are frequent\n> and randomly distributed, the case that is more or less simulated by\n> pgbench with large scaling factors.\n> \n> Now, 1000 write tps is quite alot (3.6 mil transactions/hour) and\n> your workload will drive the hardware consideration.\n\nThanks for your comments, Merlin. With regard to the \"gating factors\" I\nbelieve the following is pertinent:\n\nCPU\n\nMy current server has 2 * quad Xeon E5420 @ 2.50GHz. The server\noccasionally reaches 20% sutained utilisation according to sar.\nThis cpu has a \"passmark\" of 7,730. \nhttp://www.cpubenchmark.net/cpu_lookup.php?cpu=[Dual+CPU]+Intel+Xeon+E5420+%40+2.50GHz\n\nMy proposed CPU is an E5620 @ 2.40GHz with CPU \"passmark\" of 9,620\nhttp://www.cpubenchmark.net/cpu_lookup.php?cpu=[Dual+CPU]+Intel+Xeon+E5620+%40+2.40GHz\n\nSince the workload will be very similar I'm hoping for about 20% better\nCPU performance from the new server, which should drop max CPU load by\n5% or so.\n\nRandom Writes\n\nI'll have to test this. My current server (R10 4*15K SCSI) produced the\nfollowing pgbench stats while running its normal workload:\n \n -c -t TPS\n 5 20000 446\n 10 10000 542\n 20 5000 601\n 30 3333 647\n\nI'd be grateful to know what parameters I should use for a \"large\nscaling factor\" pgbench test.\n\nMany thanks\nRory\n\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Sat, 10 Mar 2012 10:19:29 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comments requested on IO performance : new db server"
},
{
"msg_contents": "Is a block size of 4096 a good idea both for the filesystem and\npostgresql? The analysis here:\nhttp://www.fuzzy.cz/en/articles/benchmark-results-hdd-read-write-pgbench/ \nappears to suggest that at least for database block sizes of 4096\nread/write performance is much higher than for smaller block sizes.\n\nRory\n\nOn 09/03/12, Rory Campbell-Lange ([email protected]) wrote:\n> ...An ancillary question is whether a 4096 block size is a good idea.\n> I suppose we will be using XFS which I understand has a default block\n> size of 4096 bytes. \n> \n> RAID 10\n> --------------------------------------\n> Read sequential\n> \n> BS MB/s IOPs\n> 512 0129.26 264730.80\n> 1024 0229.75 235273.40\n> 4096 0363.14 092965.50\n> 16384 0475.02 030401.50\n> 65536 0472.79 007564.65\n> 131072 0428.15 003425.20\n> --------------------------------------\n> Write sequential\n> \n> BS MB/s IOPs\n> 512 0036.08 073908.00\n> 1024 0065.61 067192.60\n> 4096 0170.15 043560.40\n> 16384 0219.80 014067.57\n> 65536 0240.05 003840.91\n> 131072 0243.96 001951.74\n> --------------------------------------\n> Random read\n> \n> BS MB/s IOPs\n> 512 0001.50 003077.20\n> 1024 0002.91 002981.40\n> 4096 0011.59 002968.30\n> 16384 0044.50 002848.28\n> 65536 0156.96 002511.41\n> 131072 0170.65 001365.25\n> --------------------------------------\n> Random write\n> \n> BS MB/s IOPs\n> 512 0000.53 001103.60\n> 1024 0001.15 001179.20\n> 4096 0004.43 001135.30\n> 16384 0017.61 001127.56\n> 65536 0061.39 000982.39\n> 131072 0079.27 000634.16\n> --------------------------------------\n-- \nRory Campbell-Lange\[email protected]\n\nCampbell-Lange Workshop\nwww.campbell-lange.net\n0207 6311 555\n3 Tottenham Street London W1T 2AF\nRegistered in England No. 04551928\n",
"msg_date": "Sat, 10 Mar 2012 10:51:12 +0000",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comments requested on IO performance : new db server"
},
{
"msg_contents": "On 10.3.2012 11:51, Rory Campbell-Lange wrote:\n> Is a block size of 4096 a good idea both for the filesystem and\n> postgresql? The analysis here:\n> http://www.fuzzy.cz/en/articles/benchmark-results-hdd-read-write-pgbench/ \n> appears to suggest that at least for database block sizes of 4096\n> read/write performance is much higher than for smaller block sizes.\n\nHi,\n\ninterpreting those results is a bit tricky for several reasons. First,\nthose are 'average results' for all filesystems (and the behavior of\nfilesystems may vary significantly). I'd recommend checking results for\nthe filesystem you're going to use (http://www.fuzzy.cz/bench)\n\nSecond, the article discusses just TPC-B (OLTP-like) workload results.\nIt's quite probable your workload is going to mix that with other\nworkload types (e.g. DSS/DWH). And that's exactly where larger block\nsizes are better.\n\nTo me, 8kB seems like a good compromise. Don't use other block sizes\nunless you actually test the benefits for your workload.\n\nTomas\n\n> \n> Rory\n> \n> On 09/03/12, Rory Campbell-Lange ([email protected]) wrote:\n>> ...An ancillary question is whether a 4096 block size is a good idea.\n>> I suppose we will be using XFS which I understand has a default block\n>> size of 4096 bytes. \n>>\n>> RAID 10\n>> --------------------------------------\n>> Read sequential\n>>\n>> BS MB/s IOPs\n>> 512 0129.26 264730.80\n>> 1024 0229.75 235273.40\n>> 4096 0363.14 092965.50\n>> 16384 0475.02 030401.50\n>> 65536 0472.79 007564.65\n>> 131072 0428.15 003425.20\n>> --------------------------------------\n>> Write sequential\n>>\n>> BS MB/s IOPs\n>> 512 0036.08 073908.00\n>> 1024 0065.61 067192.60\n>> 4096 0170.15 043560.40\n>> 16384 0219.80 014067.57\n>> 65536 0240.05 003840.91\n>> 131072 0243.96 001951.74\n>> --------------------------------------\n>> Random read\n>>\n>> BS MB/s IOPs\n>> 512 0001.50 003077.20\n>> 1024 0002.91 002981.40\n>> 4096 0011.59 002968.30\n>> 16384 0044.50 002848.28\n>> 65536 0156.96 002511.41\n>> 131072 0170.65 001365.25\n>> --------------------------------------\n>> Random write\n>>\n>> BS MB/s IOPs\n>> 512 0000.53 001103.60\n>> 1024 0001.15 001179.20\n>> 4096 0004.43 001135.30\n>> 16384 0017.61 001127.56\n>> 65536 0061.39 000982.39\n>> 131072 0079.27 000634.16\n>> --------------------------------------\n\n",
"msg_date": "Sat, 10 Mar 2012 16:12:39 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comments requested on IO performance : new db server"
}
] |
[
{
"msg_contents": "I just upgraded to 9.1 and downloaded EnterpriseDB Tuning wizard, but the\nlist of servers in ComboBox support only 8.2, 8.3 i 8.4 version and x86\nbuild. How can I tune 9.1, 64 bit version?\n\nIs there any workaround, other version for download... ?\n\n \n\nAny help?\n\n \n\nMichael.\n\n \n\n\nI just upgraded to 9.1 and downloaded EnterpriseDB Tuning wizard, but the list of servers in ComboBox support only 8.2, 8.3 i 8.4 version and x86 build. How can I tune 9.1, 64 bit version?Is there any workaround, other version for download... ? Any help? Michael.",
"msg_date": "Fri, 9 Mar 2012 16:07:35 +0100",
"msg_from": "\"Michael Kopljan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning wizard"
},
{
"msg_contents": "On 9 Březen 2012, 16:07, Michael Kopljan wrote:\n> I just upgraded to 9.1 and downloaded EnterpriseDB Tuning wizard, but the\n> list of servers in ComboBox support only 8.2, 8.3 i 8.4 version and x86\n> build. How can I tune 9.1, 64 bit version?\n>\n> Is there any workaround, other version for download... ?\n\nGiven that Tuning wizard is an EntepriseDB tool, a more appropriate place\nfor this question is probably\nhttp://forums.enterprisedb.com/forums/list.page\n\n\nTomas\n\n",
"msg_date": "Mon, 12 Mar 2012 17:03:59 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning wizard"
},
{
"msg_contents": "On Mon, Mar 12, 2012 at 10:03 AM, Tomas Vondra <[email protected]> wrote:\n> On 9 Březen 2012, 16:07, Michael Kopljan wrote:\n>> I just upgraded to 9.1 and downloaded EnterpriseDB Tuning wizard, but the\n>> list of servers in ComboBox support only 8.2, 8.3 i 8.4 version and x86\n>> build. How can I tune 9.1, 64 bit version?\n>>\n>> Is there any workaround, other version for download... ?\n>\n> Given that Tuning wizard is an EntepriseDB tool, a more appropriate place\n> for this question is probably\n> http://forums.enterprisedb.com/forums/list.page\n\nThis thread from 30 November 2011 seems to acknowledge there's a problem:\n\nhttp://forums.enterprisedb.com/posts/list/2973.page\n",
"msg_date": "Mon, 12 Mar 2012 10:37:54 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning wizard"
},
{
"msg_contents": "On Mon, Mar 12, 2012 at 10:07 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Mar 12, 2012 at 10:03 AM, Tomas Vondra <[email protected]> wrote:\n> > On 9 Březen 2012, 16:07, Michael Kopljan wrote:\n> >> I just upgraded to 9.1 and downloaded EnterpriseDB Tuning wizard, but\n> the\n> >> list of servers in ComboBox support only 8.2, 8.3 i 8.4 version and x86\n> >> build. How can I tune 9.1, 64 bit version?\n> >>\n> >> Is there any workaround, other version for download... ?\n> >\n> > Given that Tuning wizard is an EntepriseDB tool, a more appropriate place\n> > for this question is probably\n> > http://forums.enterprisedb.com/forums/list.page\n>\n> This thread from 30 November 2011 seems to acknowledge there's a problem:\n>\n> http://forums.enterprisedb.com/posts/list/2973.page\n\n\nTrue, its in feature enhancement, soon we expect a new release with PG 9.1\ncompatible.\n\n---\nRegards,\nRaghavendra\nEnterpriseDB Corporation\nBlog: http://raghavt.blogspot.com/\n\nOn Mon, Mar 12, 2012 at 10:07 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Mar 12, 2012 at 10:03 AM, Tomas Vondra <[email protected]> wrote:\n> On 9 Březen 2012, 16:07, Michael Kopljan wrote:\n>> I just upgraded to 9.1 and downloaded EnterpriseDB Tuning wizard, but the\n>> list of servers in ComboBox support only 8.2, 8.3 i 8.4 version and x86\n>> build. How can I tune 9.1, 64 bit version?\n>>\n>> Is there any workaround, other version for download... ?\n>\n> Given that Tuning wizard is an EntepriseDB tool, a more appropriate place\n> for this question is probably\n> http://forums.enterprisedb.com/forums/list.page\n\nThis thread from 30 November 2011 seems to acknowledge there's a problem:\n\nhttp://forums.enterprisedb.com/posts/list/2973.page\n\nTrue, its in feature enhancement, soon we expect a new release with PG 9.1 compatible.---Regards,RaghavendraEnterpriseDB CorporationBlog: http://raghavt.blogspot.com/",
"msg_date": "Tue, 13 Mar 2012 06:25:11 +0530",
"msg_from": "Raghavendra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning wizard"
}
] |
[
{
"msg_contents": "It is a discussion about the transaction ID wraparound in PostgreSQL.\nHowever, what is the fundamental definition if transaction ID.\nselect * from table where ID=1:10000 \nit is consider as one transaction or 10000 transactions.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/count-on-transaction-ID-tp5550894p5550894.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nIt is a discussion about the transaction ID wraparound in PostgreSQL.\nHowever, what is the fundamental definition if transaction ID.\nselect * from table where ID=1:10000 \nit is consider as one transaction or 10000 transactions.\n\t\n\nView this message in context: count on transaction ID\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Fri, 9 Mar 2012 07:30:29 -0800 (PST)",
"msg_from": "ddgs <[email protected]>",
"msg_from_op": true,
"msg_subject": "count on transaction ID"
},
{
"msg_contents": "ddgs <[email protected]> wrote:\n \n> It is a discussion about the transaction ID wraparound in\n> PostgreSQL.\n \nHopefully you've seen this:\n \nhttp://www.postgresql.org/docs/current/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n \n> However, what is the fundamental definition if transaction ID.\n \nAs the cited page states, it is 32 bits. It is considered a\n\"circular\" number space.\n \n> select * from table where ID=1:10000\n> it is consider as one transaction or 10000 transactions.\n \nIn SQL it would be a syntax error; it doesn't really mean anything. \nAnd it seems that you may be confused about the difference between\ntransaction IDs, object IDs, and user-defined ID columns on tables. \nWhat is the \"ID=1:10000\" syntax supposed to mean?\n \n-Kevin\n",
"msg_date": "Mon, 12 Mar 2012 09:40:45 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count on transaction ID"
},
{
"msg_contents": "It is just a simple idea syntax, not the exact one. \nAnyway, I am wonder how to get the 2^31 transaction IDs to cause the failure\nBut I get the wraparound error warning when I delete a large no. of rows.\nSo the wraparound failure is due to what reason, that I still have no idea\n(at least not the transaction limit, I guess) \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/count-on-transaction-ID-tp5550894p5558198.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 12 Mar 2012 08:46:53 -0700 (PDT)",
"msg_from": "ddgs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count on transaction ID"
},
{
"msg_contents": "ddgs <[email protected]> wrote:\n \n> It is just a simple idea syntax, not the exact one.\n \nThen it doesn't seem possible to give an exact answer as to what it\nwill do. The effect on transaction IDs will depend on whether\nyou're talking about one DELETE statement with a range of values in\nthe WHERE clause or a series of DELETE statements, one for each row,\nwhich are not bounded by a transaction through some other means.\n \n> Anyway, I am wonder how to get the 2^31 transaction IDs to cause\n> the failure\n \nOne of the easiest ways to get to such a failure is to disable\nautovacuum or make it less aggressive.\n \n> But I get the wraparound error warning when I delete a large no.\n> of rows. So the wraparound failure is due to what reason, that I\n> still have no idea (at least not the transaction limit, I guess) \n \n>From what little information you've provided, it's hard to tell. It\nmight be that the DELETEs are generating a very large number of\ndatabase transactions, each of which is consuming a transaction ID. \nIt might be that your DELETE is running for so long that it's\ninterfering with autovacuum's ability to clean up after a high\nvolume of other transactions. It could be that you are simply not\nvacuuming aggressively enough, and the DELETE happened to come along\nat the point where the issue became critical, and is thus an\n\"innocent bystander\" and not the culprit.\n \n-Kevin\n",
"msg_date": "Mon, 12 Mar 2012 11:05:51 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count on transaction ID"
},
{
"msg_contents": "Is it a good starting point to the basic reason of doing vacuum?\nfrom the manual,\n\"PostgreSQL's VACUUM command must be run on a regular basis for several\nreasons:\nTo recover disk space occupied by updated or deleted rows.\nTo update data statistics used by the PostgreSQL query planner.\nTo protect against loss of very old data due to transaction ID wraparound.\n\"\n\nIt seems that only transaction ID wraparound can cause system failure in\nloading-in new data.\nFor the dead rows problem, it just a matter of space; the data statistics is\njust relate to the query performance.\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/count-on-transaction-ID-tp5550894p5558345.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 12 Mar 2012 09:34:37 -0700 (PDT)",
"msg_from": "ddgs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count on transaction ID"
},
{
"msg_contents": "ddgs <[email protected]> wrote:\n \n> Is it a good starting point to the basic reason of doing vacuum?\n> from the manual,\n> \"PostgreSQL's VACUUM command must be run on a regular basis for\n> several reasons:\n> To recover disk space occupied by updated or deleted rows.\n> To update data statistics used by the PostgreSQL query planner.\n> To protect against loss of very old data due to transaction ID\n> wraparound.\n> \"\n \nThe entire page from which that quote is pulled makes a good\nstarting point. Turning off autovacuum or making it less aggressive\nwithout really understanding everything on that page will generally\nresult in problems which take far longer to solve than any marginal\nsavings obtained by making the change.\n \n-Kevin\n",
"msg_date": "Mon, 12 Mar 2012 11:56:54 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count on transaction ID"
}
] |
[
{
"msg_contents": "Hello,\nDue to a large database size, my weekend database backup (pg_dumpall) takes about 12 hours to complete. Additionally, I add the --no-unlogged-table-data option to skip any volatile tables. One UNLOGGED table in the database is completely regenerated every day with a TRUNCATE TABLE/INSERT command. However, despite the --no-unlogged-table-data option, the table still gets locked by the pg_dump(all), preventing the operation from completing for several hours during backups.\n\nTo the extent possible, I would like to request a way to prevent certain (UNLOGGED) tables from being locked against truncate/schema changes for the extent of an entire database backup.\n\nFor example:\n. pg_dump does not lock unlogged tables if --no-unlogged-table-data is set \n. pg_dump supports sequential table locks / unlocks during backups (only the table that is currently being copied is locked, rather than all tables for the entire backup).\n. Support a way to automatically replace the TRUNCATE command with DELETE if TRUNCATE cannot immediately obtain a lock.\n\nThanks, Robert\n\nRobert McGehee, CFA\nGeode Capital Management, LLC\nOne Post Office Square, 28th Floor | Boston, MA | 02109\nDirect: (617)392-8396\n\nThis e-mail, and any attachments hereto, are intended for use by the addressee(s) only and may contain information that is (i) confidential information of Geode Capital Management, LLC and/or its affiliates, and/or (ii) proprietary information of Geode Capital Management, LLC and/or its affiliates. If you are not the intended recipient of this e-mail, or if you have otherwise received this e-mail in error, please immediately notify me by telephone (you may call collect), or by e-mail, and please permanently delete the original, any print outs and any copies of the foregoing. Any dissemination, distribution or copying of this e-mail is strictly prohibited. \n\n",
"msg_date": "Sun, 11 Mar 2012 13:38:04 -0400",
"msg_from": "\"McGehee, Robert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Feature Request: No pg_dump lock on unlogged tables"
}
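Until pg_dump behaves the way this request describes, the third workaround listed above (fall back to DELETE when TRUNCATE cannot obtain its lock immediately) can be approximated from the client side. A rough PL/pgSQL sketch, assuming a hypothetical staging table called daily_stage (not a name taken from this message) and a server new enough for unlogged tables (9.1+), where DO blocks are available:

    DO $$
    BEGIN
        -- TRUNCATE needs ACCESS EXCLUSIVE; try to take it without waiting
        LOCK TABLE daily_stage IN ACCESS EXCLUSIVE MODE NOWAIT;
        TRUNCATE daily_stage;
    EXCEPTION WHEN lock_not_available THEN
        -- pg_dump (or something else) holds a conflicting lock; DELETE only
        -- needs ROW EXCLUSIVE, so it can run while the backup continues,
        -- at the price of dead rows that a later vacuum has to clean up
        DELETE FROM daily_stage;
    END;
    $$;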
] |
[
{
"msg_contents": "I'm using gin index for my full text search engine in production. These \ndays the gin index size grows to 20-30G and the system started to suffer \nwith periodical insert hanging. This is same as described in the 2 posts:\nhttp://postgresql.1045698.n5.nabble.com/Random-penalties-on-GIN-index-updates-td2073848.html\nhttp://postgresql.1045698.n5.nabble.com/Periodically-slow-inserts-td3230434.html\n\nThe gin index is on a dedicated raid 10 SAS disk and the performance \nshould be enough for normal db operation. But I always see almost 100% \ndisk utiliztion on the disk when the inserts hang. The utiliztion for \nother data(such as the full text table data) on another disk(same setup \nas the gin index disk: SAS raid 10) is quite low comparing with the gin \nindex disk. From my observation, looks too much data is written to the \ndisk when the pending list of gin index is flushed to the disk. Below is \nthe outupt of 'iostat -xm 3' on the disk when inserts hang:\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsde 0.00 0.00 0.67 2614.00 0.08 22.94 \n18.03 32.94 12.61 0.38 100.00\nsde 0.00 0.00 1.67 2377.33 0.17 20.43 \n17.73 32.00 13.44 0.42 100.00\nsde 0.00 0.00 15.67 2320.33 0.23 20.13 \n17.85 31.99 13.73 0.43 100.00\nsde 0.00 0.00 7.33 1525.00 0.12 14.02 \n18.90 32.00 20.83 0.65 100.00\nsde 0.00 0.00 14.33 1664.67 0.12 15.54 \n19.10 32.00 19.06 0.60 100.00\nsde 0.00 0.00 5.33 1654.33 0.04 12.07 \n14.94 32.00 19.22 0.60 100.00\n\nI tried to increase work_mem but the inserts hang more time each time \nwith less frequency. So it makes almost no difference for the total \nhanging time. Frequent vacuum is not a choice since the hang happens \nvery 3-5 mins. is there any improvement I can make with pg for such data \nvolumn(still increasing) or it's time to turn to other full text search \nsolution such as lucene etc?\n",
"msg_date": "Tue, 13 Mar 2012 13:43:03 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Gin index insert performance issue"
},
{
"msg_contents": "On 13/03/12 06:43, Rural Hunter wrote:\n> I tried to increase work_mem but the inserts hang more time each time\n> with less frequency. So it makes almost no difference for the total\n> hanging time. Frequent vacuum is not a choice since the hang happens\n> very 3-5 mins. is there any improvement I can make with pg for such\n> data volumn(still increasing) or it's time to turn to other full text\n> search solution such as lucene etc?\n\nWe're using gin for fts-search, current index-size is up to 149GB and yes\nthe update process is quite tough on the disk-io-subsystem.\n\nWhat you're experiencing is filling of the fastupdate queue, thats being\nflushed. Setting wok_mem higher causes the system to stall for longer\nperiod less frequent and has a side cost on queries that need to go through\nthe pending list (that is bigger) in addition to the index-search. To me\nit seems like all other writing/updating processes are being stalled when\nthe pending list is flushed, but I am not sure about the specifice here.\n\nOur solution is to turn \"fastupdate\" off for our gin-indices.\nhttp://www.postgresql.org/docs/9.0/static/sql-createindex.html\nCan also be set with ALTER TABLE ALTER INDEX\n\nI would have preferred a \"backend local\" batch-update process so it\ncould batch up everything from its own transaction instead of interferring\nwith other transactions.\n\nI would say, that we came from Xapian and the PG-index is a way better\nfit for our application. The benefits of having the fts index next to \nall the\nother data saves a significant amount of development time in the application\nboth in terms of development and maintaince. (simpler, easier and more \nmanageble).\n\n-- \nJesper\n\n\n\n\n\n\n\n On 13/03/12 06:43, Rural Hunter wrote:\n> I tried to increase work_mem\n but the inserts hang more time each time\n > with less frequency. So it makes almost no difference for the\n total\n > hanging time. Frequent vacuum is not a choice since the hang\n happens\n > very 3-5 mins. is there any improvement I can make with pg\n for such\n > data volumn(still increasing) or it's time to turn to other\n full text\n > search solution such as lucene etc?\n\n We're using gin for fts-search, current index-size is up to 149GB\n and yes\n the update process is quite tough on the disk-io-subsystem. \n\n What you're experiencing is filling of the fastupdate queue, thats\n being \n flushed. Setting wok_mem higher causes the system to stall for\n longer\n period less frequent and has a side cost on queries that need to go\n through\n the pending list (that is bigger) in addition to the index-search.\n To me\n it seems like all other writing/updating processes are being stalled\n when \n the pending list is flushed, but I am not sure about the specifice\n here. \n\n Our solution is to turn \"fastupdate\" off for our gin-indices. \nhttp://www.postgresql.org/docs/9.0/static/sql-createindex.html\n Can also be set with ALTER TABLE ALTER INDEX\n\n I would have preferred a \"backend local\" batch-update process so it\n \n could batch up everything from its own transaction instead of\n interferring\n with other transactions. \n\n I would say, that we came from Xapian and the PG-index is a way\n better\n fit for our application. The benefits of having the fts index next\n to all the\n other data saves a significant amount of development time in the\n application\n both in terms of development and maintaince. (simpler, easier and\n more manageble). \n\n -- \n Jesper",
"msg_date": "Tue, 13 Mar 2012 07:29:57 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Gin index insert performance issue"
},
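For reference, the fastupdate switch Jesper describes is an index storage parameter, so it can be set either when the index is built or afterwards. A minimal sketch with hypothetical table, column and index names (the real ones are not given in the thread):

    -- at creation time
    CREATE INDEX articles_fts_idx ON articles
        USING gin (to_tsvector('english', body))
        WITH (fastupdate = off);

    -- or on an existing index
    ALTER INDEX articles_fts_idx SET (fastupdate = off);

Turning fastupdate off moves the pending-list flush cost into each individual insert/update, which is the trade-off the original poster ends up accepting later in the thread.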
{
"msg_contents": "Thanks for the reply. Your index is much larger than mine..so I see some \nlight. :)\n\n锟斤拷 2012/3/13 14:29, Jesper Krogh 写锟斤拷:\n\n> Our solution is to turn \"fastupdate\" off for our gin-indices.\n> http://www.postgresql.org/docs/9.0/static/sql-createindex.html\n> Can also be set with ALTER TABLE ALTER INDEX\nI will check and try that.\n>\n> I would have preferred a \"backend local\" batch-update process so it\n> could batch up everything from its own transaction instead of interferring\n> with other transactions.\nhave you tested if there is any performance boot for backend \nbatch-update comparing the real time updates?\n>\n> I would say, that we came from Xapian and the PG-index is a way better\n> fit for our application. The benefits of having the fts index next to \n> all the\n> other data saves a significant amount of development time in the \n> application\n> both in terms of development and maintaince. (simpler, easier and more \n> manageble).\nYes, that's why I'm still looking for the improvment inside pg. This is \nreally a big dev/maint saver.\n>\n> -- \n> Jesper\n>\n\n",
"msg_date": "Tue, 13 Mar 2012 15:52:47 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gin index insert performance issue"
},
{
"msg_contents": "I disabled fastupdate on the gin index. looks it solved my problem, at \nleast for now. Thanks a lot for your help Jesper!\n------------------------------\nThanks for the reply. Your index is much larger than mine..so I see some \nlight. :)\n\n锟斤拷 2012/3/13 14:29, Jesper Krogh 写锟斤拷:\n\n> Our solution is to turn \"fastupdate\" off for our gin-indices.\n> http://www.postgresql.org/docs/9.0/static/sql-createindex.html\n> Can also be set with ALTER TABLE ALTER INDEX\nI will check and try that.\n>\n> I would have preferred a \"backend local\" batch-update process so it\n> could batch up everything from its own transaction instead of interferring\n> with other transactions.\nhave you tested if there is any performance boot for backend \nbatch-update comparing the real time updates?\n>\n> I would say, that we came from Xapian and the PG-index is a way better\n> fit for our application. The benefits of having the fts index next to\n> all the\n> other data saves a significant amount of development time in the\n> application\n> both in terms of development and maintaince. (simpler, easier and more\n> manageble).\nYes, that's why I'm still looking for the improvment inside pg. This is \nreally a big dev/maint saver.\n>\n> --\n> Jesper\n>\n\n",
"msg_date": "Thu, 15 Mar 2012 11:49:13 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Gin index insert performance issue"
}
] |
[
{
"msg_contents": "(from #postgresql IRC on freenode)\n\ndarkblue_b I did an interesting experiment the other day davidfetter_vmw .. davidfetter_vmw do tell \ndarkblue_b well you know I do these huge monolithic postGIS queries on an otherwise idle linux machine.. and there was a persistant thought in my head that Postgresql+PostGIS did not make good use of memory allocation >2G\n\ndarkblue_b so I had this long, python driven analysis.. 15 steps.. some, unusual for me, are multiple queries running at once on the same data ... and others are just one labor intensive thing then the next \n (one result table is 1.8M rows for 745M on disk, others are smaller)\n\ndarkblue_b I finally got the kinks worked out.. so I ran it twice.. 4.5 hours on our hardware.. once with shared_buffers set to 2400M and the second time with shared_buffers set to 18000M \n\ndarkblue_b work_mem was unchanged at 640M and.. the run times were within seconds of each other.. no improvement, no penalty \n\ndarkblue_b I have been wondering about this for the last two years \n\ndavidfetter_vmw darkblue_b, have you gone over any of this on -performance or -hackers? darkblue_b no - though I think I should start a blog .. I have a couple of things like this now darkblue_b good story though eh ? \n\n davidfetter_vmw darkblue_b, um, it's a story that hasn't really gotten started until you've gotten some feedback from -performance darkblue_b ok - true... \n\ndarkblue_b pg 9.1 PostGIS 1.5.3 Ubuntu Server Oneiric 64bit Dual Xeons \none Western Digital black label for pg_default; one 3-disk RAID 5 for the database tablespace\n\n==\nBrian Hamlin\nGeoCal\nOSGeo California Chapter\n415-717-4462 cell\n\n",
"msg_date": "Wed, 14 Mar 2012 21:29:24 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Shared memory for large PostGIS operations"
},
{
"msg_contents": "\n\nSo let me clean that up for you:\n\n > On 3/14/2012 11:29 PM, [email protected] wrote:\n\nHello list, my name is Brian Hamlin, but I prefer to go by darkblue, its \nmysterious and dangerous!\n\nI run PG 9.1, PostGIS 1.5.3, Linux 64 on Dual Xeons, OS on a single \ndrive, and db is on 3-disk raid 5. I'm the only user.\n\nwork_mem = 640M\n\nI do these huge monolithic postGIS queries on an otherwise idle linux \nmachine. python driven analysis.. 15 steps.. some, unusual for me, are \nmultiple queries running at once on the same data ... and others are \njust one labor intensive thing then the next (one result table is 1.8M \nrows for 745M on disk, others are smaller)\n\nI tried shared_buffers at both 2400M and 18000M, and it took 4.5 hours \nboth times. I dont know if I am CPU bound or IO bound, but since giving \nPG more ram didnt help much, I'll assume I'm CPU bound. I heard of this \nprogram called vmstat that I'll read up on and post some results for.\n\nI don't know how much memory my box has, and I've never run explain \nanalyze, but I'll try it out and post some. I just learned about \nhttp://explain.depesz.com/ and figure it might help me.\n\nThis is the best list ever! Thanks all! (especially that poetic Dave \nFetter, and that somewhat mean, but helpful, Andy Colson)\n\nShout outs to my friends Garlynn, Nick and Rush (best band ever!). \nParty, my house, next week!\n\n> ==\n> (Virtually) Brian Hamlin\n> GeoCal\n> OSGeo California Chapter\n> 415-717-4462 cell\n\n\n-Andy\n",
"msg_date": "Fri, 16 Mar 2012 08:55:39 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory for large PostGIS operations"
},
{
"msg_contents": "Andy Colson <[email protected]> wrote:\n \n> I tried shared_buffers at both 2400M and 18000M, and it took 4.5\n> hours both times. I dont know if I am CPU bound or IO bound, but\n> since giving PG more ram didnt help much, I'll assume I'm CPU\n> bound.\n \nAll else being the same, adjusting shared_buffers affects how much\nof your cache is managed by PostgreSQL and how much of your cache is\nmanaged by the OS; it doesn't exactly change how much you have\ncached or necessarily affect disk waits. (There's a lot more that\ncan be said about the fine points of this, but you don't seem to\nhave sorted out the big picture yet.)\n \n> I heard of this program called vmstat that I'll read up on and\n> post some results for.\n \nThat's a good way to get a handle on whether your bottleneck is\ncurrently CPU or disk access.\n \n> I don't know how much memory my box has\n \nThat's pretty basic information when it comes to tuning. What does\n`free -m` show? (Copy/paste is a good thing.)\n \n> and I've never run explain analyze\n \nIf you're looking to make things faster (a fact not yet exactly in\nevidence), you might want to start with the query which runs the\nlongest, or perhaps the one which most surprises you with its run\ntime, and get the EXPLAIN ANALYZE output for that query. There is\nother information you should include; this page should help:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> I just learned about http://explain.depesz.com/ and figure it\n> might help me.\n \nIt is a nice way to present EXPLAIN ANALYZE output from complex\nqueries.\n \n-Kevin\n",
"msg_date": "Fri, 16 Mar 2012 09:17:31 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory for large PostGIS operations"
},
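One way to put numbers behind the point about which cache is doing the work is to watch PostgreSQL's own buffer counters across a run; they cover only the shared_buffers side, not the OS page cache, which is exactly the distinction being made. A minimal sketch using the standard statistics views (take a snapshot before and after the run and difference them, or call pg_stat_reset() between runs):

    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
      FROM pg_stat_database;

If the hit ratio barely changes between the 2400M and 18000M runs, that is consistent with the workload already being served from some combination of the two caches.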
{
"msg_contents": "Hi Kevin, List, others...\n\nOn Mar 16, 2012, at 7:17 AM, Kevin Grittner wrote:\n\n> Andy Colson <[email protected]> wrote:\n>\n>> I tried shared_buffers at both 2400M and 18000M, and it took 4.5\n>> hours both times. ... (weak attempts at humor omitted) ....\n>\n> All else being the same, adjusting shared_buffers affects how much\n> of your cache is managed by PostgreSQL and how much of your cache is\n> managed by the OS; it doesn't exactly change how much you have\n> cached or necessarily affect disk waits. (There's a lot more that\n> can be said about the fine points of this, but you don't seem to\n> have sorted out the big picture yet.)\n\n Linux caching is aggressive already.. so I think this example \npoints out that\nPostgres caching is not contributing here.. thats why I posted this \nshort\nexample to this list.. I thought ti was a useful data point.. that \nit might be\nuseful to others... and to the PostgreSQL project devs...\n\n>\n>> I heard of this program called vmstat that I'll read up on and\n>> post some results for. -----ignore- I dont take advice with \n>> vinegar well...\n>\n> That's a good way to get a handle on whether your bottleneck is\n> currently CPU or disk access.\n>\n>> (attempted insults omitted)\n>\n> If you're looking to make things faster (a fact not yet exactly in\n> evidence), you might want to start with the query which runs the\n> longest, or perhaps the one which most surprises you with its run\n> time, and get the EXPLAIN ANALYZE output for that query. There is\n> other information you should include; this page should help:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n\n some of the queries have been gone over fairly well, other not..\nIts a complex sequence and we are in production mode here,\nso I dont get a chance to do everything I might do with regard to\none particular query...\n\n\n>\n>> I just learned about http://explain.depesz.com/ and figure it\n>> might help me.\n>\n> It is a nice way to present EXPLAIN ANALYZE output from complex\n> queries.\n\n\n explain.depesz.com definitely a good reference, thank you for that..\n\n\n==\nBrian Hamlin\nGeoCal\nOSGeo California Chapter\n415-717-4462 cell\n\n\n\n",
"msg_date": "Fri, 16 Mar 2012 14:00:11 -0700",
"msg_from": "Brian Hamlin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory for large PostGIS operations"
},
{
"msg_contents": "Brian Hamlin <[email protected]> wrote:\n> On Mar 16, 2012, at 7:17 AM, Kevin Grittner wrote:\n>> Andy Colson <[email protected]> wrote:\n>>\n>>> I tried shared_buffers at both 2400M and 18000M, and it took 4.5\n>>> hours both times. ... (weak attempts at humor omitted) ....\n \nAh, I didn't pick up on the attempts at humor; perhaps that's why\nyou mistook something I said as an attempt at an insult. We get\nposts here from people at all different levels of experience, and\nmany people are grateful for pointers on what various utilities can\ndo for them or how best to formulate a post so they can get help\nwhen they need it. Attempts to help don't constitute insults, even\nif the need is feigned.\n \n>> All else being the same, adjusting shared_buffers affects how\n>> much of your cache is managed by PostgreSQL and how much of your\n>> cache is managed by the OS; it doesn't exactly change how much\n>> you have cached or necessarily affect disk waits.\n \n> Linux caching is aggressive already.. so I think this example \n> points out that Postgres caching is not contributing here.. thats\n> why I posted this short example to this list.. I thought ti was a\n> useful data point.. that it might be useful to others... and to\n> the PostgreSQL project devs...\n \nYeah, guidelines for shared_buffers in the docs are vague because\nthe best setting varies so much with the workload. While the docs\nhint at setting it at 25% of the computer's RAM, most benchmarks\nposted on this list have found throughput to peak at around 8GB to\n10GB on system where 25% would be more than that. (Far less on\nWindows, as the docs mention.)\n \nThere can be a point well before that where there are latency\nspikes. In our shop we have a multi-TB database backing a web site,\nand to prevent these latency spikes we keep shared_buffers down to\n2GB even though the system has 128GB RAM. Forcing dirty pages out\nto the OS cache helps them to be written in a more timely manner by\ncode which knows something about the hardware and what order of\nwrites will be most efficient. PostgreSQL has, as a matter of a\ndesign choice, decided to leave a lot to the OS caching, file\nsystems, and device drivers, and a key part of tuning is to discover\nwhat balance of that versus the DBMS caching performs best for your\nworkload.\n \n> some of the queries have been gone over fairly well, other not..\n> Its a complex sequence and we are in production mode here,\n> so I dont get a chance to do everything I might do with regard to\n> one particular query...\n \nYou may want to take a look at auto_explain:\n \nhttp://www.postgresql.org/docs/current/interactive/auto-explain.html\n \nSince you're already in production it may be hard to test the\nperformance of your disk system, but it's a pretty safe bet that if\nyou are at all disk-bound you would benefit greatly from adding one\nmore drive and converting your 3 drive RAID 5 to a 4 drive RAID 10,\npreferably with a RAID controller with BBU cache configured for\nwrite-back.\n \n-Kevin\n",
"msg_date": "Fri, 16 Mar 2012 17:30:06 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory for large PostGIS operations"
},
{
"msg_contents": "On 03/16/2012 05:30 PM, Kevin Grittner wrote:\n> Brian Hamlin<[email protected]> wrote:\n>> On Mar 16, 2012, at 7:17 AM, Kevin Grittner wrote:\n>>> Andy Colson<[email protected]> wrote:\n>>>\n>>>> I tried shared_buffers at both 2400M and 18000M, and it took 4.5\n>>>> hours both times. ... (weak attempts at humor omitted) ....\n>\n> Ah, I didn't pick up on the attempts at humor; perhaps that's why\n> you mistook something I said as an attempt at an insult.\n\nIt wasn't you Kevin, it was me that insulted him. (Although I was trying to be funny, and not mean).\n\nSorry again Brian.\n\n-Andy\n",
"msg_date": "Sat, 17 Mar 2012 08:20:54 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory for large PostGIS operations"
}
] |
[
{
"msg_contents": "I have a table with serveral million records. they are divided into \nabout one hundred catagory(column cid). I created index includes the cid \nas the first column. I had a problem with some cids they only have few \nrecords comparing with other cids. Some of them only have serveral \nthousand rows. Some queries are not using index on the cids. I got the \nexplain for the queries.\nNote:\narticle_others_cid_time_style_idx is the index contains cid as the first \ncolumn\narticle_others_pkey is the primary key on an auto incremented column aid.\n\n# select count(*) from article_others;\n count\n---------\n 6888459\n(1 row)\n\n# select count(*) from article_others where cid=74;\n count\n-------\n 4199\n(1 row)\n\n1. # explain select count(*) from article_others where cid=74;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=32941.95..32941.96 rows=1 width=0)\n -> Index Scan using article_others_cid_time_style_idx on \narticle_others (cost=0.00..32909.34 rows=13047 width=0)\n Index Cond: (cid = 74)\n(3 rows)\n\n2. # explain select aid from article_others where cid=74 limit 10;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..25.22 rows=10 width=8)\n -> Index Scan using article_others_cid_time_style_idx on \narticle_others (cost=0.00..32909.34 rows=13047 width=8)\n Index Cond: (cid = 74)\n(3 rows)\n\n3. # explain select aid from article_others where cid=74 order by aid \ndesc limit 10;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1034.00 rows=10 width=8)\n -> Index Scan Backward using article_others_pkey on article_others \n(cost=0.00..1349056.65 rows=13047 width=8)\n Filter: (cid = 74)\n(3 rows)\n\n4. # explain select aid from article_others where cid=74 order by aid \ndesc limit 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..103.40 rows=1 width=8)\n -> Index Scan Backward using article_others_pkey on article_others \n(cost=0.00..1349060.65 rows=13047 width=8)\n Filter: (cid = 74)\n(3 rows)\n\n5. # explain select max(aid) from article_others where cid=74;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Result (cost=104.70..104.71 rows=1 width=0)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..104.70 rows=1 width=8)\n -> Index Scan Backward using article_others_pkey on \narticle_others (cost=0.00..1365988.55 rows=13047 width=8)\n Index Cond: (aid IS NOT NULL)\n Filter: (cid = 74)\n(6 rows)\n\nNow the query 3-5 using article_others_pkey are quite slow. The rows for \ncid 74 are very old and seldom get updated. I think pg needs to scan \nquite a lot on article_others_pkey before it gets the rows for cid 74. \nThe same query for other cids with new and majority of rows runs very \nfast. 
for example:\n# explain select max(aid) from article_others where cid=258;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Result (cost=1.54..1.55 rows=1 width=0)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..1.54 rows=1 width=8)\n -> Index Scan Backward using article_others_pkey on \narticle_others (cost=0.00..1366260.55 rows=889520 width=8)\n Index Cond: (aid IS NOT NULL)\n Filter: (cid = 258)\n\nSo I think if pg chooses to use index article_others_cid_time_style_idx \nthe performance would be much better. or any other solution I can take \nto improve the query performance for those cids like 74?\nAnother question, why the plan shows rows=13047 for cid=74 while \nactually it only has 4199 rows? There is almost no data changes for cid \n74 and I just vacuum/analyzed the table this morning.\n",
"msg_date": "Thu, 15 Mar 2012 16:32:09 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "index choosing problem"
},
{
"msg_contents": "2012/3/15 Rural Hunter <[email protected]>:\n> Now the query 3-5 using article_others_pkey are quite slow. The rows for cid\n> 74 are very old and seldom get updated. I think pg needs to scan quite a lot\n> on article_others_pkey before it gets the rows for cid 74. The same query\n> for other cids with new and majority of rows runs very fast.\n\nThis is because the PostgreSQL cost model doesn't know about the\ncorrelation between aid and cid. In absence of information it assumes\nthat it will find a row with cid=74 about every 68 rows\n(889520/13047).\n\nOne option to fix this case is to use OFFSET 0 as an optimization barrier:\nSELECT max(aid) FROM\n (SELECT aid FROM article_others WHERE cid=74 OFFSET 0) AS x;\n\nThat has the unfortunate effect of performing badly for cid's that are\nextremely popular. That may or may not be acceptable in your case.\n\nTo fix this properly the query optimizer needs to know the relationship between\naid and cid and needs to know how to apply that to estimating the cost\nof index scans. A prerequisite for implementing this is to have\nmulti-column statistics. To do the estimation, the current linear cost\nmodel needs to be changed to something that can express a non-linear\nrelationship between tuples returned and cost, e.g. a piece-wise\nlinear model. The stats collection part is actually feasible, in fact\nI'm currently working on a patch for that. As for the estimation\nimprovement, I have an idea how it might work, but I'm not really sure\nyet if the performance hit for query planning would be acceptable.\n\n> Another question, why the plan shows rows=13047 for cid=74 while actually it\n> only has 4199 rows? There is almost no data changes for cid 74 and I just\n> vacuum/analyzed the table this morning.\n\nMight just be an artifact of random sampling. Try raising your stats\ntarget and re-analyzing to confirm.\n\nAll the best,\nAnts Aasma\n",
"msg_date": "Thu, 15 Mar 2012 13:44:31 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index choosing problem"
}
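For completeness, the last suggestion in the reply above (raise the statistics target on the skewed column and re-analyze) looks like this; the table and column names are the ones from the thread, and the target of 1000 is just an example value:

    ALTER TABLE article_others ALTER COLUMN cid SET STATISTICS 1000;
    ANALYZE article_others;

A larger sample makes the rows=13047 vs. 4199 discrepancy less likely, though it does not by itself teach the planner about the aid/cid correlation Ants describes.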
] |
[
{
"msg_contents": "Hi folks;\n\nI am trying to continue profiling which in turn feeds query and index\ntuning changes for the AKCS-WWW forum software, and appear to have no\ngood way to do what I need to do -- or I've missed something obvious.\n\nThe application uses the libpq interface from \"C\" to talk to Postgres\nwhich contains all the back end data. Since this is a forum application\nit is very read-heavy (other than accounting and of course user posting\nfunctionality), and is template-driven. All of the table lookup\nfunctions that come from the display templates are compartmentalized in\none function in the base code.\n\nWhat I want to be able to do is to determine the resource usage by\nPostgres for each of these calls.\n\nI can do this by adding a call into the function just before the \"real\"\ncall to PQexec() that prepends \"explain analyze\" to the call, makes a\npreamble call to PQexec() then grabs the last tuple returned which is\nthe total execution time (with some text), parse that and there is the\ntotal time anyway. But I see no way to get stats on I/O (e.g. Postgres\nbuffer hits and misses, calls to the I/O operating system level APIs, etc.)\n\nBut while I can get the numbers this way it comes at the expense of\ndoubling the Postgres processing. There does not appear, however, to be\nany exposition of the processing time requirements for actual (as\nopposed to \"modeled\" via explain analyze) execution of queries -- at\nleast not via the libpq interface.\n\nAm I missing something here -- is there a way to get resource\nconsumption from actual queries as they're run? What I'm doing right\nnow is the above, with a configuration switch that has a minimum\nreportable execution time and then logging the returns that exceed that\ntime, logging the queries that have the above-threshold runtimes for\nanalysis and attempted optimization. This works but obviously is\nsomething one only does for profiling as it doubles database load and is\nundesirable in ordinary operation. What I'd like to be able to do is\nhave the code track performance all the time and raise alerts when it\nsees \"outliers\" giving me a continually-improving set of targets for\nreduction of resource consumption (up until I reach the point where I\ndon't seem to be able to make it any faster of course :-))\n\nThanks in advance!\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC\n\n\n\n\n\n\n Hi folks;\n\n I am trying to continue profiling which in turn feeds query and\n index tuning changes for the AKCS-WWW forum software, and appear to\n have no good way to do what I need to do -- or I've missed something\n obvious.\n\n The application uses the libpq interface from \"C\" to talk to\n Postgres which contains all the back end data. Since this is a\n forum application it is very read-heavy (other than accounting and\n of course user posting functionality), and is template-driven. All\n of the table lookup functions that come from the display templates\n are compartmentalized in one function in the base code.\n\n What I want to be able to do is to determine the resource usage by\n Postgres for each of these calls.\n\n I can do this by adding a call into the function just before the\n \"real\" call to PQexec() that prepends \"explain analyze\" to the call,\n makes a preamble call to PQexec() then grabs the last tuple returned\n which is the total execution time (with some text), parse that and\n there is the total time anyway. But I see no way to get stats on\n I/O (e.g. 
Postgres buffer hits and misses, calls to the I/O\n operating system level APIs, etc.)\n\n But while I can get the numbers this way it comes at the expense of\n doubling the Postgres processing. There does not appear, however,\n to be any exposition of the processing time requirements for actual\n (as opposed to \"modeled\" via explain analyze) execution of queries\n -- at least not via the libpq interface.\n\n Am I missing something here -- is there a way to get resource\n consumption from actual queries as they're run? What I'm doing\n right now is the above, with a configuration switch that has a\n minimum reportable execution time and then logging the returns that\n exceed that time, logging the queries that have the above-threshold\n runtimes for analysis and attempted optimization. This works but\n obviously is something one only does for profiling as it doubles\n database load and is undesirable in ordinary operation. What I'd\n like to be able to do is have the code track performance all the\n time and raise alerts when it sees \"outliers\" giving me a\n continually-improving set of targets for reduction of resource\n consumption (up until I reach the point where I don't seem to be\n able to make it any faster of course :-))\n\n Thanks in advance!\n\n-- \n -- Karl Denninger\nThe Market Ticker ®\n Cuda Systems LLC",
"msg_date": "Fri, 16 Mar 2012 09:31:57 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Obtaining resource usage statistics from execution? (v 9.1)"
},
{
"msg_contents": "On Fri, Mar 16, 2012 at 4:31 PM, Karl Denninger <[email protected]> wrote:\n> What I'd\n> like to be able to do is have the code track performance all the time and\n> raise alerts when it sees \"outliers\" giving me a continually-improving set\n> of targets for reduction of resource consumption (up until I reach the point\n> where I don't seem to be able to make it any faster of course :-))\n\nSounds almost exactly like what the auto_explain contrib module is\ndesigned to do:\nhttp://www.postgresql.org/docs/9.1/static/auto-explain.html\n\nIt can run with reasonably low overhead if your system has fast\ntiming. You can check the timing performance of your system with the\ntool attached here:\nhttp://archives.postgresql.org/message-id/4F15B930.50108%402ndQuadrant.com\n\nAnything under 200ns should be ok.\n\nCheers,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Fri, 16 Mar 2012 16:48:25 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Obtaining resource usage statistics from execution? (v 9.1)"
},
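For anyone trying the auto_explain suggestion, the module can be loaded in a single (superuser) session for experimentation before being added to shared_preload_libraries in postgresql.conf. A minimal sketch; the threshold values are illustrative only:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '250ms';  -- only log plans slower than this
    SET auto_explain.log_analyze = on;            -- include actual row counts and timings (adds timing overhead)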
{
"msg_contents": "Karl Denninger <[email protected]> writes:\n> I am trying to continue profiling which in turn feeds query and index\n> tuning changes for the AKCS-WWW forum software, and appear to have no\n> good way to do what I need to do -- or I've missed something obvious.\n\nDo you really want the application involved in this? Usually, when\npeople try to profile a workload, they want everything to just get\nlogged on the server side. There are some contrib modules to help\nwith that approach --- see auto_explain and pg_stat_statements.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Mar 2012 10:53:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Obtaining resource usage statistics from execution? (v 9.1) "
},
{
"msg_contents": "Hi,\n\nfirst of all, which PostgreSQL version are you using, what platform is it\nrunning on? What level of control do you have over the database (are you\njust a user or can you modify the postgresql.conf file)?\n\nOn 16 Březen 2012, 15:31, Karl Denninger wrote:\n> Hi folks;\n>\n> I am trying to continue profiling which in turn feeds query and index\n> tuning changes for the AKCS-WWW forum software, and appear to have no\n> good way to do what I need to do -- or I've missed something obvious.\n\nWhy do you need to do that? Have you checked log_duration /\nlog_min_duration_statement configuration options? What about auto_explain\nand maybe pg_stat_statements?\n\nThe aggregated data (e.g. provided by pg_stat_statements or pgfounie) are\nIMHO much more useful than having to deal with data collected for each\nquery separately.\n\n> The application uses the libpq interface from \"C\" to talk to Postgres\n> which contains all the back end data. Since this is a forum application\n> it is very read-heavy (other than accounting and of course user posting\n> functionality), and is template-driven. All of the table lookup\n> functions that come from the display templates are compartmentalized in\n> one function in the base code.\n>\n> What I want to be able to do is to determine the resource usage by\n> Postgres for each of these calls.\n>\n> I can do this by adding a call into the function just before the \"real\"\n> call to PQexec() that prepends \"explain analyze\" to the call, makes a\n> preamble call to PQexec() then grabs the last tuple returned which is\n> the total execution time (with some text), parse that and there is the\n> total time anyway. But I see no way to get stats on I/O (e.g. Postgres\n> buffer hits and misses, calls to the I/O operating system level APIs,\n> etc.)\n>\n> But while I can get the numbers this way it comes at the expense of\n> doubling the Postgres processing. There does not appear, however, to be\n> any exposition of the processing time requirements for actual (as\n> opposed to \"modeled\" via explain analyze) execution of queries -- at\n> least not via the libpq interface.\n\nYup, that's the problem of EXPLAIN ANALYZE. IMHO it's a 'no go' in this\ncase I guess. Not only you have to run the query twice, but it may also\nsignificantly influence the actual runtime due to gettimeofday overhead\netc.\n\nYou can use auto_explain to eliminate the need to run the query twice, but\nthe overhead may still be a significant drag, not reflecting the actual\nperformance (and thus not useful to perform reasonable profiling).\n\n> Am I missing something here -- is there a way to get resource\n> consumption from actual queries as they're run? What I'm doing right\n> now is the above, with a configuration switch that has a minimum\n> reportable execution time and then logging the returns that exceed that\n> time, logging the queries that have the above-threshold runtimes for\n> analysis and attempted optimization. This works but obviously is\n> something one only does for profiling as it doubles database load and is\n> undesirable in ordinary operation. 
What I'd like to be able to do is\n> have the code track performance all the time and raise alerts when it\n> sees \"outliers\" giving me a continually-improving set of targets for\n> reduction of resource consumption (up until I reach the point where I\n> don't seem to be able to make it any faster of course :-))\n\nIf all you want is outliers, then set log_min_duration_statement and use\npgfounie to process the logs. That's very simple and very effective way to\ndeal with them.\n\nIf you really need the resource consumption stats, you may write a simple\nSRF that calls getrusage and returns the data as a row so that you'll be\nable to do something like\n\n select * from pg_rusage()\n\nThis seems like a neat idea, and writing an extension that should be\nfairly simple. Still, it will be a Linux-only (because getrusage is) and\nI'm not quite sure the collected data are very useful.\n\nTomas\n\n\n\n",
"msg_date": "Fri, 16 Mar 2012 16:08:25 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Obtaining resource usage statistics from execution? (v\n 9.1)"
},
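The simplest form of the "log the outliers and post-process" approach Tomas describes needs no application changes at all, and it can be scoped to just the forum's database. A minimal sketch, run as a superuser, with a hypothetical database name:

    ALTER DATABASE akcs_forum SET log_min_duration_statement = 250;  -- milliseconds; -1 disables, 0 logs everything

New connections to that database will then log every statement slower than 250 ms, ready for aggregation by a log analyzer.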
{
"msg_contents": "On 3/16/2012 9:53 AM, Tom Lane wrote:\n> Karl Denninger <[email protected]> writes:\n>> I am trying to continue profiling which in turn feeds query and index\n>> tuning changes for the AKCS-WWW forum software, and appear to have no\n>> good way to do what I need to do -- or I've missed something obvious.\n> Do you really want the application involved in this? Usually, when\n> people try to profile a workload, they want everything to just get\n> logged on the server side. There are some contrib modules to help\n> with that approach --- see auto_explain and pg_stat_statements.\n>\n> \t\t\tregards, tom lane\nWell, I do in some cases, yes. Specifically I'd like to flag instances\n\"on the fly\" when total execution time in the calls exceed some amount. \nI can (of course) track the time in the application but what I'd like to\nbe able to do is track the time that a particular template uses at the\nmillisecond level in database calls.\n\npg_stat_statements is generalized across the server which is useful for\nprofiling on an aggregate basis but not so much among individual uses. \nauto_explain doesn't tell me what upstream call generated the log; I\nhave to go back and figure that out by hand and it's in the server logs\nwhich isn't really where I want it.\n\nOne of the things that doing this in the application would allow is that\nthe queries in question are all generated by a template file that is\nbasically HTML with carefully-crafted commentish-appearing things that\nthe parser uses; for instance:\n\n<!--&number--> (push previously-obtained table value for \"number\" in\nthe current row into the selector)\n<!--*post--> (look up the post using the selector that was just pushed)\n<!--@message--> (replace the comment text with the field \"message\" from\nthe returned lookup)\n<!--^--> (loop back / recursion level closure)\n<!--$--> (end of loop / recursion level)\n\nThat's a simple example; in the real template for each of those messages\nthere are other things pushed off and other tables referenced, such as\nthe user's statistics, their avatar, recursion levels can be tagged in\nclosing statements, etc. I'd like to be able to, in the code that\nexecutes that lookup (when \"post\" is parsed) grab the time required to\nexecute that SQL statement and stash it, possibly into another table\nwith the tag (\"post\") so I know what query class generated it, and then\nat the end of the template itself pick up the aggregate for the entire\ntemplate's database processing. If I can pull the time for each query\ngenerated by the template in the code adding it all up is trivial for\nthe template as a whole.\n\nThis would allow me to profile the total execution time for each of the\ntemplates in the system at both the template level and the individual\nfunction call level out of the template. If I then stash that data\nsomewhere (e.g. into a different table) I can then look for the\nworst-performing queries and templates and focus my attention on the\nones that get run the most so my \"bang for the buck in optimization\ntime\" is improved.\n\nMost of these queries are very standard but not all. 
One of the deviant\nexamples of the latter is the \"search\" function exposed to the user that\ncan craft a rather-complex query in response to what the user fills in\non the form that is submitted targeting both simple field lookups and\nGiST full-text indices.\n\nDoing this the hard way has returned huge dividends as I found a query\nthat was previously running in the ~2,000 ms range and is executed very\nfrequently that responded extraordinarily well to a compound index\naddition on one of the target tables. That in turn dropped its\nexecution time to double-digit milliseconds (!!) for most cases and the\nuser experience improvement from that change was massive. I don't\nexpect to find many more \"wins\" of that caliber but I'll take as many as\nI can get :-)\n\nThe argument for this in the application comes down to the granularity\nand specificity that I gain from knowing exactly what function call in\nthe template generated the timing being reported.\n\n(If it matters, Postgres 9.1.3 on FreeBSD 8.x)\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC\n\n\n\n\n\n\n On 3/16/2012 9:53 AM, Tom Lane wrote:\n \nKarl Denninger <[email protected]> writes:\n\n\nI am trying to continue profiling which in turn feeds query and index\ntuning changes for the AKCS-WWW forum software, and appear to have no\ngood way to do what I need to do -- or I've missed something obvious.\n\n\n\nDo you really want the application involved in this? Usually, when\npeople try to profile a workload, they want everything to just get\nlogged on the server side. There are some contrib modules to help\nwith that approach --- see auto_explain and pg_stat_statements.\n\n\t\t\tregards, tom lane\n\n Well, I do in some cases, yes. Specifically I'd like to flag\n instances \"on the fly\" when total execution time in the calls exceed\n some amount. I can (of course) track the time in the application\n but what I'd like to be able to do is track the time that a\n particular template uses at the millisecond level in database calls.\n\n pg_stat_statements is generalized across the server which is useful\n for profiling on an aggregate basis but not so much among individual\n uses. auto_explain doesn't tell me what upstream call generated the\n log; I have to go back and figure that out by hand and it's in the\n server logs which isn't really where I want it.\n\n One of the things that doing this in the application would allow is\n that the queries in question are all generated by a template file\n that is basically HTML with carefully-crafted commentish-appearing\n things that the parser uses; for instance:\n\n <!--&number--> (push previously-obtained table value for\n \"number\" in the current row into the selector)\n <!--*post--> (look up the post using the selector that was\n just pushed)\n <!--@message--> (replace the comment text with the field\n \"message\" from the returned lookup)\n <!--^--> (loop back / recursion level closure)\n <!--$--> (end of loop / recursion level)\n\n That's a simple example; in the real template for each of those\n messages there are other things pushed off and other tables\n referenced, such as the user's statistics, their avatar, recursion\n levels can be tagged in closing statements, etc. 
I'd like to be\n able to, in the code that executes that lookup (when \"post\" is\n parsed) grab the time required to execute that SQL statement and\n stash it, possibly into another table with the tag (\"post\") so I\n know what query class generated it, and then at the end of the\n template itself pick up the aggregate for the entire template's\n database processing. If I can pull the time for each query\n generated by the template in the code adding it all up is trivial\n for the template as a whole.\n\n This would allow me to profile the total execution time for each of\n the templates in the system at both the template level and the\n individual function call level out of the template. If I then stash\n that data somewhere (e.g. into a different table) I can then look\n for the worst-performing queries and templates and focus my\n attention on the ones that get run the most so my \"bang for the buck\n in optimization time\" is improved.\n\n Most of these queries are very standard but not all. One of the\n deviant examples of the latter is the \"search\" function exposed to\n the user that can craft a rather-complex query in response to what\n the user fills in on the form that is submitted targeting both\n simple field lookups and GiST full-text indices.\n\n Doing this the hard way has returned huge dividends as I found a\n query that was previously running in the ~2,000 ms range and is\n executed very frequently that responded extraordinarily well to a\n compound index addition on one of the target tables. That in turn\n dropped its execution time to double-digit milliseconds (!!) for\n most cases and the user experience improvement from that change was\n massive. I don't expect to find many more \"wins\" of that caliber\n but I'll take as many as I can get :-)\n\n The argument for this in the application comes down to the\n granularity and specificity that I gain from knowing exactly what\n function call in the template generated the timing being reported.\n\n (If it matters, Postgres 9.1.3 on FreeBSD 8.x)\n\n-- \n -- Karl Denninger\nThe Market Ticker ®\n Cuda Systems LLC",
"msg_date": "Fri, 16 Mar 2012 10:38:32 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Obtaining resource usage statistics from execution?\n (v 9.1)"
}
] |
[
{
"msg_contents": "Andy Colson wrote:\nOn 03/16/2012 05:30 PM, Kevin Grittner wrote:\n \n>> Ah, I didn't pick up on the attempts at humor; perhaps that's why\n>> you mistook something I said as an attempt at an insult.\n>\n> It wasn't you Kevin, it was me that insulted him. (Although I was\n> trying to be funny, and not mean).\n \nAdding to the confusion, I think I missed one of the emails/posts. \nOh, well, it sounds like mostly people need to use more smiley-faces,\nsince humor can be so easy to miss in this medium.\n \nBrian, I hope this doesn't put you off from posting -- we try to be\nhelpful here.\n \n-Kevin\n",
"msg_date": "Sat, 17 Mar 2012 11:16:09 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory for large PostGIS operations"
}
] |
[
{
"msg_contents": "Disclaimer: this is a re-post, since I wasn't subscribed the first\ntime I posted. Pardon if this is a duplicate.]\n\nThe following query is abysmally slow (e.g. 7 hours+). The goal is to\nfind \"among the users that follow user #1, who do they also follow?\"\nand to count the latter.\n\n SELECT L2.followed_id as followed_id, COUNT(U1.id) AS count\n FROM users AS U1\nINNER JOIN links AS L1 ON L1.follower_id = U1.id\nINNER JOIN links AS L2 ON L2.follower_id = U1.id\n WHERE U1.type = 'User::Twitter'\n AND L1.followed_id = 1\n GROUP BY L2.followed_id\n\nHere's the rest of the info.\n\n=== versions\n\npsql (9.1.2, server 8.3.14)\n\n=== schema\n\n create_table \"users\", :force => true do |t|\n t.string \"type\"\n end\n\n create_table \"links\", :force => true do |t|\n t.integer \"followed_id\"\n t.integer \"follower_id\"\n end\n\n add_index \"links\", [\"follower_id\"], :name => \"index_links_on_follower_id\"\n add_index \"links\", [\"followed_id\", \"follower_id\"], :name =>\n\"index_links_on_followed_id_and_follower_id\", :unique => true\n add_index \"links\", [\"followed_id\"], :name => \"index_links_on_followed_id\"\n\n=== # of rows\n\nusers: 2,525,539\nlinks: 4,559,390\n\n=== explain\n\n \"HashAggregate (cost=490089.52..490238.78 rows=11941 width=8)\"\n \" -> Hash Join (cost=392604.44..483190.22 rows=1379860 width=8)\"\n \" Hash Cond: (f1.follower_id = u1.id)\"\n \" -> Bitmap Heap Scan on links f1 (cost=14589.95..55597.70\nrows=764540 width=4)\"\n \" Recheck Cond: (followed_id = 1)\"\n \" -> Bitmap Index Scan on index_links_on_followed_id\n(cost=0.00..14398.82 rows=764540 width=0)\"\n \" Index Cond: (followed_id = 1)\"\n \" -> Hash (cost=300976.98..300976.98 rows=4559881 width=12)\"\n \" -> Hash Join (cost=94167.40..300976.98 rows=4559881 width=12)\"\n \" Hash Cond: (f2.follower_id = u1.id)\"\n \" -> Seq Scan on links f2 (cost=0.00..77049.81\nrows=4559881 width=8)\"\n \" -> Hash (cost=53950.20..53950.20 rows=2526496 width=4)\"\n \" -> Seq Scan on users u1\n(cost=0.00..53950.20 rows=2526496 width=4)\"\n \" Filter: ((type)::text =\n'User::Twitter'::text)\"\n\n=== other comments\n\nI'm assuming I'm doing something obviously stupid and that the above\ninfo will be sufficient for anyone skilled in the art to detect the\nproblem. However, if needed I will gladly invest the time\nto create a subset of the data in order to run EXPLAIN ANALYZE. (With\nthe whole dataset, it requires > 7 hours to complete the query. I\ndon't want to go down that path again!)\n\n- rdp\n",
"msg_date": "Sat, 17 Mar 2012 13:56:02 -0700",
"msg_from": "Robert Poor <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow self-join query"
},
{
"msg_contents": "On Sat, Mar 17, 2012 at 2:56 PM, Robert Poor <[email protected]> wrote:\n> Disclaimer: this is a re-post, since I wasn't subscribed the first\n> time I posted. Pardon if this is a duplicate.]\n>\n> The following query is abysmally slow (e.g. 7 hours+). The goal is to\n> find \"among the users that follow user #1, who do they also follow?\"\n> and to count the latter.\n>\n> SELECT L2.followed_id as followed_id, COUNT(U1.id) AS count\n> FROM users AS U1\n> INNER JOIN links AS L1 ON L1.follower_id = U1.id\n> INNER JOIN links AS L2 ON L2.follower_id = U1.id\n> WHERE U1.type = 'User::Twitter'\n> AND L1.followed_id = 1\n> GROUP BY L2.followed_id\n>\n> Here's the rest of the info.\n>\n> === versions\n>\n> psql (9.1.2, server 8.3.14)\n>\n> === schema\n>\n> create_table \"users\", :force => true do |t|\n> t.string \"type\"\n> end\n>\n> create_table \"links\", :force => true do |t|\n> t.integer \"followed_id\"\n> t.integer \"follower_id\"\n> end\n>\n> add_index \"links\", [\"follower_id\"], :name => \"index_links_on_follower_id\"\n> add_index \"links\", [\"followed_id\", \"follower_id\"], :name =>\n> \"index_links_on_followed_id_and_follower_id\", :unique => true\n> add_index \"links\", [\"followed_id\"], :name => \"index_links_on_followed_id\"\n>\n> === # of rows\n>\n> users: 2,525,539\n> links: 4,559,390\n>\n> === explain\n>\n> \"HashAggregate (cost=490089.52..490238.78 rows=11941 width=8)\"\n> \" -> Hash Join (cost=392604.44..483190.22 rows=1379860 width=8)\"\n> \" Hash Cond: (f1.follower_id = u1.id)\"\n> \" -> Bitmap Heap Scan on links f1 (cost=14589.95..55597.70\n> rows=764540 width=4)\"\n> \" Recheck Cond: (followed_id = 1)\"\n> \" -> Bitmap Index Scan on index_links_on_followed_id\n> (cost=0.00..14398.82 rows=764540 width=0)\"\n> \" Index Cond: (followed_id = 1)\"\n> \" -> Hash (cost=300976.98..300976.98 rows=4559881 width=12)\"\n> \" -> Hash Join (cost=94167.40..300976.98 rows=4559881 width=12)\"\n> \" Hash Cond: (f2.follower_id = u1.id)\"\n> \" -> Seq Scan on links f2 (cost=0.00..77049.81\n> rows=4559881 width=8)\"\n> \" -> Hash (cost=53950.20..53950.20 rows=2526496 width=4)\"\n> \" -> Seq Scan on users u1\n> (cost=0.00..53950.20 rows=2526496 width=4)\"\n> \" Filter: ((type)::text =\n> 'User::Twitter'::text)\"\n>\n> === other comments\n>\n> I'm assuming I'm doing something obviously stupid and that the above\n> info will be sufficient for anyone skilled in the art to detect the\n> problem. However, if needed I will gladly invest the time\n> to create a subset of the data in order to run EXPLAIN ANALYZE. (With\n> the whole dataset, it requires > 7 hours to complete the query. I\n> don't want to go down that path again!)\n\nDo you have an index on users.type?\n",
"msg_date": "Sat, 17 Mar 2012 18:57:08 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow self-join query"
},
{
"msg_contents": "On Sat, Mar 17, 2012 at 23:07, Scott Marlowe <[email protected]>wrote:\n\n> Yeah try setting [work_mem] to something absurd like 500MB and see if the\n> plan changes.\n>\n\nSuweet! Sorting now runs in-memory, and that makes a big difference, even\nwhen groveling over 1M records (under 12 seconds rather than 7 hours).\n Results in\n\n http://explain.depesz.com/s/hNO\n\nOn Sat, Mar 17, 2012 at 23:09, Scott Marlowe <[email protected]>\n wrote:\n\n> Also it looks like you're still not using the index on this:\n>\n> Subquery Scan u1 (cost=0.00..313.55 rows=50 width=4) (actual\n> time=0.030..147.136 rows=10000 loops=1)\n>\n> Filter: ((u1.type)::text = 'User::Twitter'::text)\n>\n> Are you sure you're using an indexable condition?\n>\n\nI know that users.type is indexed -- what would keep that from being\nhonored? FWIW, I believe that all user.type fields are set to\nUser::Twitter, but that will change in the future.\n\nOn Sat, Mar 17, 2012 at 23:12, Scott Marlowe <[email protected]>\n wrote:\n\n>\n> Also also this looks like it's the most expensive operation:\n>\n> Seq Scan on followings f2 (cost=0.00..93523.95 rows=5534395 width=8)\n> (actual time=0.041..19365.834 rows=5535964 loops=1)\n>\n> I'm guessing the f2.follower_id isn't very selective?\n>\n\nNot 100% sure what you mean -- f2.follower_id is very sparse (compared to\nf1.follower_id), but that's the point of this particular query. But since\nupping work_mem makes it run really fast, I'm not overly concerned about\nthis one. Thanks for your help!\n\nOne last thought: I could re-cast this as a subquery / query pair, each\nwith a single join. Am I correct in thinking that could make it really\neasy on the planner (especially if the tables were properly indexed)?\n\nThanks again.\n\n- r\n\nOn Sat, Mar 17, 2012 at 23:07, Scott Marlowe <[email protected]> wrote:\nYeah try setting [work_mem] to something absurd like 500MB and see if the plan changes.\nSuweet! Sorting now runs in-memory, and that makes a big difference, even when groveling over 1M records (under 12 seconds rather than 7 hours). Results in http://explain.depesz.com/s/hNO\nOn Sat, Mar 17, 2012 at 23:09, Scott Marlowe <[email protected]> wrote:\nAlso it looks like you're still not using the index on this:Subquery Scan u1 (cost=0.00..313.55 rows=50 width=4) (actualtime=0.030..147.136 rows=10000 loops=1)\n Filter: ((u1.type)::text = 'User::Twitter'::text)Are you sure you're using an indexable condition?I know that users.type is indexed -- what would keep that from being honored? FWIW, I believe that all user.type fields are set to User::Twitter, but that will change in the future.\nOn Sat, Mar 17, 2012 at 23:12, Scott Marlowe <[email protected]> wrote:\nAlso also this looks like it's the most expensive operation:Seq Scan on followings f2 (cost=0.00..93523.95 rows=5534395 width=8)(actual time=0.041..19365.834 rows=5535964 loops=1)\nI'm guessing the f2.follower_id isn't very selective?Not 100% sure what you mean -- f2.follower_id is very sparse (compared to f1.follower_id), but that's the point of this particular query. But since upping work_mem makes it run really fast, I'm not overly concerned about this one. Thanks for your help!\nOne last thought: I could re-cast this as a subquery / query pair, each with a single join. Am I correct in thinking that could make it really easy on the planner (especially if the tables were properly indexed)?\nThanks again.- r",
"msg_date": "Sun, 18 Mar 2012 07:37:24 -0700",
"msg_from": "Robert Poor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow self-join query"
},
{
"msg_contents": "\n\nOn 03/18/2012 10:37 AM, Robert Poor wrote:\n>\n> On Sat, Mar 17, 2012 at 23:09, Scott Marlowe <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Also it looks like you're still not using the index on this:\n>\n> Subquery Scan u1 (cost=0.00..313.55 rows=50 width=4) (actual\n> time=0.030..147.136 rows=10000 loops=1)\n>\n> Filter: ((u1.type)::text = 'User::Twitter'::text)\n>\n> Are you sure you're using an indexable condition?\n>\n>\n> I know that users.type is indexed -- what would keep that from being \n> honored? FWIW, I believe that all user.type fields are set to \n> User::Twitter, but that will change in the future.\n>\n>\n\nIf all the rows have that value, then using the index would be silly. \nPostgres knows from the stats that ANALYZE calculates whether or not \nusing an index is likely to be more efficient, and avoids doing so in \ncases where it isn't.\n\ncheers\n\nandrew\n",
"msg_date": "Sun, 18 Mar 2012 10:51:21 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow self-join query"
},
{
"msg_contents": "On Sun, Mar 18, 2012 at 8:37 AM, Robert Poor <[email protected]> wrote:\n> On Sat, Mar 17, 2012 at 23:12, Scott\n> Marlowe <[email protected]> wrote:\n>\n>>\n>> Also also this looks like it's the most expensive operation:\n>>\n>> Seq Scan on followings f2 (cost=0.00..93523.95 rows=5534395 width=8)\n>> (actual time=0.041..19365.834 rows=5535964 loops=1)\n>>\n>> I'm guessing the f2.follower_id isn't very selective?\n>\n>\n> Not 100% sure what you mean -- f2.follower_id is very sparse (compared to\n> f1.follower_id), but that's the point of this particular query. But since\n> upping work_mem makes it run really fast, I'm not overly concerned about\n> this one. Thanks for your help!\n\nSelectivity is how selective is a single value is likely to be. So if\nf2.follower_id has 5000 entries and there's only 2 values, it's not\nlikely to be very selective, as most of the table will match one of\ntwo values. If it's 1M rows and 1M distinct follower_ids then it's\nselectivity is 1.0 because one value will get just one row ever.\n\n> One last thought: I could re-cast this as a subquery / query pair, each with\n> a single join. Am I correct in thinking that could make it really easy on\n> the planner (especially if the tables were properly indexed)?\n\nWhy are you joining twice to the parent table? If you're trying to\nrecurse without a with clause, then wouldn't you join the last table\nto the one before it?\n",
"msg_date": "Sun, 18 Mar 2012 09:30:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow self-join query"
},
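Scott's point about selectivity can be checked directly from the statistics the planner itself uses. The sketch below is only illustrative — it assumes the table and column names visible in the quoted plan (followings, follower_id, leader_id) and reads the standard pg_stats view:

    -- How many distinct values the planner believes each column has, and
    -- how skewed its most common values are; run after ANALYZE.
    SELECT attname, n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'followings'
       AND attname IN ('follower_id', 'leader_id');

An n_distinct near -1 means nearly every row carries its own value (highly selective); a small positive number together with heavy most_common_freqs means one value matches much of the table, which is exactly the case where a sequential scan tends to win.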
{
"msg_contents": "also also wik\n\nOn Sun, Mar 18, 2012 at 8:37 AM, Robert Poor <[email protected]> wrote:\n> On Sat, Mar 17, 2012 at 23:07, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> Yeah try setting [work_mem] to something absurd like 500MB and see if the\n>> plan changes.\n>\n>\n> Suweet! Sorting now runs in-memory, and that makes a big difference, even\n> when groveling over 1M records (under 12 seconds rather than 7 hours).\n> Results in\n>\n> http://explain.depesz.com/s/hNO\n\nWell that's better. Test various sizes of work_mem to see what you\nneed, then maybe double it. How many simultaneous connections do you\nhave to this db? Different accounts? Different apps? While it might\nbe worth setting for a user or a db, it might or might not be a good\nthing to set it to something like 512MB world-wide. On servers with\nhundreds to thousands of connections, 16 or 32MB is often all you'd\nwant to set it to, since it's additive across all active sorts in the\ndb. A thousand users suddenly sorting 512MB in memory at once, can\ntake down your db server in seconds.\n\nStill seems like it's doing a lot of work.\n",
"msg_date": "Sun, 18 Mar 2012 09:44:46 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow self-join query"
},
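Because work_mem is a per-sort, per-connection limit, it does not have to be raised server-wide to help one reporting query. A hedged sketch of the usual scoping options — the role and database names here are placeholders, not anything from this thread:

    -- For just the current session:
    SET work_mem = '256MB';

    -- For one heavy query, then back to the server default:
    SET work_mem = '512MB';
    -- ... run the big sort/hash query here ...
    RESET work_mem;

    -- Persistently, but only for a particular role or database:
    ALTER ROLE reporting_user SET work_mem = '256MB';
    ALTER DATABASE reports SET work_mem = '256MB';

Scoping it this way keeps the global setting conservative, which matches Scott's warning about many concurrent sorts each being allowed that much memory.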
{
"msg_contents": "On Sun, Mar 18, 2012 at 08:30, Scott Marlowe <[email protected]> wrote:\n> Why are you joining twice to the parent table? If you're trying to\n> recurse without a with clause, then wouldn't you join the last table\n> to the one before it?\n\nI'm FAR from being an SQL expert; there's a significant chance that\nI'm not thinking about this right. My intention for this query\n(slightly renamed since the original post):\n\n SELECT F2.leader_id as leader_id, COUNT(U1.id) AS count\n FROM users AS U1\nINNER JOIN user_associations AS F1 ON F1.follower_id = U1.id\nINNER JOIN user_associations AS F2 ON F2.follower_id = U1.id\n WHERE F1.leader_id = 321\n GROUP BY F2.leader_id\n\nis \"among users that follow leader 321, who are the most widely\nfollowed leaders?\", or more formally, find all the users that are\nfollowers of user 321 (via inner join on F1) Of those users, tally up\ntheir leaders so we know which leaders are most popular. Recall that\nthe user_associations table is simply a user-to-user association:\n\n create_table \"user_associations\", :force => true do |t|\n t.integer \"follower_id\"\n t.integer \"leader_id\"\n end\n\nIs there a better way to do this?\n",
"msg_date": "Sun, 18 Mar 2012 20:57:37 -0700",
"msg_from": "Robert Poor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow self-join query"
},
{
"msg_contents": "On Sun, Mar 18, 2012 at 10:57 PM, Robert Poor <[email protected]> wrote:\n> On Sun, Mar 18, 2012 at 08:30, Scott Marlowe <[email protected]> wrote:\n>> Why are you joining twice to the parent table? If you're trying to\n>> recurse without a with clause, then wouldn't you join the last table\n>> to the one before it?\n>\n> I'm FAR from being an SQL expert; there's a significant chance that\n> I'm not thinking about this right. My intention for this query\n> (slightly renamed since the original post):\n>\n> SELECT F2.leader_id as leader_id, COUNT(U1.id) AS count\n> FROM users AS U1\n> INNER JOIN user_associations AS F1 ON F1.follower_id = U1.id\n> INNER JOIN user_associations AS F2 ON F2.follower_id = U1.id\n> WHERE F1.leader_id = 321\n> GROUP BY F2.leader_id\n>\n> is \"among users that follow leader 321, who are the most widely\n> followed leaders?\", or more formally, find all the users that are\n> followers of user 321 (via inner join on F1) Of those users, tally up\n> their leaders so we know which leaders are most popular. Recall that\n> the user_associations table is simply a user-to-user association:\n>\n> create_table \"user_associations\", :force => true do |t|\n> t.integer \"follower_id\"\n> t.integer \"leader_id\"\n> end\n>\n> Is there a better way to do this?\n\nhm. Something does not seem right with your query. You're joining in\nthe same table twice with the same clause:\n\nINNER JOIN user_associations AS F1 ON F1.follower_id = U1.id\nINNER JOIN user_associations AS F2 ON F2.follower_id = U1.id\n\nI think you meant to cascade through the follower back to the leader.\n(maybe not..it's early monday and the coffee hasn't worked it's way\nthrough the fog yet)...\n\nAlso, do you really need to involve the user table? You're counting\nU1.Id which is equivalent to F2.follower_id.\n\ntry this and see what pops out (i may not have the F1/F2 join quite right):\nSELECT F2.leader_id as leader_id, COUNT(*) AS count\n FROM user_associations AS F1\n INNER JOIN user_associations AS F2 ON F1.follower_id = F2.leader_id\n WHERE F1.leader_id = 321\n GROUP BY 1;\n\nmerlin\n",
"msg_date": "Mon, 19 Mar 2012 08:35:14 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow self-join query"
},
{
"msg_contents": "Robert Poor <[email protected]> wrote:\n> among users that follow leader 321, who are the most widely\n> followed leaders?\", or more formally, find all the users that are\n> followers of user 321 (via inner join on F1) Of those users,\n> tally up their leaders so we know which leaders are most popular.\n \nIt sounds like you're looking for something like this:\n \nSELECT leader_id, count(*) as count\n FROM user_associations x\n WHERE exists\n (\n SELECT * FROM user_associations y\n WHERE y.follower_id = x.follower_id\n AND y.leader_id = 321\n )\n GROUP BY leader_id\n;\n \n> create_table \"user_associations\", :force => true do |t|\n> t.integer \"follower_id\"\n> t.integer \"leader_id\"\n> end\n \nI don't really know what that means. In the future, it would make\nthings easier on those who are trying to help if you either post the\nSQL form or go into psql and type `\\d tablename`.\n \n-Kevin\n",
"msg_date": "Mon, 19 Mar 2012 09:27:06 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow self-join query"
},
{
"msg_contents": "@merlin, @kevin: Thank you both -- I'll try your suggestions as soon\nas I get back to the mothership.\n\n@kevin: I hear you. (I'm deeply steeped in Ruby on Rails and\nfoolishly assume that it's easy to read.) With that in mind:\n\n\\d user_associations\n Table \"public.user_associations\"\n Column | Type |\nModifiers\n-------------+-----------------------------+---------------------------------------------------------\n id | integer | not null default\nnextval('followings_id_seq'::regclass)\n leader_id | integer |\n follower_id | integer |\n created_at | timestamp without time zone | not null\n updated_at | timestamp without time zone | not null\nIndexes:\n \"followings_pkey\" PRIMARY KEY, btree (id)\n \"index_followings_on_leader_id_and_follower_id\" UNIQUE, btree\n(leader_id, follower_id)\n \"index_followings_on_follower_id\" btree (follower_id)\n \"index_followings_on_leader_id\" btree (leader_id)\n",
"msg_date": "Mon, 19 Mar 2012 08:22:17 -0700",
"msg_from": "Robert Poor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow self-join query"
},
{
"msg_contents": "Robert Poor <[email protected]> wrote:\n \n> @kevin: I hear you. (I'm deeply steeped in Ruby on Rails and\n> foolishly assume that it's easy to read.) With that in mind:\n> \n> \\d user_associations\n \n> id | integer | not null default\n> nextval('followings_id_seq'::regclass)\n \nI assume that this is needed to keep RoR happy. Since a row seems\nmeaningless without both leader_id and follower_id, and that is\nunique, the synthetic key here is totally redundant. Performance\n(both modifying the table and querying against it) would be faster\nwithout this column, but I understand that many ORMs (including, as\nI recall, RoR) are more difficult to work with unless you have this.\n \n> leader_id | integer |\n> follower_id | integer |\n \nI'm surprised you didn't declare both of these as NOT NULL.\n \n> created_at | timestamp without time zone | not null\n> updated_at | timestamp without time zone | not null\n \nI can understand tracking when the follow was initiated, but what\nwould you ever update here? (Or is this part of a generalized\noptimistic concurrency control scheme?)\n \n> Indexes:\n> \"followings_pkey\" PRIMARY KEY, btree (id)\n> \"index_followings_on_leader_id_and_follower_id\" UNIQUE, btree\n> (leader_id, follower_id)\n> \"index_followings_on_follower_id\" btree (follower_id)\n> \"index_followings_on_leader_id\" btree (leader_id)\n \nThis last index is of dubious value when you already have an index\nwhich starts with leader_id. It will be of even more dubious\nbenefit when we have index-only scans in 9.2.\n \n-Kevin\n",
"msg_date": "Mon, 19 Mar 2012 11:45:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow self-join query"
}
] |
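Kevin's schema suggestions at the end of that thread translate into straightforward DDL. This is only a sketch, using the table and index names from Robert's \d output; whether dropping the single-column index is safe depends on the rest of the application's queries:

    -- Both sides of the association are always required:
    ALTER TABLE user_associations
        ALTER COLUMN leader_id SET NOT NULL,
        ALTER COLUMN follower_id SET NOT NULL;

    -- The unique (leader_id, follower_id) index already covers lookups by
    -- leader_id, so the single-column index on leader_id is redundant:
    DROP INDEX index_followings_on_leader_id;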
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello\n\nWe are having some performance problems with an application that uses\nprepared statement heavily.\n\nWe have found out that it creates-executes-destroys a prepared statement\n*per* statement it sends to the database (pg-9.1) via DBD-Pg.\n\nA normal log entry for a sql-statement looks e.g. like this:\n- ----------------------------------------------------------\n[2012-03-15 14:49:12.484 CET] LOG: duration: 8.440 ms parse\ndbdpg_p32048_3:\n\nSELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\nWHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\nACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\nCachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n(ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n\n[2012-03-15 14:49:12.485 CET] LOG: duration: 0.087 ms bind\ndbdpg_p32048_3:\n\nSELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\nWHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\nACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\nCachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n(ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n\n\n[2012-03-15 14:49:12.487 CET] LOG: duration: 1.692 ms execute\ndbdpg_p32048_3:\n\nSELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\nWHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\nACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\nCachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n(ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n\n\n[2012-03-15 14:49:12.488 CET] LOG: duration: 0.029 ms statement:\nDEALLOCATE dbdpg_p32048_3\n- ----------------------------------------------------------\n\nAs you can see, the parse+bind+deallocate part uses much more time than\nthe execution part. This is the same for many of the statements send to\nthe database.\n\nMy question is:\n\nIs the parse+bind time reported, a time (not reported) that the planer\nwill use anyway when running a sql-statement in a normal way or the\nparse+bind+deallocate time is *extra* time needed by the prepared statement?\n\nCan we assume that running this application without using prepared\nstatements will do that it runs faster the time used by\nparse+bind+deallocate?\n\nThanks in advance.\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.10 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n\niEYEARECAAYFAk9pubAACgkQBhuKQurGihTYkwCcCFYQRDGWD0yaR+f2FFwKs7gN\nRfgAoJdPrAzUhfBfsXmst7/l7LVLisHy\n=l7Fl\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 21 Mar 2012 12:21:23 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "DBD-Pg prepared statement versus plain execution"
},
{
"msg_contents": "Hi Rafael,\n\nTry disabling the prepare statement processing in DBD::Pg and\ntry the timing runs again.\n\nRegards,\nKen\n\nOn Wed, Mar 21, 2012 at 12:21:23PM +0100, Rafael Martinez wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Hello\n> \n> We are having some performance problems with an application that uses\n> prepared statement heavily.\n> \n> We have found out that it creates-executes-destroys a prepared statement\n> *per* statement it sends to the database (pg-9.1) via DBD-Pg.\n> \n> A normal log entry for a sql-statement looks e.g. like this:\n> - ----------------------------------------------------------\n> [2012-03-15 14:49:12.484 CET] LOG: duration: 8.440 ms parse\n> dbdpg_p32048_3:\n> \n> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\n> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\n> ACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\n> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n> AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n> \n> [2012-03-15 14:49:12.485 CET] LOG: duration: 0.087 ms bind\n> dbdpg_p32048_3:\n> \n> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\n> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\n> ACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\n> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n> AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n> \n> \n> [2012-03-15 14:49:12.487 CET] LOG: duration: 1.692 ms execute\n> dbdpg_p32048_3:\n> \n> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\n> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\n> ACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\n> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n> AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n> \n> \n> [2012-03-15 14:49:12.488 CET] LOG: duration: 0.029 ms statement:\n> DEALLOCATE dbdpg_p32048_3\n> - ----------------------------------------------------------\n> \n> As you can see, the parse+bind+deallocate part uses much more time than\n> the execution part. 
This is the same for many of the statements send to\n> the database.\n> \n> My question is:\n> \n> Is the parse+bind time reported, a time (not reported) that the planer\n> will use anyway when running a sql-statement in a normal way or the\n> parse+bind+deallocate time is *extra* time needed by the prepared statement?\n> \n> Can we assume that running this application without using prepared\n> statements will do that it runs faster the time used by\n> parse+bind+deallocate?\n> \n> Thanks in advance.\n> \n> regards,\n> - -- \n> Rafael Martinez Guerrero\n> Center for Information Technology\n> University of Oslo, Norway\n> \n> PGP Public Key: http://folk.uio.no/rafael/\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.10 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n> \n> iEYEARECAAYFAk9pubAACgkQBhuKQurGihTYkwCcCFYQRDGWD0yaR+f2FFwKs7gN\n> RfgAoJdPrAzUhfBfsXmst7/l7LVLisHy\n> =l7Fl\n> -----END PGP SIGNATURE-----\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n",
"msg_date": "Wed, 21 Mar 2012 07:59:30 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DBD-Pg prepared statement versus plain execution"
},
{
"msg_contents": "On 3/21/2012 6:21 AM, Rafael Martinez wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hello\n>\n> We are having some performance problems with an application that uses\n> prepared statement heavily.\n>\n> We have found out that it creates-executes-destroys a prepared statement\n> *per* statement it sends to the database (pg-9.1) via DBD-Pg.\n>\n> A normal log entry for a sql-statement looks e.g. like this:\n> - ----------------------------------------------------------\n> [2012-03-15 14:49:12.484 CET] LOG: duration: 8.440 ms parse\n> dbdpg_p32048_3:\n>\n> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\n> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\n> ACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\n> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n> AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n>\n> [2012-03-15 14:49:12.485 CET] LOG: duration: 0.087 ms bind\n> dbdpg_p32048_3:\n>\n> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\n> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\n> ACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\n> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n> AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n>\n>\n> [2012-03-15 14:49:12.487 CET] LOG: duration: 1.692 ms execute\n> dbdpg_p32048_3:\n>\n> SELECT DISTINCT ACL.RightName FROM ACL, Principals, CachedGroupMembers\n> WHERE Principals.id = ACL.PrincipalId AND Principals.PrincipalType =\n> 'Group' AND Principals.Disabled = 0 AND CachedGroupMembers.GroupId =\n> ACL.PrincipalId AND CachedGroupMembers.GroupId = Principals.id AND\n> CachedGroupMembers.MemberId = 19312 AND CachedGroupMembers.Disabled = 0\n> AND ((ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1) OR\n> (ACL.ObjectType = 'RT::System' AND ACL.ObjectId = 1))\n>\n>\n> [2012-03-15 14:49:12.488 CET] LOG: duration: 0.029 ms statement:\n> DEALLOCATE dbdpg_p32048_3\n> - ----------------------------------------------------------\n>\n> As you can see, the parse+bind+deallocate part uses much more time than\n> the execution part. This is the same for many of the statements send to\n> the database.\n>\n> My question is:\n>\n> Is the parse+bind time reported, a time (not reported) that the planer\n> will use anyway when running a sql-statement in a normal way or the\n> parse+bind+deallocate time is *extra* time needed by the prepared statement?\n>\n> Can we assume that running this application without using prepared\n> statements will do that it runs faster the time used by\n> parse+bind+deallocate?\n>\n> Thanks in advance.\n>\n> regards,\n> - --\n> Rafael Martinez Guerrero\n> Center for Information Technology\n> University of Oslo, Norway\n>\n> PGP Public Key: http://folk.uio.no/rafael/\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.10 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n>\n> iEYEARECAAYFAk9pubAACgkQBhuKQurGihTYkwCcCFYQRDGWD0yaR+f2FFwKs7gN\n> RfgAoJdPrAzUhfBfsXmst7/l7LVLisHy\n> =l7Fl\n> -----END PGP SIGNATURE-----\n>\n\nWhat does your perl look like? 
This would be wrong:\n\nfor $key (@list)\n{\n my $q = $db->prepare('select a from b where c = $1');\n $q->execute($key);\n $result = $q->fetch;\n}\n\n\nThis would be right:\n\nmy $q = $db->prepare('select a from b where c = $1');\nfor $key (@list)\n{\n $q->execute($key);\n $result = $q->fetch;\n}\n\n-Andy\n",
"msg_date": "Wed, 21 Mar 2012 08:44:29 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DBD-Pg prepared statement versus plain execution"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello\n\nI am wondering why the time reported by \\timing in psql is not the same\nas the time reported by duration in the log file when log_duration or\nlog_min_duration_statement are on?. I can not find any information about\nthis in the documentation.\n\ne.g.\n- -----------------------------------\nver=# SELECT * from version ;\nTime: 0.450 ms\n\n2012-03-20 16:10:16 CET 29119 LOG: duration: 0.313 ms statement:\nSELECT * from version ;\n- -----------------------------------\n\nver=# PREPARE foo AS SELECT * from version ;\nPREPARE\nTime: 0.188 ms\n\nver=# EXECUTE foo;\nTime: 0.434 ms\n\nver=# DEALLOCATE foo;\nDEALLOCATE\nTime: 0.115 ms\n\n2012-03-20 16:12:21 CET 29119 LOG: duration: 0.127 ms statement:\nPREPARE foo AS SELECT * from version ;\n2012-03-20 16:12:37 CET 29119 LOG: duration: 0.303 ms statement:\nEXECUTE foo;\n2012-03-20 16:13:24 CET 29119 LOG: duration: 0.055 ms statement:\nDEALLOCATE foo;\n- -----------------------------------\n\nThanks in advance\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.10 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n\niEYEARECAAYFAk9pvoUACgkQBhuKQurGihRf3gCfRMv5dQnNA8f/gjcPMv6OPrGz\nqHoAn0PPgN1OYMBDQqJes3kRBxH//Y95\n=rsAY\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 21 Mar 2012 12:42:00 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "timing != log duration"
},
{
"msg_contents": "On Wed, Mar 21, 2012 at 13:42, Rafael Martinez <[email protected]> wrote:\n> I am wondering why the time reported by \\timing in psql is not the same\n> as the time reported by duration in the log file when log_duration or\n> log_min_duration_statement are on?\n\npsql's \\timing measures time on the client -- which includes the\nnetwork communication time (time to send the query to the server, and\nreceive back the results)\n\nlog_min_duration_statement measures time on the server, so it doesn't\nknow how long network transmission takes.\n\nRegards,\nMarti\n",
"msg_date": "Wed, 21 Mar 2012 13:53:05 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timing != log duration"
},
{
"msg_contents": "Rafael Martinez <[email protected]> writes:\n> I am wondering why the time reported by \\timing in psql is not the same\n> as the time reported by duration in the log file when log_duration or\n> log_min_duration_statement are on?\n\nNetwork transmission delays, perhaps? psql reports the elapsed time\nseen at the client, which is necessarily going to be somewhat more than\nthe time taken by the server.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 21 Mar 2012 10:11:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timing != log duration "
},
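One practical way to see the two measurements side by side is to keep \timing on and run the statement under EXPLAIN ANALYZE: the runtime it reports is measured inside the server, while psql's Time: line additionally includes the network round trip and client-side processing. A small sketch, reusing the trivial table from Rafael's example:

    \timing on

    -- The total runtime reported below is server-side execution time only;
    -- the "Time:" line printed by psql is everything the client observed.
    EXPLAIN ANALYZE SELECT * FROM version;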
{
"msg_contents": "Rafael Martinez wrote:\n> I am wondering why the time reported by \\timing in psql is not the\nsame\n> as the time reported by duration in the log file when log_duration or\n> log_min_duration_statement are on?. I can not find any information\nabout\n> this in the documentation.\n\n\\timing measures the time on the client, while the log contains the\nduration\non the server side. The client time includes the overhead for\ntransferring\ndata to and from the server.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Wed, 21 Mar 2012 15:48:09 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timing != log duration"
}
] |
[
{
"msg_contents": "Hi,\n\nWe're running a web-based application powered by PostgreSQL. Recently,\nwe've developed a \"new\" separate Java-based standalone (daemon process)\nthreaded program that performs both read and write operations heavily on 2\n\"huge\" tables. One table has got 5.4 million records and other has 1.3\nmillion records. Moreover, more than one read and/or write operations may\nbe executing concurrently.\n\nThe issue that we're facing currently in our Production server is, whenever\nthis \"newly\" developed Java program is started/run, then immediately the\nentire web application becomes very slow in response. At this time, I could\nalso see from the output of \" iostat -tx\" that \"%util\" is even crossing more\nthan 80%. So, what I could infer here based on my knowledge is, this is\ncreating heavy IO traffic because of write operation. Since it was entirely\nslowing down web application, we've temporarily stopped running this\nstandalone application.\n\nMeantime, I also read about \"checkpoint spikes\" could be a reason for slow\ndown in \"write workload\" database. I'm also reading that starting in\nPostgreSQL 8.3, we can get verbose logging of the checkpoint process by\nturning on \"log_checkpoints\".\n\nMy question is, how do I determine whether \"checkpoint\" occurrences are the\nroot cause of this slowdown in my case? We're running PostgreSQL v8.2.22 on\nCentOS5.2 having 35 GB RAM. \"log_checkpoints\" is not available in\nPostgreSQL v8.2.22.\n\nWe want to optimize our Production database to handle both reads and writes,\nany suggestions/advice/guidelines on this are highly appreciated.\n\nSome important \"postgresql.conf\" parameters are:\n# - Memory -\nshared_buffers=1536MB\n\n# - Planner Cost Constants -\neffective_cache_size = 4GB\n\n# - Checkpoints -\ncheckpoint_segments=32\ncheckpoint_timeout=5min\ncheckpoint_warning=270s\n\n# - Background writer -\nbgwriter_delay = 200ms\nbgwriter_lru_percent = 1.0\nbgwriter_lru_maxpages = 5\nbgwriter_all_percent = 0.333\nbgwriter_all_maxpages = 5\n\nRegards,\nGnanam\n\n\n",
"msg_date": "Thu, 22 Mar 2012 12:57:13 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "Check for next messages in your log:\nLOG: checkpoints are occurring too frequently (ZZZ seconds apart)\nHINT: Consider increasing the configuration parameter \"checkpoint_segments\".\n\nBest regards, Vitalii Tymchyshyn\n\n22.03.12 09:27, Gnanakumar написав(ла):\n> Hi,\n>\n> We're running a web-based application powered by PostgreSQL. Recently,\n> we've developed a \"new\" separate Java-based standalone (daemon process)\n> threaded program that performs both read and write operations heavily on 2\n> \"huge\" tables. One table has got 5.4 million records and other has 1.3\n> million records. Moreover, more than one read and/or write operations may\n> be executing concurrently.\n>\n> The issue that we're facing currently in our Production server is, whenever\n> this \"newly\" developed Java program is started/run, then immediately the\n> entire web application becomes very slow in response. At this time, I could\n> also see from the output of \" iostat -tx\" that \"%util\" is even crossing more\n> than 80%. So, what I could infer here based on my knowledge is, this is\n> creating heavy IO traffic because of write operation. Since it was entirely\n> slowing down web application, we've temporarily stopped running this\n> standalone application.\n>\n> Meantime, I also read about \"checkpoint spikes\" could be a reason for slow\n> down in \"write workload\" database. I'm also reading that starting in\n> PostgreSQL 8.3, we can get verbose logging of the checkpoint process by\n> turning on \"log_checkpoints\".\n>\n> My question is, how do I determine whether \"checkpoint\" occurrences are the\n> root cause of this slowdown in my case? We're running PostgreSQL v8.2.22 on\n> CentOS5.2 having 35 GB RAM. \"log_checkpoints\" is not available in\n> PostgreSQL v8.2.22.\n\n\n",
"msg_date": "Thu, 22 Mar 2012 11:42:22 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "On 22 Březen 2012, 10:42, Vitalii Tymchyshyn wrote:\n> Check for next messages in your log:\n> LOG: checkpoints are occurring too frequently (ZZZ seconds apart)\n> HINT: Consider increasing the configuration parameter\n> \"checkpoint_segments\".\n>\n> Best regards, Vitalii Tymchyshyn\n>\n> 22.03.12 09:27, Gnanakumar написав(ла):\n>> Hi,\n>>\n>> We're running a web-based application powered by PostgreSQL. Recently,\n>> we've developed a \"new\" separate Java-based standalone (daemon process)\n>> threaded program that performs both read and write operations heavily on\n>> 2\n>> \"huge\" tables. One table has got 5.4 million records and other has 1.3\n>> million records. Moreover, more than one read and/or write operations\n>> may\n>> be executing concurrently.\n>>\n>> The issue that we're facing currently in our Production server is,\n>> whenever\n>> this \"newly\" developed Java program is started/run, then immediately the\n>> entire web application becomes very slow in response. At this time, I\n>> could\n>> also see from the output of \" iostat -tx\" that \"%util\" is even crossing\n>> more\n>> than 80%. So, what I could infer here based on my knowledge is, this is\n>> creating heavy IO traffic because of write operation. Since it was\n>> entirely\n>> slowing down web application, we've temporarily stopped running this\n>> standalone application.\n\nI'd say you should investigate what the application actually does. The\nchances are it's poorly written, issuing a lot of queries and causing a\nlog of IO.\n\nAnd 80% utilization does not mean the operations need to be writes - it's\nabout IO operations, i.e. both reads and writes.\n\n>> Meantime, I also read about \"checkpoint spikes\" could be a reason for\n>> slow\n>> down in \"write workload\" database. I'm also reading that starting in\n>> PostgreSQL 8.3, we can get verbose logging of the checkpoint process by\n>> turning on \"log_checkpoints\".\n>>\n>> My question is, how do I determine whether \"checkpoint\" occurrences are\n>> the\n>> root cause of this slowdown in my case? We're running PostgreSQL\n>> v8.2.22 on\n>> CentOS5.2 having 35 GB RAM. \"log_checkpoints\" is not available in\n>> PostgreSQL v8.2.22.\n\nThere's a checkpoint_warning option. Set it to 3600 and you should get\nmessages in the log. Correlate those to the issues (do they happen at the\nsame time?). Sadly, 8.2 doesn't have any of the nice statistic views :-(\n\nCheck this: http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nIt talks about improvements in 8.3 but it mentions older version too.\n\nIf you can, install iotop and watch the processes that cause the I/O.\nIIRC, the title of the process should say 'CHECKPOINT' etc. But if the\nissues disappear once the application is stopped, it's unlikely the\ncheckpoints are the issue.\n\nWhat we need is more details about your setup, especially\n\n - checkpoint_segments\n - checkpoint_timeout\n - shared_buffers\n\nalso it'd be nice to have samples from the vmstat/iostat and messages from\nthe log.\n\nkind regards\nTomas\n\n\n",
"msg_date": "Thu, 22 Mar 2012 11:04:14 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in\n Production"
},
{
"msg_contents": "> There's a checkpoint_warning option. Set it to 3600 and you should get\n> messages in the log. Correlate those to the issues (do they happen at the\n> same time?).\nAfter setting \"checkpoint_warning\" to 3600, can you explain on how do I correlate with the messages?\n\n> If you can, install iotop and watch the processes that cause the I/O.\nI tried installing \"iotop\", but it failed to run because it requires Linux >= 2.6.20. Our CentOS5.2 is 2.6.18-8.\n\n> What we need is more details about your setup, especially\n> - checkpoint_segments\n> - checkpoint_timeout\n> - shared_buffers\n# - Memory -\nshared_buffers=1536MB\n\n# - Planner Cost Constants -\neffective_cache_size = 4GB\n\n# - Checkpoints -\ncheckpoint_segments=32\ncheckpoint_timeout=5min\ncheckpoint_warning=270s\n\n# - Background writer -\nbgwriter_delay = 200ms\nbgwriter_lru_percent = 1.0\nbgwriter_lru_maxpages = 5\nbgwriter_all_percent = 0.333\nbgwriter_all_maxpages = 5\n\n> also it'd be nice to have samples from the vmstat/iostat and messages from\n> the log.\nUnfortunately, I don't have \"exact\" logs when the problem actually happened\n\n\n",
"msg_date": "Thu, 22 Mar 2012 16:02:00 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "On 22 Březen 2012, 11:32, Gnanakumar wrote:\n>> There's a checkpoint_warning option. Set it to 3600 and you should get\n>> messages in the log. Correlate those to the issues (do they happen at\n>> the\n>> same time?).\n> After setting \"checkpoint_warning\" to 3600, can you explain on how do I\n> correlate with the messages?\n\nWell, you do know when the issues happened, so that you can check the logs\nand see if there are messages at that time. Try to install the application\nand watch the logs / IO performance.\n\n>> If you can, install iotop and watch the processes that cause the I/O.\n> I tried installing \"iotop\", but it failed to run because it requires Linux\n> >= 2.6.20. Our CentOS5.2 is 2.6.18-8.\n>\n>> What we need is more details about your setup, especially\n>> - checkpoint_segments\n>> - checkpoint_timeout\n>> - shared_buffers\n> # - Memory -\n> shared_buffers=1536MB\n>\n> # - Planner Cost Constants -\n> effective_cache_size = 4GB\n\nSo, what else is running on the system? Because if there's 35GB RAM and\nthe shared buffers are 1.5GB, then there's about 33GB for page cache.\nSomething like 16GB would be a conservative setting.\n\nI'm not saying this will fix the issues, but maybe it shows that something\nelse is running on the box and maybe that's the culprit, not PostgreSQL?\n\n>> also it'd be nice to have samples from the vmstat/iostat and messages\n>> from\n>> the log.\n> Unfortunately, I don't have \"exact\" logs when the problem actually\n> happened\n\nThen install the app again and collect as much info as possible. Otherwise\nit's all just wild guesses.\n\nTomas\n\n",
"msg_date": "Thu, 22 Mar 2012 11:43:08 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in\n Production"
},
{
"msg_contents": "> So, what else is running on the system? Because if there's 35GB RAM and\n> the shared buffers are 1.5GB, then there's about 33GB for page cache.\n> Something like 16GB would be a conservative setting.\n\nYes, you guessed it right. Both Web server and DB server are running in the same machine.\n\n> I'm not saying this will fix the issues, but maybe it shows that something\n> else is running on the box and maybe that's the culprit, not PostgreSQL?\n\nBased on my observations and analysis, I'm sure that database write operation \"is\" causing the slowdown, but not because of other applications running in the same server.\n\n> Then install the app again and collect as much info as possible. Otherwise\n> it's all just wild guesses.\n\nOK.\n\n\n",
"msg_date": "Thu, 22 Mar 2012 17:40:29 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "> There's a checkpoint_warning option. Set it to 3600 and you should get messages in the log.\n\nI've a basic question about setting \"checkpoint_warning\" configuration. 8.2 doc (http://www.postgresql.org/docs/8.2/interactive/runtime-config-wal.html) says:\n\n\"Write a message to the server log if checkpoints caused by the filling of checkpoint segment files happen closer together than this many seconds (which suggests that checkpoint_segments ought to be raised). The default is 30 seconds (30s).\"\n\nHow does increasing the default 30s to 3600s (which is 1 hour or 60 minutes) print messages to the log? Even after reading the description from above doc, am not able to get this point clearly. Can you help me in understanding this?\n\n\n",
"msg_date": "Thu, 22 Mar 2012 18:08:18 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "On 22 Březen 2012, 13:38, Gnanakumar wrote:\n>> There's a checkpoint_warning option. Set it to 3600 and you should get\n>> messages in the log.\n>\n> I've a basic question about setting \"checkpoint_warning\" configuration.\n> 8.2 doc\n> (http://www.postgresql.org/docs/8.2/interactive/runtime-config-wal.html)\n> says:\n>\n> \"Write a message to the server log if checkpoints caused by the filling of\n> checkpoint segment files happen closer together than this many seconds\n> (which suggests that checkpoint_segments ought to be raised). The default\n> is 30 seconds (30s).\"\n>\n> How does increasing the default 30s to 3600s (which is 1 hour or 60\n> minutes) print messages to the log? Even after reading the description\n> from above doc, am not able to get this point clearly. Can you help me in\n> understanding this?\n\nA checkpoint may be triggered for two main reasons:\n\n (1) all available WAL segments are filled (you do have 32 of them, i.e.\n 512MB of WAL data)\n\n (2) the checkpoint_timeout runs out (by default 5 mins IIRC)\n\nThe checkpoint_warning should emmit a 'warning' message whenever the\ncheckpoint happens less than the specified number of seconds since the\nlast one, so setting it high enough should log all checkpoints.\n\nThe fact that this does not happen (no warning messages) suggests this is\nnot caused by checkpoints. It may be caused by the database, but it seems\nunlikely it's checkpoints.\n\nTomas\n\n",
"msg_date": "Thu, 22 Mar 2012 13:52:39 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in\n Production"
},
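checkpoint_warning, like the other checkpoint settings mentioned here, lives in postgresql.conf and can be picked up with a reload rather than a restart, so raising it temporarily for diagnosis is cheap. A sketch, assuming a superuser session on the 8.2 box (pg_reload_conf() should be available there):

    SHOW checkpoint_warning;        -- currently 270s in this setup

    -- after editing postgresql.conf:  checkpoint_warning = 3600s
    SELECT pg_reload_conf();

    SHOW checkpoint_warning;        -- should now report 3600s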
{
"msg_contents": "On 22 Březen 2012, 13:10, Gnanakumar wrote:\n>> So, what else is running on the system? Because if there's 35GB RAM and\n>> the shared buffers are 1.5GB, then there's about 33GB for page cache.\n>> Something like 16GB would be a conservative setting.\n>\n> Yes, you guessed it right. Both Web server and DB server are running in\n> the same machine.\n>\n>> I'm not saying this will fix the issues, but maybe it shows that\n>> something\n>> else is running on the box and maybe that's the culprit, not PostgreSQL?\n>\n> Based on my observations and analysis, I'm sure that database write\n> operation \"is\" causing the slowdown, but not because of other applications\n> running in the same server.\n\nYou haven't provided any clear evidence for such statement so far. Let's\nwait for the vmstat/iostat logs etc.\n\nTomas\n\n",
"msg_date": "Thu, 22 Mar 2012 13:55:28 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in\n Production"
},
{
"msg_contents": "\"Gnanakumar\" <[email protected]> wrote:\n \n> We're running a web-based application powered by PostgreSQL. \n> Recently, we've developed a \"new\" separate Java-based standalone\n> (daemon process) threaded program that performs both read and\n> write operations heavily on 2 \"huge\" tables. One table has got\n> 5.4 million records and other has 1.3 million records. Moreover,\n> more than one read and/or write operations may be executing\n> concurrently.\n \nWe're running a web application using PostgreSQL and Java which has\n80 tables with over 1 million records each, the largest of which has\n212 million rows. It is updated by replication from 3000 directly\nattached users at 72 sites, using a multi-threaded Java application.\nWe have one connection pool for the read-only web application,\nwhich allows about 30 concurrent requests, and a connection pool for\nthe replication which allows 6.\n \nIf you want a peek at our performance, you can access the site here:\nhttp://wcca.wicourts.gov/ -- if you view a case and click on the\n\"Court Record Events\" button, you'll be viewing records in the table\nwith 212 million rows.\n \nMy point is that you're not asking PostgreSQL to do anything it\n*can't* handle well.\n \n> The issue that we're facing currently in our Production server is,\n> whenever this \"newly\" developed Java program is started/run, then\n> immediately the entire web application becomes very slow in\n> response. At this time, I could also see from the output of \"\n> iostat -tx\" that \"%util\" is even crossing more than 80%. So, what\n> I could infer here based on my knowledge is, this is creating\n> heavy IO traffic because of write operation. Since it was\n> entirely slowing down web application, we've temporarily stopped\n> running this standalone application.\n \nHow are you handling concurrency? (Are you using FOR SHARE on your\nSELECT statements? Are you explicitly acquiring table locks before\nmodifying data? Etc.) You might be introducing blocking somehow. \nWhen things are slow, try running some of the queries show on this\npage to get more clues:\n \nhttp://wiki.postgresql.org/wiki/Lock_Monitoring\n \nIn particular, I recommend that you *never* leave transactions open\nor hold locks while waiting for user response or input. They *will*\nanswer phone calls or go to lunch with things pending, potentially\nblocking other users for extended periods.\n \n> Meantime, I also read about \"checkpoint spikes\" could be a reason\n> for slow down in \"write workload\" database.\n \nWhen you hit that issue, there is not a continual slowdown --\nqueries which normally run very fast (a small fraction of a second)\nmay periodically all take tens of seconds. Is that the pattern\nyou're seeing?\n \n> We're running PostgreSQL v8.2.22 on CentOS5.2 having 35 GB RAM. \n> \"log_checkpoints\" is not available in PostgreSQL v8.2.22.\n> \n> We want to optimize our Production database to handle both reads\n> and writes, any suggestions/advice/guidelines on this are highly\n> appreciated.\n \nSupport for 8.2 was dropped last year, five years after it was\nreleased. PostgreSQL has had a new major release every year since\n8.2 was released, many of which have provided dramatic performance\nimprovements. If you want good performance my first suggestion\nwould be to upgrade your version of PostgreSQL to at least 9.0, and\npreferably 9.1. Because of stricter typing in 8.3 and later,\nupgrading from 8.2 takes a bit more work than most PostgreSQL major\nreleases. 
Be sure to test well.\n \n> # - Background writer -\n> bgwriter_delay = 200ms\n> bgwriter_lru_percent = 1.0\n> bgwriter_lru_maxpages = 5\n> bgwriter_all_percent = 0.333\n> bgwriter_all_maxpages = 5\n \nThese settings result in a very slow dribble of dirty buffers out to\nthe OS cache. *If* you're hitting the \"checkpoint spikes\" issue\n(see above), you might want to boost the aggressiveness of the\nbackground writer. I couldn't recommend settings without knowing a\nlot more about your storage system and its capabilities. In\nsupported releases of PostgreSQL, the checkpoint system and\nbackground writer are much improved, so again -- upgrading would be\nthe most effective way to solve the problem.\n \nBesides the outdated PostgreSQL release and possible blocking, I\nwould be concerned if you are using any sort of ORM for the update\napplication. You want to watch that very closely because the\ndefault behavior of many of them does not scale well. There's\nusually a way to get better performance through configuration and/or\nbypassing automatic query generation for complex data requests.\n \n-Kevin\n",
"msg_date": "Thu, 22 Mar 2012 10:13:45 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in\n Production"
},
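A minimal form of the kind of check Kevin is pointing at on the wiki page, written against the pre-9.2 catalog column names (procpid, current_query) that apply to the 8.2 server in this thread — only a sketch, the wiki versions also show which backend holds the blocking lock:

    -- Which backends are waiting on a lock, and what they are running:
    SELECT l.pid,
           a.usename,
           now() - a.query_start AS waiting_for,
           a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE NOT l.granted;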
{
"msg_contents": "On Thu, Mar 22, 2012 at 10:13 AM, Kevin Grittner\n<[email protected]> wrote:\n> In particular, I recommend that you *never* leave transactions open\n> or hold locks while waiting for user response or input. They *will*\n> answer phone calls or go to lunch with things pending, potentially\n> blocking other users for extended periods.\n\nThis -- transactions aren't meant to intentionally block users but to\nallow you to ensure data remains in a valid state in the face of\nconcurrent activity. In-transaction locks should be as short as\npossible while still giving those guarantees and should hopefully\nnever be user facing. If you want user controlled locks, they should\nbe cooperative and non transactional -- advisory locks.\n\nhttp://www.postgresql.org/docs/current/static/explicit-locking.html#ADVISORY-LOCKS\n\nmerlin\n",
"msg_date": "Thu, 22 Mar 2012 11:18:43 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "On 22/03/12 20:27, Gnanakumar wrote:\n>\n> The issue that we're facing currently in our Production server is, whenever\n> this \"newly\" developed Java program is started/run, then immediately the\n> entire web application becomes very slow in response. At this time, I could\n> also see from the output of \" iostat -tx\" that \"%util\" is even crossing more\n> than 80%. So, what I could infer here based on my knowledge is, this is\n> creating heavy IO traffic because of write operation. Since it was entirely\n> slowing down web application, we've temporarily stopped running this\n> standalone application.\n\nI'd recommend taking a hard took at what the Java app is doing. You \nmight be able to get useful data by adding:\n\nlog_min_duration_statement = 10000 # queries taking longer than 10 sec\n\ninto your postgresql.conf. I'd *guess* that locking is not the issue - \nas that is unlikely to cause high IO load (altho checking for lots of \nlocks with granted= f in pg_locks might be worthwhile just to rule it out).\n\nYou could also try running the Java app in one of your development \nenvironments to see if you can provoke high load behaviour in a more \neasily studied environment.\n\nregards\n\nMark\n",
"msg_date": "Fri, 23 Mar 2012 13:08:53 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "First off, thank you *so much* for that detailed explanation comparing with\na real-time application performance benchmark, which was really enlightening\nfor me.\n\n> How are you handling concurrency? (Are you using FOR SHARE on your\n> SELECT statements? Are you explicitly acquiring table locks before\n> modifying data? Etc.) You might be introducing blocking somehow. \n\nNo, actually am not explicitly locking any tables -- all are *simple*\nselect, update, insert statements only.\n\n> In particular, I recommend that you *never* leave transactions open\n> or hold locks while waiting for user response or input.\n\nAgain, we're not leaving any transaction opens until for any user responses,\netc.\n\n> When you hit that issue, there is not a continual slowdown --\n> queries which normally run very fast (a small fraction of a second)\n> may periodically all take tens of seconds. Is that the pattern\n> you're seeing?\n\nYes, you're correct. Queries those normally run fast are becoming slow at\nthe time of this slowdown.\n\n> Besides the outdated PostgreSQL release and possible blocking, I\n> would be concerned if you are using any sort of ORM for the update\n> application. You want to watch that very closely because the\n> default behavior of many of them does not scale well. There's\n> usually a way to get better performance through configuration and/or\n> bypassing automatic query generation for complex data requests.\n\nAm not able to understand above statements (...any sort of ORM for the\nupdate application ...) clearly. Can you help me in understanding this?\n\n\n\n",
"msg_date": "Fri, 23 Mar 2012 15:40:05 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "On 23 Březen 2012, 11:10, Gnanakumar wrote:\n> First off, thank you *so much* for that detailed explanation comparing\n> with\n> a real-time application performance benchmark, which was really\n> enlightening\n> for me.\n>\n>> How are you handling concurrency? (Are you using FOR SHARE on your\n>> SELECT statements? Are you explicitly acquiring table locks before\n>> modifying data? Etc.) You might be introducing blocking somehow.\n>\n> No, actually am not explicitly locking any tables -- all are *simple*\n> select, update, insert statements only.\n\nAre those wrapped in a transaction or not? Each transaction forces a fsync\nwhen committing, and if each of those INSERT/UPDATE statements stands on\nit's own it may cause of lot of I/O.\n\n>> Besides the outdated PostgreSQL release and possible blocking, I\n>> would be concerned if you are using any sort of ORM for the update\n>> application. You want to watch that very closely because the\n>> default behavior of many of them does not scale well. There's\n>> usually a way to get better performance through configuration and/or\n>> bypassing automatic query generation for complex data requests.\n>\n> Am not able to understand above statements (...any sort of ORM for the\n> update application ...) clearly. Can you help me in understanding this?\n\nThere are tools that claim to remove the object vs. relational discrepancy\nwhen accessing the database. They often generate queries on the fly, and\nsome of the queries are pretty awful (depends on how well the ORM model is\ndefined). There are various reasons why this may suck - loading too much\ndata, using lazy fetch everywhere etc.\n\nAre you using something like Hibernate, JPA, ... to handle persistence?\n\nTomas\n\n",
"msg_date": "Fri, 23 Mar 2012 11:24:12 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in\n Production"
},
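The point about one fsync per commit is easiest to see as a contrast between autocommitted statements and an explicit transaction. The table and columns below are made-up placeholders, purely to illustrate the shape:

    -- Autocommit: each statement is its own transaction and its own WAL flush.
    INSERT INTO events (user_id, payload) VALUES (1, 'a');
    INSERT INTO events (user_id, payload) VALUES (2, 'b');

    -- Batched: the whole group is flushed once, at COMMIT.
    BEGIN;
    INSERT INTO events (user_id, payload) VALUES (3, 'c');
    INSERT INTO events (user_id, payload) VALUES (4, 'd');
    UPDATE events SET payload = 'e' WHERE user_id = 3;
    COMMIT;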
{
"msg_contents": "> Are those wrapped in a transaction or not? Each transaction forces a fsync\n> when committing, and if each of those INSERT/UPDATE statements stands on\n> it's own it may cause of lot of I/O.\n\nYes, it's wrapped inside a transaction. May be this could be a reason for slowdown, as you've highlighted here. Atleast, we've got some guidance here to troubleshoot in this aspect also.\n\n> There are tools that claim to remove the object vs. relational discrepancy\n> when accessing the database. They often generate queries on the fly, and\n> some of the queries are pretty awful (depends on how well the ORM model is\n> defined). There are various reasons why this may suck - loading too much\n> data, using lazy fetch everywhere etc.\n\nThanks for the clarification.\n\n> Are you using something like Hibernate, JPA, ... to handle persistence?\n\nNo, we're not using any persistence frameworks/libraries as such.\n\n\n",
"msg_date": "Fri, 23 Mar 2012 17:55:37 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Write workload is causing severe slowdown in Production"
},
{
"msg_contents": "\"Gnanakumar\" <[email protected]> wrote:\n \n>> When you hit that issue, there is not a continual slowdown --\n>> queries which normally run very fast (a small fraction of a\n>> second) may periodically all take tens of seconds. Is that the\n>> pattern you're seeing?\n> \n> Yes, you're correct. Queries those normally run fast are becoming\n> slow at the time of this slowdown.\n \nBut the question is -- while the update application is running is\nperformance *usually* good with *brief periods* of high latency, or\ndoes it just get bad and stay bad? The *pattern* is the clue as to\nwhether it is likely to be write saturation.\n \nHere's something I would recommend as a diagnostic step: run `vmstat\n1` (or equivalent, based on your OS) to capture about a minute's\nworth of activity while things are running well, and also while\nthings are slow. Pick a few lines that are \"typical\" of each and\npaste them into a post here. (If there is a lot of variation over\nthe sample, it may be best to attach them to your post in their\nentirety. Don't just paste in more than a few lines of vmstat\noutput, as the wrapping would make it hard to read.)\n \nAlso, you should try running queries from this page when things are\nslow:\n \nhttp://wiki.postgresql.org/wiki/Lock_Monitoring\n \nIf there is any blocking, that might be interesting.\n \n-Kevin\n",
"msg_date": "Fri, 23 Mar 2012 09:09:10 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Write workload is causing severe slowdown in\n Production"
}
] |
[
{
"msg_contents": "Hi,\n\nwe are currently seeing some strange performance issues related to our Postgresql Database. Our Setup currently contains:\n- 1 Master with 32GB Ram and 6 x 100GB SSDs in RAID10 and 2 Quad Core Intel Processors (this one has a failover Box, the data volume is shared via DRBD)\n- 2 Slaves with 16GB Ram and 6 x 100GB SAS Disks in RAID 10 and 1 Quad Core Processor connected via streaming replication\n\nWe currently use Postgresql 9.0.6 from Debian Squeeze Backports with a 2.6.39-bpo.2 Backports Squeeze Kernel. \nAll Servers use Pgbouncer as a connection Pooler, which is installed on each box. \nIn times of higher usage, we see some strange issues on our Master box, the connections start stacking up in an \"<idle in transaction>\" state, and the Queries get slower and slower when using the Master Server. We traced the Application which is connected via a private LAN, and could not find any hangups that could cause these states in the Database. During this time, the load of the Master goes up a bit, but the CPU Usage and IOwait is still quite low at around a Load of 5-8. The usual Load is around 1 - 1.5. \n\n19:14:28.654 4838 LOG Stats: 3156 req/s, in 1157187 b/s, out 1457656 b/s,query 6119 us\n19:15:28.655 4838 LOG Stats: 3247 req/s, in 1159833 b/s, out 1421552 b/s,query 5025 us\n19:16:28.660 4838 LOG Stats: 3045 req/s, in 1096349 b/s, out 1377927 b/s,query 3713 us\n19:17:28.680 4838 LOG Stats: 2779 req/s, in 1030783 b/s, out 1343547 b/s,query 11977 us\n19:18:28.688 4838 LOG Stats: 1723 req/s, in 664282 b/s, out 789989 b/s,query 67144 us\n19:19:28.665 4838 LOG Stats: 1371 req/s, in 472587 b/s, out 622347 b/s,query 48417 us\n19:20:28.668 4838 LOG Stats: 2161 req/s, in 748727 b/s, out 995794 b/s,query 2794 us\n\n\nAs you can see in the pgbouncer logs, the query exec time shoots up. \nWe took a close look at locking issues during that time, but we don't see any excessive amount of locking during that time.\nThe issue suddenly popped up, we had times of higher usage before and the Postgresql DB handled it without any problems. We also did not recently change anything in this setup. We also did take a look at the Slow Queries Log during that time. This did now show anything unusual during the time of the slowdown. \n\nDoes anyone have any idea what could cause this issue or how we can further debug it?\n\nThanks for your Input!\n\nSebastian\n\n",
"msg_date": "Thu, 22 Mar 2012 23:52:49 +0100",
"msg_from": "Sebastian Melchior <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "* Sebastian Melchior ([email protected]) wrote:\n> Does anyone have any idea what could cause this issue or how we can further debug it?\n\nAre you logging checkpoints? If not, you should, if so, then see if\nthey correllate to the time of the slowdown..?\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 22 Mar 2012 20:47:49 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "Hi,\n\nyeah we log those, those times do not match the times of the slowdown at all. Seems to be unrelated.\n\nSebastian\n\n\nOn 23.03.2012, at 01:47, Stephen Frost wrote:\n\n> * Sebastian Melchior ([email protected]) wrote:\n>> Does anyone have any idea what could cause this issue or how we can further debug it?\n> \n> Are you logging checkpoints? If not, you should, if so, then see if\n> they correllate to the time of the slowdown..?\n> \n> \tThanks,\n> \n> \t\tStephen\n\n",
"msg_date": "Fri, 23 Mar 2012 05:37:50 +0100",
"msg_from": "Sebastian Melchior <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "I'd suggest the handy troubleshooting tools sar, iostat, vmstat and iotop\n\nOn Thu, Mar 22, 2012 at 10:37 PM, Sebastian Melchior <[email protected]> wrote:\n> Hi,\n>\n> yeah we log those, those times do not match the times of the slowdown at all. Seems to be unrelated.\n>\n> Sebastian\n>\n>\n> On 23.03.2012, at 01:47, Stephen Frost wrote:\n>\n>> * Sebastian Melchior ([email protected]) wrote:\n>>> Does anyone have any idea what could cause this issue or how we can further debug it?\n>>\n>> Are you logging checkpoints? If not, you should, if so, then see if\n>> they correllate to the time of the slowdown..?\n>>\n>> Thanks,\n>>\n>> Stephen\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Thu, 22 Mar 2012 22:48:12 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "Hi,\n\nwe already used iostat and iotop during times of the slowdown, there is no sudden drop in I/O workload in the times of the slowdown. Also the iowait does not spike and stays as before.\nSo i do not think that this is I/O related. As the disks are SSDs there also still is some \"head room\" left. \n\nSebastian\n\nOn 23.03.2012, at 05:48, Scott Marlowe wrote:\n\n> I'd suggest the handy troubleshooting tools sar, iostat, vmstat and iotop\n> \n> On Thu, Mar 22, 2012 at 10:37 PM, Sebastian Melchior <[email protected]> wrote:\n>> Hi,\n>> \n>> yeah we log those, those times do not match the times of the slowdown at all. Seems to be unrelated.\n>> \n>> Sebastian\n>> \n>> \n>> On 23.03.2012, at 01:47, Stephen Frost wrote:\n>> \n>>> * Sebastian Melchior ([email protected]) wrote:\n>>>> Does anyone have any idea what could cause this issue or how we can further debug it?\n>>> \n>>> Are you logging checkpoints? If not, you should, if so, then see if\n>>> they correllate to the time of the slowdown..?\n>>> \n>>> Thanks,\n>>> \n>>> Stephen\n>> \n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> -- \n> To understand recursion, one must first understand recursion.\n\n",
"msg_date": "Fri, 23 Mar 2012 05:53:52 +0100",
"msg_from": "Sebastian Melchior <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "What does vmstat say about things like context switches / interrupts per second?\n\nOn Thu, Mar 22, 2012 at 10:53 PM, Sebastian Melchior <[email protected]> wrote:\n> Hi,\n>\n> we already used iostat and iotop during times of the slowdown, there is no sudden drop in I/O workload in the times of the slowdown. Also the iowait does not spike and stays as before.\n> So i do not think that this is I/O related. As the disks are SSDs there also still is some \"head room\" left.\n>\n> Sebastian\n>\n> On 23.03.2012, at 05:48, Scott Marlowe wrote:\n>\n>> I'd suggest the handy troubleshooting tools sar, iostat, vmstat and iotop\n>>\n>> On Thu, Mar 22, 2012 at 10:37 PM, Sebastian Melchior <[email protected]> wrote:\n>>> Hi,\n>>>\n>>> yeah we log those, those times do not match the times of the slowdown at all. Seems to be unrelated.\n>>>\n>>> Sebastian\n>>>\n>>>\n>>> On 23.03.2012, at 01:47, Stephen Frost wrote:\n>>>\n>>>> * Sebastian Melchior ([email protected]) wrote:\n>>>>> Does anyone have any idea what could cause this issue or how we can further debug it?\n>>>>\n>>>> Are you logging checkpoints? If not, you should, if so, then see if\n>>>> they correllate to the time of the slowdown..?\n>>>>\n>>>> Thanks,\n>>>>\n>>>> Stephen\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>>\n>> --\n>> To understand recursion, one must first understand recursion.\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Thu, 22 Mar 2012 23:19:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "On 2012-03-23 05:53, Sebastian Melchior wrote:\n> Hi,\n>\n> we already used iostat and iotop during times of the slowdown, there is no sudden drop in I/O workload in the times of the slowdown. Also the iowait does not spike and stays as before.\n> So i do not think that this is I/O related. As the disks are SSDs there also still is some \"head room\" left.\n\nI've seen a ssd completely lock up for a dozen seconds or so after \ngiving it a smartctl command to trim a section of the disk. I'm not sure \nif that was the vertex 2 pro disk I was testing or the intel 710, but \nenough reason for us to not mount filesystems with -o discard.\n\nregards,\nYeb\n\n\n\n",
"msg_date": "Fri, 23 Mar 2012 08:10:37 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "Hi,\n\nunfortunately we cannot directly control the TRIM (i am not sure it even occurs) because the SSDs are behind an LSI MegaSAS 9260 Controller which does not allow access via smart. Even if some kind of TRIM command is the problem, shouldn't the iowait go up in this case?\n\nSebastian\n\nOn 23.03.2012, at 08:10, Yeb Havinga wrote:\n\n> On 2012-03-23 05:53, Sebastian Melchior wrote:\n>> Hi,\n>> \n>> we already used iostat and iotop during times of the slowdown, there is no sudden drop in I/O workload in the times of the slowdown. Also the iowait does not spike and stays as before.\n>> So i do not think that this is I/O related. As the disks are SSDs there also still is some \"head room\" left.\n> \n> I've seen a ssd completely lock up for a dozen seconds or so after giving it a smartctl command to trim a section of the disk. I'm not sure if that was the vertex 2 pro disk I was testing or the intel 710, but enough reason for us to not mount filesystems with -o discard.\n> \n> regards,\n> Yeb\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 23 Mar 2012 08:52:10 +0100",
"msg_from": "Sebastian Melchior <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
},
{
"msg_contents": "On Fri, Mar 23, 2012 at 3:52 AM, Sebastian Melchior <[email protected]> wrote:\n> unfortunately we cannot directly control the TRIM (i am not sure it even occurs) because the SSDs are behind an LSI MegaSAS 9260 Controller which does not allow access via smart. Even if some kind of TRIM command is the problem, shouldn't the iowait go up in this case?\n\nBased on my recent benchmarking experiences, maybe not. Suppose\nbackend A takes a lock and then blocks on an I/O. Then, all the other\nbackends block waiting on the lock. So maybe one backend is stuck in\nI/O-wait, but on a multi-processor system the percentages are averaged\nacross all CPUs, so it doesn't really look like there's much I/O-wait.\n If you have 'perf' available, I've found the following quite helpful:\n\nperf record -e cs -g -a sleep 30\nperf report -g\n\nThen you can look at the report and find out what's causing PostgreSQL\nto context-switch out - i.e. block - and therefore find out what lock\nand call path is contended. LWLocks don't show up in pg_locks, so you\ncan't troubleshoot this sort of contention that way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 8 May 2012 12:17:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden Query slowdown on our Postgresql Server"
}
] |
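Since the visible symptom in this thread is sessions piling up in "<idle in transaction>", one cheap check alongside vmstat/iostat/perf is to ask the database how long those transactions have been sitting open. A minimal sketch, again using the column names as they existed in 9.0 (procpid, current_query; newer versions expose pid, state and query instead):

-- Show idle-in-transaction backends, oldest transaction first.
SELECT procpid,
       usename,
       client_addr,
       now() - xact_start  AS transaction_age,
       now() - query_start AS idle_since_last_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;

Long-lived entries here keep locks and pin dead row versions, which can drag down otherwise unrelated queries even when iostat looks calm.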
[
{
"msg_contents": "Baron Swartz's recent post [1] on working set size got me to thinking.\nI'm well aware of how I can tell when my database's working set\nexceeds available memory (cache hit rate plummets, performance\ncollapses), but it's less clear how I could predict when this might\noccur.\n\nBaron's proposed method for defining working set size is interesting. Quoth:\n\n> Quantifying the working set size is probably best done as a percentile over time.\n> We can define the 1-hour 99th percentile working set size as the portion of the data\n> to which 99% of the accesses are made over an hour, for example.\n\nI'm not sure whether it would be possible to calculate that today in\nPostgres. Does anyone have any advice?\n\nBest regards,\nPeter\n\n[1]: http://www.fusionio.com/blog/will-fusionio-make-my-database-faster-percona-guest-blog/\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Mon, 26 Mar 2012 00:11:02 -0700",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Determining working set size"
},
{
"msg_contents": "Peter,\n\nCheck out pg_fincore. Still kind of risky on a production server, but does an excellent job of measuring page access on Linux.\n\n----- Original Message -----\n> Baron Swartz's recent post [1] on working set size got me to\n> thinking.\n> I'm well aware of how I can tell when my database's working set\n> exceeds available memory (cache hit rate plummets, performance\n> collapses), but it's less clear how I could predict when this might\n> occur.\n> \n> Baron's proposed method for defining working set size is interesting.\n> Quoth:\n> \n> > Quantifying the working set size is probably best done as a\n> > percentile over time.\n> > We can define the 1-hour 99th percentile working set size as the\n> > portion of the data\n> > to which 99% of the accesses are made over an hour, for example.\n> \n> I'm not sure whether it would be possible to calculate that today in\n> Postgres. Does anyone have any advice?\n> \n> Best regards,\n> Peter\n> \n> [1]:\n> http://www.fusionio.com/blog/will-fusionio-make-my-database-faster-percona-guest-blog/\n> \n> --\n> Peter van Hardenberg\n> San Francisco, California\n> \"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n> \n> --\n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n",
"msg_date": "Tue, 27 Mar 2012 14:58:22 -0500 (CDT)",
"msg_from": "Joshua Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Determining working set size"
}
] |
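Short of pg_fincore, two rough proxies are available from inside the database itself; both only see PostgreSQL's shared buffers and the statistics counters, not the kernel page cache, so they understate the true working set. A sketch (the second query assumes the pg_buffercache contrib module is installed):

-- Aggregate cache hit ratio for user tables since the stats were last reset.
SELECT sum(heap_blks_hit)::numeric
       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS heap_hit_ratio
FROM pg_statio_user_tables;

-- Which relations currently occupy the most space in shared_buffers.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

Sampling the second query over an hour and noting which relations stay resident gives a crude approximation of the percentile-over-time definition quoted above, but it says nothing about accesses served from the OS cache.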
[
{
"msg_contents": "Hi all,\n\ntoday I've noticed this link on HN: http://plasma.cs.umass.edu/emery/hoard\n\nSeems like an interesting option for systems with a lot of CPUs that are\ndoing a lot of alloc operations. Right now I don't have a suitable system\nto test it - anyone tried to benchmark it?\n\n\nTomas\n\n",
"msg_date": "Mon, 26 Mar 2012 11:50:00 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "anyone tried to use hoard allocator?"
},
{
"msg_contents": "2012/3/26 Tomas Vondra <[email protected]>:\n> Hi all,\n>\n> today I've noticed this link on HN: http://plasma.cs.umass.edu/emery/hoard\n>\n> Seems like an interesting option for systems with a lot of CPUs that are\n> doing a lot of alloc operations. Right now I don't have a suitable system\n> to test it - anyone tried to benchmark it?\n\nIt has sense for pg? It is not a multithread application.\n\nPavel\n\n>\n>\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 26 Mar 2012 13:09:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: anyone tried to use hoard allocator?"
},
{
"msg_contents": "\nOn Mar 26, 2012, at 2:50 AM, Tomas Vondra wrote:\n\n> Hi all,\n> \n> today I've noticed this link on HN: http://plasma.cs.umass.edu/emery/hoard\n> \n> Seems like an interesting option for systems with a lot of CPUs that are\n> doing a lot of alloc operations. Right now I don't have a suitable system\n> to test it - anyone tried to benchmark it?\n\nIt's just another allocator - not a bad one, but it's been around for years. It's\nmostly aimed at reducing contention in multi-threaded applications, so\nit's not terribly applicable to strictly single-threaded postgresql.\n\nIt's licensing is pretty much incompatible with postgresql too.\n\nCheers,\n Steve\n\n",
"msg_date": "Mon, 26 Mar 2012 09:00:18 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: anyone tried to use hoard allocator?"
}
] |
[
{
"msg_contents": "Have run across some memory behavior on Linux I've never seen before.\n\nServer running RHEL6 with 96GB of RAM. \nKernel 2.6.32\nPostgreSQL 9.0\n208GB database with fairly random accesses over 50% of the database.\n\nNow, here's the weird part: even after a week of uptime, only 21 to 25GB of cache is ever used, and there's constantly 20GB to 35GB free memory. This would mean a small working set, except that we see constant reads from disk (1 to 15MB/s) and around 1/3 of queries are slowed by iowaits.\n\nIn an effort to test this, we deliberately ran a pg_dump. This did grow the cache to all available memory, but Linux rapidly cleared the cache (flushing to disk) down to 25GB within an hour.\n\nsys.kernel.vm parameters are all defaults. None of the parameters seem to specifically relate to the size of the page cache.\n\nHas anyone ever seen this before? What did you do about it?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\nSan Francisco\n",
"msg_date": "Tue, 27 Mar 2012 15:06:17 -0500 (CDT)",
"msg_from": "Joshua Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux machine aggressively clearing cache"
},
{
"msg_contents": "On Tue, Mar 27, 2012 at 5:06 PM, Joshua Berkus <[email protected]> wrote:\n> In an effort to test this, we deliberately ran a pg_dump. This did grow the cache to all available memory, but Linux rapidly cleared the cache (flushing to disk) down to 25GB within an hour.\n\nThis would happen if some queries (or some program) briefly uses that\nmuch memory (pushing the cache off RAM).\n",
"msg_date": "Tue, 27 Mar 2012 17:10:27 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux machine aggressively clearing cache"
},
{
"msg_contents": "This may just be a typo, but if you really did create write (dirty) block\ndevice cache by writing the pg_dump file somewhere, then that is what it's\nsupposed to do ;) Linux is more aggressive about write cache and will allow\nmore of it to build up than e.g. HP-UX which will start to throttle\nprocess-to-cache writes to avoid getting too far behind.\n\nRead cache of course does not need to be flushed and can simply be dumped\nwhen the memory is needed, and so Linux will keep more or less unlimited\namounts of read cache until it needs the memory for something else ....\nhere is an output from \"free\" on my laptop, showing ~2.5GB of read cache\nthat can be freed almost instantly if needed for process memory, write\ncache, kernel buffers, etc. The -/+ line shows a net of what is being used\nby processes.\n\ndave:~$ free\n total used free shared buffers cached\nMem: 8089056 7476424 612632 0 603508 2556584\n-/+ buffers/cache: 4316332 3772724\nSwap: 24563344 1176284 23387060\n\nredirecting *pg_dump >/dev/null* will read the DB without writing\nanything, but it's pretty resource intensive .... if you just want to get\nthe database tables into the OS read cache you can do it much more cheaply\nwith *sudo tar cvf - /var/lib/postgresql/8.4/main/base | cat >/dev/null *or\nsimilar (GNU tar somehow detects if you connect its stdout directly to\n/dev/null and then it cheats and doesn't do the reads)\n\nIn the second \"free\" output below, the kernel has grabbed what it can for\ncache, leaving only ~64MB of actual free memory for instant use.\n\ndave:~$ pg_dump -F c hyper9db >/dev/null\ndave:~$ free\n total used free shared buffers cached\nMem: 8089056 8024252 64804 0 287432 3797956\n-/+ buffers/cache: 3938864 4150192\nSwap: 24563344 1166556 23396788\ndave:~$\n\nCheers\nDave\n\nOn Tue, Mar 27, 2012 at 3:06 PM, Joshua Berkus <[email protected]> wrote:\n\n> ... but Linux rapidly cleared the cache *(flushing to disk)* down to 25GB\n> within an hour.\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n> San Francisco\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThis may just be a typo, but if you really did create write (dirty) block device cache by writing the pg_dump file somewhere, then that is what it's supposed to do ;) Linux is more aggressive about write cache and will allow more of it to build up than e.g. HP-UX which will start to throttle process-to-cache writes to avoid getting too far behind.\nRead cache of course does not need to be flushed and can simply be dumped when the memory is needed, and so Linux will keep more or less unlimited amounts of read cache until it needs the memory for something else .... here is an output from \"free\" on my laptop, showing ~2.5GB of read cache that can be freed almost instantly if needed for process memory, write cache, kernel buffers, etc. The -/+ line shows a net of what is being used by processes.\ndave:~$ free total used free shared buffers cached\nMem: 8089056 7476424 612632 0 603508 2556584-/+ buffers/cache: 4316332 3772724\nSwap: 24563344 1176284 23387060redirecting pg_dump >/dev/null will read the DB without writing anything, but it's pretty resource intensive .... 
if you just want to get the database tables into the OS read cache you can do it much more cheaply with sudo tar cvf - /var/lib/postgresql/8.4/main/base | cat >/dev/null or similar (GNU tar somehow detects if you connect its stdout directly to /dev/null and then it cheats and doesn't do the reads)\nIn the second \"free\" output below, the kernel has grabbed what it can for cache, leaving only ~64MB of actual free memory for instant use.dave:~$ pg_dump -F c hyper9db >/dev/null\ndave:~$ free total used free shared buffers cached\nMem: 8089056 8024252 64804 0 287432 3797956-/+ buffers/cache: 3938864 4150192\nSwap: 24563344 1166556 23396788dave:~$ Cheers\nDaveOn Tue, Mar 27, 2012 at 3:06 PM, Joshua Berkus <[email protected]> wrote:\n... but Linux rapidly cleared the cache (flushing to disk) down to 25GB within an hour.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\nSan Francisco\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 27 Mar 2012 15:40:32 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux machine aggressively clearing cache"
},
{
"msg_contents": "\n> This may just be a typo, but if you really did create write (dirty)\n> block device cache by writing the pg_dump file somewhere, then that\n> is what it's supposed to do ;) \n\nThe pgdump was across the network. So the only caching on the machine was read caching.\n\n> Read cache of course does not need to be flushed and can simply be\n> dumped when the memory is needed, and so Linux will keep more or\n> less unlimited amounts of read cache until it needs the memory for\n> something else ....\n\nRight, that's the normal behavior. Except not on this machine.\n\n--Josh\n",
"msg_date": "Wed, 28 Mar 2012 13:37:38 -0500 (CDT)",
"msg_from": "Joshua Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux machine aggressively clearing cache"
},
{
"msg_contents": "\n>> Read cache of course does not need to be flushed and can simply be\n>> dumped when the memory is needed, and so Linux will keep more or\n>> less unlimited amounts of read cache until it needs the memory for\n>> something else ....\n> \n> Right, that's the normal behavior. Except not on this machine.\n\nSo this turned out to be a Linux kernel issue. Will document it on\nwww.databasesoup.com.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 30 Mar 2012 17:51:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux machine aggressively clearing cache"
},
{
"msg_contents": "On 03/30/2012 05:51 PM, Josh Berkus wrote:\n>\n> So this turned out to be a Linux kernel issue. Will document it on\n> www.databasesoup.com.\nAnytime soon? About to build two PostgreSQL servers and wondering if you \nhave uncovered a kernel version or similar issue to avoid.\n\nCheers,\nSteve\n\n",
"msg_date": "Thu, 12 Apr 2012 08:47:58 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux machine aggressively clearing cache"
},
{
"msg_contents": "On 4/12/12 8:47 AM, Steve Crawford wrote:\n> On 03/30/2012 05:51 PM, Josh Berkus wrote:\n>>\n>> So this turned out to be a Linux kernel issue. Will document it on\n>> www.databasesoup.com.\n> Anytime soon? About to build two PostgreSQL servers and wondering if you\n> have uncovered a kernel version or similar issue to avoid.\n\nYeah, I'll blog it.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 18 Apr 2012 17:09:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux machine aggressively clearing cache"
},
{
"msg_contents": "\nOn Wed, Apr 18, 2012 at 05:09:29PM -0700, Josh Berkus wrote:\n> On 4/12/12 8:47 AM, Steve Crawford wrote:\n> > On 03/30/2012 05:51 PM, Josh Berkus wrote:\n> >>\n> >> So this turned out to be a Linux kernel issue. Will document it on\n> >> www.databasesoup.com.\n> > Anytime soon? About to build two PostgreSQL servers and wondering if you\n> > have uncovered a kernel version or similar issue to avoid.\n> \n> Yeah, I'll blog it.\n\nSince I'm doing some backlog catchup, I'll do some community/archive service\nand provide the link:\n\nhttp://www.databasesoup.com/2012/04/red-hat-kernel-cache-clearing-issue.html\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nConnexions http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n",
"msg_date": "Thu, 13 Sep 2012 14:00:43 -0500",
"msg_from": "Ross Reedstrom <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux machine aggressively clearing cache"
}
] |
[
{
"msg_contents": "Hi group,\n\nI have the following table with millions of rows\n\nCREATE TABLE table1\n(\n col1 text,\ncol1 text,\n doc text\n)\n\nselect col1 from table1 group by col1 limit 2;\nselect distinct on (col1) col1 from table1 limit 2;\n\n",
"msg_date": "Tue, 27 Mar 2012 13:37:51 -0700 (PDT)",
"msg_from": "Francois Deliege <[email protected]>",
"msg_from_op": true,
"msg_subject": "Distinct + Limit"
},
{
"msg_contents": "Hi list,\n\nI have the following table with millions of rows:\n \nCREATE TABLE table1\n(\n col1 text,\n col2 text,\n col3 text,\n col4 text,\n col5 text,\n col6 text\n)\n\nselect col1 from table1 group by col1 limit 1;\nselect distinct on (col1) col1 from table1 limit 1;\n\nselect col1 from table1 group by col1 limit 2;\nselect distinct on (col1) col1 from table1 limit 2;\n\nPerforming any of these following queries results in a full sequential scan, followed by a hash aggregate, and then the limit. An optimization could be to stop the sequential scan as soon as the limit of results has been reached. Am I missing something?\n\nLimit (cost=2229280.06..2229280.08 rows=2 width=8)\n -> HashAggregate (cost=2229280.06..2229280.21 rows=15 width=8)\n -> Seq Scan on table1 (cost=0.00..2190241.25 rows=15615525 width=8)\n\nSimilarly, the following query results in a sequential scan:\n\nselect * from table1 where col1 <> col1;\n\nThis query is generated by the Sequel library abstraction layer in Ruby when filtering record based on a empty array of values. We fixed this by handling that case on the client side, but originally thought the server would have rewritten it and sent a empty result set.\n\nI would greatly appreciate any help on speeding up these without having to rewrite the queries on the client side.\n\nThanks,\n\nFrancois\n",
"msg_date": "Tue, 27 Mar 2012 13:54:36 -0700 (PDT)",
"msg_from": "Francois Deliege <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Distinct + Limit"
},
{
"msg_contents": "On Tue, Mar 27, 2012 at 11:54 PM, Francois Deliege <[email protected]> wrote:\n> select col1 from table1 group by col1 limit 1;\n> select distinct on (col1) col1 from table1 limit 1;\n>\n> select col1 from table1 group by col1 limit 2;\n> select distinct on (col1) col1 from table1 limit 2;\n>\n> Performing any of these following queries results in a full sequential scan, followed by a hash aggregate, and then the limit. An optimization could be to stop the sequential scan as soon as the limit of results has been reached. Am I missing something?\n\nYes, that would be an optimization. Unfortunately currently the\naggregation logic doesn't have special case logic to start outputting\ntuples immediately when no aggregate functions are in use. In\nprinciple it's possible to teach it to do that, peeking at the code it\nseems that it wouldn't even be too hard to implement.\n\nCurrently your best options are to add an indexes for columns that you\nselect distinct values from, use a server side cursor and do the\ndistinct operation on the client (might need to adjust\ncursor_tuple_fraction before doing the query to make cost estimates\nbetter) or use a stored procedure to do the cursor + manual distinct\ntrick.\n\n> Similarly, the following query results in a sequential scan:\n>\n> select * from table1 where col1 <> col1;\n\nPostgreSQL query optimizer doesn't try to be a theorem prover and so\ndoesn't deduce the logical impossibility. For most queries, looking\nfor nonsensical would be a complete waste of time. The optimizer does\nnotice impossibilities that crop up during constant propagation, so\nWHERE false or WHERE 0 = 1 would work fine. It would be best to fix\nSequel to output literal constant false for PostgreSQL. However, I\nwonder if it's worth checking for this very specific case because it\nis a common idiom for Oracle users to implement constant false in\nwhere predicates due to Oracle not allowing top level literal booleans\nfor some arcane reason or another.\n\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Wed, 28 Mar 2012 03:18:24 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distinct + Limit"
},
{
"msg_contents": "Francois Deliege <[email protected]> writes:\n> I have the following table with millions of rows:\n \n> CREATE TABLE table1\n> (\n> col1 text,\n> col2 text,\n> col3 text,\n> col4 text,\n> col5 text,\n> col6 text\n> )\n\n> select col1 from table1 group by col1 limit 1;\n> select distinct on (col1) col1 from table1 limit 1;\n\n> select col1 from table1 group by col1 limit 2;\n> select distinct on (col1) col1 from table1 limit 2;\n\n> Performing any of these following queries results in a full sequential\nscan, followed by a hash aggregate, and then the limit.\n\nWell, if you had an index on the column, you would get a significantly\nbetter plan ...\n\n> Similarly, the following query results in a sequential scan:\n\n> select * from table1 where col1 <> col1;\n\n> This query is generated by the Sequel library abstraction layer in Ruby when filtering record based on a empty array of values. We fixed this by handling that case on the client side, but originally thought the server would have rewritten it and sent a empty result set.\n\nIt does not, and never will, because that would be an incorrect\noptimization. \"col1 <> col1\" isn't constant false, it's more like\n\"col1 is not null\". I'd suggest \"WHERE FALSE\", or \"WHERE 1 <> 1\"\nif you must, to generate a provably false constraint.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Mar 2012 10:13:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distinct + Limit "
},
{
"msg_contents": "On Wed, Mar 28, 2012 at 9:13 AM, Tom Lane <[email protected]> wrote:\n> Francois Deliege <[email protected]> writes:\n>> I have the following table with millions of rows:\n>\n>> CREATE TABLE table1\n>> (\n>> col1 text,\n>> col2 text,\n>> col3 text,\n>> col4 text,\n>> col5 text,\n>> col6 text\n>> )\n>\n>> select col1 from table1 group by col1 limit 1;\n>> select distinct on (col1) col1 from table1 limit 1;\n>\n>> select col1 from table1 group by col1 limit 2;\n>> select distinct on (col1) col1 from table1 limit 2;\n>\n>> Performing any of these following queries results in a full sequential\n> scan, followed by a hash aggregate, and then the limit.\n>\n> Well, if you had an index on the column, you would get a significantly\n> better plan ...\n>\n>> Similarly, the following query results in a sequential scan:\n>\n>> select * from table1 where col1 <> col1;\n>\n>> This query is generated by the Sequel library abstraction layer in Ruby when filtering record based on a empty array of values. We fixed this by handling that case on the client side, but originally thought the server would have rewritten it and sent a empty result set.\n>\n> It does not, and never will, because that would be an incorrect\n> optimization. \"col1 <> col1\" isn't constant false, it's more like\n> \"col1 is not null\". I'd suggest \"WHERE FALSE\", or \"WHERE 1 <> 1\"\n> if you must, to generate a provably false constraint.\n\n'col1 is distinct from col1' could be optimized like that. all though\nit would be pretty hard to imagine a use case for it.\n\nmerlin\n",
"msg_date": "Wed, 28 Mar 2012 11:39:57 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distinct + Limit"
}
] |
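Putting the two suggestions above together (an index on col1, plus doing the value-skipping yourself rather than waiting for the executor to learn it), the usual workaround for this class of query is a "loose index scan" emulated with a recursive CTE. A sketch against the table from the thread; the index name is made up:

CREATE INDEX table1_col1_idx ON table1 (col1);

-- Each recursive step jumps via the index to the next distinct col1 value
-- instead of scanning and hashing all 15M rows.
WITH RECURSIVE distinct_col1 AS (
    SELECT min(col1) AS col1 FROM table1
  UNION ALL
    SELECT (SELECT min(col1) FROM table1 WHERE col1 > d.col1)
    FROM distinct_col1 d
    WHERE d.col1 IS NOT NULL
)
SELECT col1
FROM distinct_col1
WHERE col1 IS NOT NULL
LIMIT 2;

With the LIMIT in place the recursion stops after a couple of index probes, so the full sequential scan and HashAggregate disappear; note that NULL values of col1 are not returned by this form.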
[
{
"msg_contents": "PostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. The application in question is inserting into a particular table that is not used by the other applications.\n\n\n1) What should I do to confirm that the database is the issue and not the applications?\n\n2) How can I identify where the bottle neck is occurring if the issue happens to be with the database?\n\nI have been using PostgreSQL for eight years. It is an amazing database.\n\nThanks,\n\nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\n\n\n\n\n\n\n\n\n\nPostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. \n The application in question is inserting into a particular table that is not used by the other applications.\n \n1) \nWhat should I do to confirm that the database is the issue and not the applications?\n2) \nHow can I identify where the bottle neck is occurring if the issue happens to be with the database?\n \nI have been using PostgreSQL for eight years. It is an amazing database.\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Thu, 29 Mar 2012 17:59:12 +0000",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "database slowdown while a lot of inserts occur"
},
{
"msg_contents": "I forgot to mention that the slowdown in particular for other applications is when they are trying to insert or update tables unrelated to the application mentioned in my prior application that does the massive small inserts.\n\n\nThanks,\n\nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Campbell, Lance\nSent: Thursday, March 29, 2012 12:59 PM\nTo: [email protected]\nSubject: [PERFORM] database slowdown while a lot of inserts occur\n\nPostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. The application in question is inserting into a particular table that is not used by the other applications.\n\n\n1) What should I do to confirm that the database is the issue and not the applications?\n\n2) How can I identify where the bottle neck is occurring if the issue happens to be with the database?\n\nI have been using PostgreSQL for eight years. It is an amazing database.\n\nThanks,\n\nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\n\n\n\n\n\n\n\n\n\nI forgot to mention that the slowdown in particular for other applications is when they are trying to insert or update tables unrelated to the application mentioned in my prior application that does the massive\n small inserts.\n \n\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Campbell, Lance\nSent: Thursday, March 29, 2012 12:59 PM\nTo: [email protected]\nSubject: [PERFORM] database slowdown while a lot of inserts occur\n\n\n \nPostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. \n The application in question is inserting into a particular table that is not used by the other applications.\n \n1) \nWhat should I do to confirm that the database is the issue and not the applications?\n2) \nHow can I identify where the bottle neck is occurring if the issue happens to be with the database?\n \nI have been using PostgreSQL for eight years. It is an amazing database.\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Thu, 29 Mar 2012 18:02:47 +0000",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "can you post all the configuration parameters related to the I/O activity?\nplus, could you post some stats from 'iostat' when this is happening?\nthx\n\nOn Thu, Mar 29, 2012 at 9:02 PM, Campbell, Lance <[email protected]> wrote:\n\n> I forgot to mention that the slowdown in particular for other\n> applications is when they are trying to insert or update tables unrelated\n> to the application mentioned in my prior application that does the massive\n> small inserts.****\n>\n> ** **\n>\n> ** **\n>\n> Thanks,****\n>\n> ** **\n>\n> Lance Campbell****\n>\n> Software Architect****\n>\n> Web Services at Public Affairs****\n>\n> 217-333-0382****\n>\n> ** **\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Campbell, Lance\n> *Sent:* Thursday, March 29, 2012 12:59 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] database slowdown while a lot of inserts occur****\n>\n> ** **\n>\n> PostgreSQL 9.0.x****\n>\n> We have around ten different applications that use the same database.\n> When one particular application is active it does an enormous number of\n> inserts. Each insert is very small. During this time the database seems\n> to slow down in general. The application in question is inserting into a\n> particular table that is not used by the other applications.****\n>\n> ** **\n>\n> **1) **What should I do to confirm that the database is the issue\n> and not the applications?****\n>\n> **2) **How can I identify where the bottle neck is occurring if the\n> issue happens to be with the database?****\n>\n> ** **\n>\n> I have been using PostgreSQL for eight years. It is an amazing database.*\n> ***\n>\n> ** **\n>\n> Thanks,****\n>\n> ** **\n>\n> Lance Campbell****\n>\n> Software Architect****\n>\n> Web Services at Public Affairs****\n>\n> 217-333-0382****\n>\n> ** **\n>\n\ncan you post all the configuration parameters related to the I/O activity?plus, could you post some stats from 'iostat' when this is happening?thxOn Thu, Mar 29, 2012 at 9:02 PM, Campbell, Lance <[email protected]> wrote:\n\n\n\nI forgot to mention that the slowdown in particular for other applications is when they are trying to insert or update tables unrelated to the application mentioned in my prior application that does the massive\n small inserts.\n \n\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Campbell, Lance\nSent: Thursday, March 29, 2012 12:59 PM\nTo: [email protected]\nSubject: [PERFORM] database slowdown while a lot of inserts occur\n\n\n \nPostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. \n The application in question is inserting into a particular table that is not used by the other applications.\n \n1) \nWhat should I do to confirm that the database is the issue and not the applications?\n2) \nHow can I identify where the bottle neck is occurring if the issue happens to be with the database?\n \nI have been using PostgreSQL for eight years. It is an amazing database.\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Thu, 29 Mar 2012 21:14:05 +0300",
"msg_from": "Filippos Kalamidas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "On a Linux system you can use tools like \"sar\" and \"iostat\" to watch\ndisk activity and view the writes/second or I am sure there are other\ntools you can use. Watch CPU and memory with \"top\" If it does appear\nto be an I/O issue there are\nsome things you can do in either hardware or software, or if it is a\nCPU/ memory issue building indexes or running updates on triggers\n\nA simple suggestion is:\n Move the bulk insert application to run during 'off' or 'slow'\nhours if possible.\n\nSome Software suggestions are:\n Use the PG \"Copy\" to do the bulk insert\nhttp://www.postgresql.org/docs/9.0/static/sql-copy.html\n (or)\n Drop the indexes (or triggers), do the inserts and build indexes\nand triggers.\n\nSome Hardware suggestions are dependendent on if it is I/O, CPU, or\nmemory bottleneck.\n\nDeron\n\n\n\nOn Thu, Mar 29, 2012 at 11:59 AM, Campbell, Lance <[email protected]> wrote:\n> PostgreSQL 9.0.x\n>\n> We have around ten different applications that use the same database. When\n> one particular application is active it does an enormous number of inserts.\n> Each insert is very small. During this time the database seems to slow down\n> in general. The application in question is inserting into a particular\n> table that is not used by the other applications.\n>\n>\n>\n> 1) What should I do to confirm that the database is the issue and not\n> the applications?\n>\n> 2) How can I identify where the bottle neck is occurring if the issue\n> happens to be with the database?\n>\n>\n>\n> I have been using PostgreSQL for eight years. It is an amazing database.\n>\n>\n>\n> Thanks,\n>\n>\n>\n> Lance Campbell\n>\n> Software Architect\n>\n> Web Services at Public Affairs\n>\n> 217-333-0382\n>\n>\n",
"msg_date": "Thu, 29 Mar 2012 12:17:24 -0600",
"msg_from": "Deron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "Lance,\n\nMay small inserts cause frequent fsyncs. Is there any way those small inserts can be batched into some larger sets of inserts that use copy to perform the load?\n\nBob Lunney\n\n\n________________________________\n From: \"Campbell, Lance\" <[email protected]>\nTo: \"Campbell, Lance\" <[email protected]>; \"[email protected]\" <[email protected]> \nSent: Thursday, March 29, 2012 1:02 PM\nSubject: Re: [PERFORM] database slowdown while a lot of inserts occur\n \n\n \nI forgot to mention that the slowdown in particular for other applications is when they are trying to insert or update tables unrelated to the application mentioned in my prior application that does the massive small inserts.\n \n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n \nFrom:[email protected] [mailto:[email protected]] On Behalf Of Campbell, Lance\nSent: Thursday, March 29, 2012 12:59 PM\nTo: [email protected]\nSubject: [PERFORM] database slowdown while a lot of inserts occur\n \nPostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. The application in question is inserting into a particular table that is not used by the other applications.\n \n1) What should I do to confirm that the database is the issue and not the applications?\n2) How can I identify where the bottle neck is occurring if the issue happens to be with the database?\n \nI have been using PostgreSQL for eight years. It is an amazing database.\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\nLance,May small inserts cause frequent fsyncs. Is there any way those small inserts can be batched into some larger sets of inserts that use copy to perform the load?Bob Lunney From: \"Campbell, Lance\" <[email protected]> To: \"Campbell, Lance\" <[email protected]>; \"[email protected]\" <[email protected]> Sent: Thursday, March 29, 2012 1:02 PM Subject: Re: [PERFORM] database slowdown while a lot of inserts occur \n\n\n\n\nI forgot to mention that the slowdown in particular for other applications is when they are trying to insert or update tables unrelated to the application mentioned in my prior application that does the massive\n small inserts.\n \n\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Campbell, Lance\nSent: Thursday, March 29, 2012 12:59 PM\nTo: [email protected]\nSubject: [PERFORM] database slowdown while a lot of inserts occur\n\n\n \nPostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. \n The application in question is inserting into a particular table that is not used by the other applications.\n \n1) \nWhat should I do to confirm that the database is the issue and not the applications?\n2) \nHow can I identify where the bottle neck is occurring if the issue happens to be with the database?\n \nI have been using PostgreSQL for eight years. It is an amazing database.\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Thu, 29 Mar 2012 12:27:08 -0700 (PDT)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "\n\nOn 03/29/2012 03:27 PM, Bob Lunney wrote:\n> Lance,\n>\n> May small inserts cause frequent fsyncs. Is there any way those small \n> inserts can be batched into some larger sets of inserts that use copy \n> to perform the load?\n\n\n\nOr possibly a prepared statement called many times in a single \ntransaction, if you're not using that already. It's not as efficient as \nCOPY, but it's often a much less extensive change to the code.\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Thu, 29 Mar 2012 15:39:36 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "Lance,\n\nHave faced the same issue with thousands of small inserts (actually they were inserts/updates) causing the database to slowdown. You have received good suggestions from the list, but listing them as points will make the issue clearer:\n\n1) Disk configuration: RAID 5 was killing the performance after the database grew beyond 100 GB. Getting a RAID 10 with 12 spindles made a world of difference in my case. You can use iostat as Deron has suggested below to get information of latency which should help you find if disks are a bottleneck. Unless server RAM is very small and it also doubles up as application server or has other processes running, the RAM should not be a bottleneck.\n\nAlso have separate logging and data disks, which has been suggested in many posts in past.\n\n2) Invoking Batch mode in program: In JDBC, there is a batch insert mode. Invoking the batch mode for a set of records has increased the efficiency of inserts in my case. It would be safe to suggest that use of batch mode in programming language you have used will give improved speeds.\n\n3) Dropping indexes/ triggers: This will not work if the application has multiple instances running at same time OR if the insert is actually an insert/update.\n\n4) You should think of using COPY command since you have mentioned that the table is NOT used by other applications, but caveat of multiple instances mentioned above will still hold true.\n\n5) Enabling autovacuum and autoanalyse : A must. Infact you should force a vacuum and analyze if the insert batch is large.\n\n\nHTH,\n\nShrirang Chitnis\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee. The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited. If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Deron\nSent: Thursday, March 29, 2012 11:47 PM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] database slowdown while a lot of inserts occur\n\nOn a Linux system you can use tools like \"sar\" and \"iostat\" to watch\ndisk activity and view the writes/second or I am sure there are other\ntools you can use. Watch CPU and memory with \"top\" If it does appear\nto be an I/O issue there are\nsome things you can do in either hardware or software, or if it is a\nCPU/ memory issue building indexes or running updates on triggers\n\nA simple suggestion is:\n Move the bulk insert application to run during 'off' or 'slow'\nhours if possible.\n\nSome Software suggestions are:\n Use the PG \"Copy\" to do the bulk insert\nhttp://www.postgresql.org/docs/9.0/static/sql-copy.html\n (or)\n Drop the indexes (or triggers), do the inserts and build indexes\nand triggers.\n\nSome Hardware suggestions are dependendent on if it is I/O, CPU, or\nmemory bottleneck.\n\nDeron\n\n\n\nOn Thu, Mar 29, 2012 at 11:59 AM, Campbell, Lance <[email protected]> wrote:\n> PostgreSQL 9.0.x\n>\n> We have around ten different applications that use the same database. When\n> one particular application is active it does an enormous number of inserts.\n> Each insert is very small. 
During this time the database seems to slow down\n> in general. The application in question is inserting into a particular\n> table that is not used by the other applications.\n>\n>\n>\n> 1) What should I do to confirm that the database is the issue and not\n> the applications?\n>\n> 2) How can I identify where the bottle neck is occurring if the issue\n> happens to be with the database?\n>\n>\n>\n> I have been using PostgreSQL for eight years. It is an amazing database.\n>\n>\n>\n> Thanks,\n>\n>\n>\n> Lance Campbell\n>\n> Software Architect\n>\n> Web Services at Public Affairs\n>\n> 217-333-0382\n>\n>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 29 Mar 2012 16:17:38 -0400",
"msg_from": "Shrirang Chitnis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "On Thu, Mar 29, 2012 at 12:02 PM, Campbell, Lance <[email protected]> wrote:\n> I forgot to mention that the slowdown in particular for other applications\n> is when they are trying to insert or update tables unrelated to the\n> application mentioned in my prior application that does the massive small\n> inserts.\n\nIt sounds like you're just flooding your IO channels. Can you\nthrottle your write rate in your main application that's inserting so\nmany inserts? Something that sleeps 10ms between every hundred rows\nor some such?\n",
"msg_date": "Thu, 29 Mar 2012 15:36:46 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "On 29.3.2012 21:27, Bob Lunney wrote:\n> Lance,\n> \n> May small inserts cause frequent fsyncs. Is there any way those small\n> inserts can be batched into some larger sets of inserts that use copy to\n> perform the load?\n\nNot necessarily - fsync happens at COMMIT time, not when the INSERT is\nperformed (unless each INSERT stands on it's own).\n\nTomas\n",
"msg_date": "Sat, 31 Mar 2012 03:11:08 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "Hi,\n\nOn 29.3.2012 19:59, Campbell, Lance wrote:\n> PostgreSQL 9.0.x\n> \n> We have around ten different applications that use the same database. \n> When one particular application is active it does an enormous number of\n> inserts. Each insert is very small. During this time the database\n> seems to slow down in general. The application in question is inserting\n> into a particular table that is not used by the other applications.\n\nCan you provide more info? Show us some vmstat / 'iostat -x' logs so\nthat we can see what kind of bottleneck are you hitting. Provide more\ndetails about your system (especially I/O subsystem - what drives, what\nRAID config etc.)\n\nAlso, we need more details about the workload. Is each INSERT a separate\ntransaction or are they grouped into transactions fo multiple INSERTs?\n\n> 1) What should I do to confirm that the database is the issue and\n> not the applications?\n\nWell, usually the application is the culprit. Some applications are\ndesigned so that it's almost certain there was a 'let's poke the\ndatabase as hard as possible' goal at the beginning. Not sure it's this\ncase, though.\n\nMight be a misconfigured database too - what are the basic parameters\n(shared buffers, ...)?\n\n\n> 2) How can I identify where the bottle neck is occurring if the\n> issue happens to be with the database?\n\nWatching 'iostat -x' or 'top' will usually point you the right\ndirection. Is the CPU fully utilized => you're doing something that\nneeds more CPU time than you have? Is the I/O wait high (say above 50%)?\nWell, you have issues with I/O bottlenecks (either random or\nsequential). The less visible bottlenecks are usually related to memory,\nbus bandwidth etc.\n\nTomas\n",
"msg_date": "Sat, 31 Mar 2012 03:21:33 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "Tomas,\n\nYou are correct. I was assuming that each insert was issued as an implicit transaction, without the benefit of an explicit BEGIN/COMMIT batching many of them together, as I've seen countless times in tight loops trying to pose as a batch insert.\n\nBob\n\n\n\n________________________________\n From: Tomas Vondra <[email protected]>\nTo: [email protected] \nSent: Friday, March 30, 2012 8:11 PM\nSubject: Re: [PERFORM] database slowdown while a lot of inserts occur\n \nOn 29.3.2012 21:27, Bob Lunney wrote:\n> Lance,\n> \n> May small inserts cause frequent fsyncs. Is there any way those small\n> inserts can be batched into some larger sets of inserts that use copy to\n> perform the load?\n\nNot necessarily - fsync happens at COMMIT time, not when the INSERT is\nperformed (unless each INSERT stands on it's own).\n\nTomas\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nTomas,You are correct. I was assuming that each insert was issued as an implicit transaction, without the benefit of an explicit BEGIN/COMMIT batching many of them together, as I've seen countless times in tight loops trying to pose as a batch insert.Bob From: Tomas Vondra <[email protected]> To:\n [email protected] Sent: Friday, March 30, 2012 8:11 PM Subject: Re: [PERFORM] database slowdown while a lot of inserts occur \nOn 29.3.2012 21:27, Bob Lunney wrote:> Lance,> > May small inserts cause frequent fsyncs. Is there any way those small> inserts can be batched into some larger sets of inserts that use copy to> perform the load?Not necessarily - fsync happens at COMMIT time, not when the INSERT isperformed (unless each INSERT stands on it's own).Tomas-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 31 Mar 2012 19:20:03 -0700 (PDT)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
},
{
"msg_contents": "Few words regarding small inserts and a lot of fsyncs:\nIf it is your problem, you can fix this by using battery-backed raid card.\nSimilar effect can be reached by turning synchronious commit off. Note\nthat the latter may make few last commits lost in case of sudden reboot.\nBut you can at least test if moving to BBU will help you. (Dunno if this\nsetting can be changed with SIGHUP without restart).\nNote that this may still be a lot of random writes. And in case of RAID5 -\na lot of random reads too. I don't think batching will help other\napplications. This is the tool to help application that uses batching. If\nyou have random writes, look at HOT updates - they may help you if you will\nfollow requirements.\nCheck your checkpoints - application writes to commit log first (sequential\nwrite), then during checkpoints data is written to tables (random writes) -\nlonger checkpoints may make you life easier. Try to increase\ncheckpoint_segments.\nIf you have alot of data written - try to move you commit logs to another\ndrive/partition.\nIf you have good raid card with memory and BBU, you may try to disable read\ncache on it (leaving only write cache). Read cache is usually good at OS\nlevel (with much more memory) and fast writes need BBU-protected write\ncache.\n\nBest regards, Vitalii Tymchyshyn\n\n2012/3/29 Campbell, Lance <[email protected]>\n\n> PostgreSQL 9.0.x****\n>\n> We have around ten different applications that use the same database.\n> When one particular application is active it does an enormous number of\n> inserts. Each insert is very small. During this time the database seems\n> to slow down in general. The application in question is inserting into a\n> particular table that is not used by the other applications.****\n>\n> ** **\n>\n> **1) **What should I do to confirm that the database is the issue\n> and not the applications?****\n>\n> **2) **How can I identify where the bottle neck is occurring if the\n> issue happens to be with the database?****\n>\n> ** **\n>\n> I have been using PostgreSQL for eight years. It is an amazing database.*\n> ***\n>\n> ** **\n>\n> Thanks,****\n>\n> ** **\n>\n> Lance Campbell****\n>\n> Software Architect****\n>\n> Web Services at Public Affairs****\n>\n> 217-333-0382****\n>\n> ** **\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nFew words regarding small inserts and a lot of fsyncs:If it is your problem, you can fix this by using battery-backed raid card. Similar effect can be reached by turning synchronious commit off. Note that the latter may make few last commits lost in case of sudden reboot. But you can at least test if moving to BBU will help you. (Dunno if this setting can be changed with SIGHUP without restart).\nNote that this may still be a lot of random writes. And in case of RAID5 - a lot of random reads too. I don't think batching will help other applications. This is the tool to help application that uses batching. If you have random writes, look at HOT updates - they may help you if you will follow requirements. \nCheck your checkpoints - application writes to commit log first (sequential write), then during checkpoints data is written to tables (random writes) - longer checkpoints may make you life easier. Try to increase checkpoint_segments.\nIf you have alot of data written - try to move you commit logs to another drive/partition.If you have good raid card with memory and BBU, you may try to disable read cache on it (leaving only write cache). 
Read cache is usually good at OS level (with much more memory) and fast writes need BBU-protected write cache.\nBest regards, Vitalii Tymchyshyn2012/3/29 Campbell, Lance <[email protected]>\n\n\nPostgreSQL 9.0.x\nWe have around ten different applications that use the same database. When one particular application is active it does an enormous number of inserts. Each insert is very small. During this time the database seems to slow down in general. \n The application in question is inserting into a particular table that is not used by the other applications.\n \n1) \nWhat should I do to confirm that the database is the issue and not the applications?\n2) \nHow can I identify where the bottle neck is occurring if the issue happens to be with the database?\n \nI have been using PostgreSQL for eight years. It is an amazing database.\n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n \n\n\n-- Best regards, Vitalii Tymchyshyn",
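A minimal sketch of the knobs mentioned in the message above, for the PostgreSQL 9.0.x setup discussed in this thread; the numeric values are placeholders, not recommendations:

-- synchronous_commit is a per-session setting (it can also be changed in
-- postgresql.conf and picked up with a reload); turning it off trades a
-- small window of possibly lost commits after a crash for cheaper commits.
SET synchronous_commit = off;

-- Checkpoint spacing lives in postgresql.conf and is picked up with a
-- reload; shown as comments because they are not SQL statements:
--   checkpoint_segments = 32
--   checkpoint_completion_target = 0.8

-- Confirm the values currently in effect:
SHOW synchronous_commit;
SHOW checkpoint_segments;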
"msg_date": "Mon, 2 Apr 2012 10:14:01 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database slowdown while a lot of inserts occur"
}
] |
[
{
"msg_contents": "PostgreSQL 9.0.x\nWhen PostgreSQL storage is using a relatively large raid 5 or 6 array is there any value in having your tables distributed across multiple tablespaces if those tablespaces will exists on the same raid array? I understand the value if you were to have the tablespaces on different raid arrays. But what about on the same one?\n\n\nThanks,\n\nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\n\n\n\n\n\n\n\n\n\nPostgreSQL 9.0.x\nWhen PostgreSQL storage is using a relatively large raid 5 or 6 array is there any value in having your tables distributed across multiple tablespaces if those tablespaces will exists on the same raid array? I understand the value if\n you were to have the tablespaces on different raid arrays. But what about on the same one?\n \n \nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Fri, 30 Mar 2012 14:45:36 +0000",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tablespaces on a raid configuration"
},
{
"msg_contents": "On Fri, Mar 30, 2012 at 02:45:36PM +0000, Campbell, Lance wrote:\n> PostgreSQL 9.0.x\n> When PostgreSQL storage is using a relatively large raid 5 or 6 array is there any value in having your tables distributed across multiple tablespaces if those tablespaces will exists on the same raid array? I understand the value if you were to have the tablespaces on different raid arrays. But what about on the same one?\n> \n> \n> Thanks,\n> \n> Lance Campbell\n> Software Architect\n> Web Services at Public Affairs\n> 217-333-0382\n> \n\nI have seen previous discussions about using different filesystems versus\na single filesystem and one advantage that multiple tablespaces have is\nthat an fsync on one table/tablespace would not block or be blocked by\nan fsync on a different table/tablespace at the OS level.\n\nRegards,\nKen\n",
"msg_date": "Fri, 30 Mar 2012 09:53:48 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces on a raid configuration"
},
{
"msg_contents": "On Fri, Mar 30, 2012 at 8:45 AM, Campbell, Lance <[email protected]> wrote:\n\n> PostgreSQL 9.0.x****\n>\n> When PostgreSQL storage is using a relatively large raid 5 or 6 array is\n> there any value in having your tables distributed across multiple\n> tablespaces if those tablespaces will exists on the same raid array? I\n> understand the value if you were to have the tablespaces on different raid\n> arrays. But what about on the same one?****\n>\n>\n>\nOur application is a combination of OLTP and OLAP. We've successfully\nsplit the database into 3 different tablespaces.\n\n 1. RAID Group A is RAID 10 contains /var/lib/pgsql/data and the OLTP\ndatabase (default tablespace)\n 2. RAID Group B is a RAID 10 for the indexes on the data warehouse (index\ntablespace)\n 3. RAID Group C is a RAID 5 containing the actual data warehouse (data\ntablespace)\n\nA more optimum configuration would include another RAID 10 for the indexes\nfor the OLTP but we ran out of drives to create a RAID Group D and the\nabove configuration works well enough.\n\nBefore going with RAID 5, please review http://www.baarf.com/.\n\n-Greg\n\nOn Fri, Mar 30, 2012 at 8:45 AM, Campbell, Lance <[email protected]> wrote:\n\n\nPostgreSQL 9.0.x\nWhen PostgreSQL storage is using a relatively large raid 5 or 6 array is there any value in having your tables distributed across multiple tablespaces if those tablespaces will exists on the same raid array? I understand the value if\n you were to have the tablespaces on different raid arrays. But what about on the same one?\nOur application is a combination of OLTP and OLAP. We've successfully split the database into 3 different tablespaces.\n 1. RAID Group A is RAID 10 contains /var/lib/pgsql/data and the OLTP database (default tablespace) 2. RAID Group B is a RAID 10 for the indexes on the data warehouse (index tablespace) 3. RAID Group C is a RAID 5 containing the actual data warehouse (data tablespace)\nA more optimum configuration would include another RAID 10 for the indexes for the OLTP but we ran out of drives to create a RAID Group D and the above configuration works well enough.\nBefore going with RAID 5, please review http://www.baarf.com/.-Greg",
"msg_date": "Fri, 30 Mar 2012 09:02:21 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces on a raid configuration"
},
{
"msg_contents": "\n\nOn 03/30/2012 10:45 AM, Campbell, Lance wrote:\n>\n> PostgreSQL 9.0.x\n>\n> When PostgreSQL storage is using a relatively large raid 5 or 6 \n> array is there any value in having your tables distributed across \n> multiple tablespaces if those tablespaces will exists on the same raid \n> array? I understand the value if you were to have the tablespaces on \n> different raid arrays. But what about on the same one?\n>\n>\n\nNot answering your question, but standard advice is not to use RAID 5 or \n6, but RAID 10 for databases. Not sure if that still hold if you're \nusing SSDs.\n\ncheers\n\nandrew\n",
"msg_date": "Fri, 30 Mar 2012 11:02:44 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces on a raid configuration"
},
{
"msg_contents": "On Fri, Mar 30, 2012 at 10:02 AM, Andrew Dunstan <[email protected]> wrote:\n> Not answering your question, but standard advice is not to use RAID 5 or 6,\n> but RAID 10 for databases. Not sure if that still hold if you're using SSDs.\n\nYeah, for SSD the equations may change. Parity based RAID has two\nproblems: performance due to writes having to do a read before writing\nin order to calculate parity and safety (especially for raid 5) since\nyou are at greater risk of having a second drive pop while you're\nrebuilding your volume. In both things the SSD might significantly\nreduce the negative impacts: read and write performance are highly\nasymmetric greatly reducing or even eliminating observed cost of the\n'write hole'. Also, huge sequential speeds and generally smaller\ndevice sizes mean very rapid rebuild time. Also, higher cost/gb can\nplay in. Food for thought.\n\nmerlin\n",
"msg_date": "Fri, 30 Mar 2012 11:11:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces on a raid configuration"
},
{
"msg_contents": "On Fri, Mar 30, 2012 at 7:53 AM, [email protected] <[email protected]> wrote:\n> On Fri, Mar 30, 2012 at 02:45:36PM +0000, Campbell, Lance wrote:\n>> PostgreSQL 9.0.x\n>> When PostgreSQL storage is using a relatively large raid 5 or 6 array is there any value in having your tables distributed across multiple tablespaces if those tablespaces will exists on the same raid array? I understand the value if you were to have the tablespaces on different raid arrays. But what about on the same one?\n>>\n>>\n>> Thanks,\n>>\n>> Lance Campbell\n>> Software Architect\n>> Web Services at Public Affairs\n>> 217-333-0382\n>>\n>\n> I have seen previous discussions about using different filesystems versus\n> a single filesystem and one advantage that multiple tablespaces have is\n> that an fsync on one table/tablespace would not block or be blocked by\n> an fsync on a different table/tablespace at the OS level.\n\nAnother advantage is that you can use a non-journaling FS for the WAL\n(ext2) and a journaling FS for the data (ext4 etc.). I was told that\nthere's no reason to use a journaling fs for the WAL since the WAL is\na journal.\n\nCraig\n",
"msg_date": "Fri, 30 Mar 2012 10:30:41 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces on a raid configuration"
},
{
"msg_contents": "On 30.3.2012 16:53, [email protected] wrote:\n> On Fri, Mar 30, 2012 at 02:45:36PM +0000, Campbell, Lance wrote:\n>> PostgreSQL 9.0.x\n>> When PostgreSQL storage is using a relatively large raid 5 or 6 array is there any value in having your tables distributed across multiple tablespaces if those tablespaces will exists on the same raid array? I understand the value if you were to have the tablespaces on different raid arrays. But what about on the same one?\n>>\n>>\n>> Thanks,\n>>\n>> Lance Campbell\n>> Software Architect\n>> Web Services at Public Affairs\n>> 217-333-0382\n>>\n> \n> I have seen previous discussions about using different filesystems versus\n> a single filesystem and one advantage that multiple tablespaces have is\n> that an fsync on one table/tablespace would not block or be blocked by\n> an fsync on a different table/tablespace at the OS level.\n\nNo. What matters is a physical device. If you have a drive that can do\njust 120 seeks/fsyncs per second (or more, depends on the speed), then\neven if you divide that into multiple filesystems you're still stuck\nwith the total of 120 seeks. I.e. splitting that into 10 partitions\nwon't give you 1200 seeks ...\n\nOP mentions he's using RAID-5 or 6 - that's pretty bad because it\neffectively creates one huge device. Splitting this into filesystem will\nbehave rather bad, because all the drives are rather tightly coupled\nbecause of to the parity.\n\nIf you can create the filesystems on different devices, then you're\ngolden and this can really help.\n\nAnd it's not just about fsync operations - WAL is written in sequential\nmanner. By placing it on the same device as data files you're\neffectively forcing it to be written randomly, because the the database\nhas to write a WAL record, seeks somewhere else to read something, etc.\n\nTomas\n",
"msg_date": "Sat, 31 Mar 2012 03:32:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces on a raid configuration"
},
{
"msg_contents": "On Fri, Mar 30, 2012 at 9:32 PM, Tomas Vondra <[email protected]> wrote:\n\n> And it's not just about fsync operations - WAL is written in sequential\n> manner. By placing it on the same device as data files you're\n> effectively forcing it to be written randomly, because the the database\n> has to write a WAL record, seeks somewhere else to read something, etc.\n\nOr, if you put WAL on a journalled FS, even if it's on dedicated spindles ;-)\n\na.\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Sat, 31 Mar 2012 09:42:49 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces on a raid configuration"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe are running performance tests using PG 8.3 on a Windows 2008 R2 machine connecting locally over TCP.\nIn our tests, we have found that it takes ~3ms to update a table with ~25 columns and 60K records, with one column indexed.\nWe have reached this number after many tweaks of the database configuraiton and one of the changes we made was to perform the updates in batches of 5K as opposed to the pervious transaction per event. Note that our use of batches is to have only one transaction, but still each of the 5K events is independently SELECTing and UPDATEing records, i.e. it is not all contained in a stored procedure or such.\n\nStill these times are too high for us and we are looking to lower them and I am wondering about the TCP/IP overhead of passing the information back and forth. Does anyone have any numbers in what the TCP could cost in the configuration mentioned above or pointers on how to test it?\n\n\nMany thanks,\nOfer\n\n\n\n\n\nHi \nall,\n \nWe are running \nperformance tests using PG 8.3 on a Windows 2008 R2 machine connecting locally \nover TCP.\nIn our tests, we \nhave found that it takes ~3ms to update a table with ~25 columns and 60K \nrecords, with one column indexed.\nWe have reached this \nnumber after many tweaks of the database configuraiton and one of the changes we \nmade was to perform the updates in batches of 5K as opposed to the pervious \ntransaction per event. Note that our use of batches is to have only one \ntransaction, but still each of the 5K events is independently SELECTing and \nUPDATEing records, i.e. it is not all contained in a stored procedure or \nsuch.\n \nStill these times \nare too high for us and we are looking to lower them and I am wondering about \nthe TCP/IP overhead of passing the information back and forth. Does anyone \nhave any numbers in what the TCP could cost in the configuration mentioned above \nor pointers on how to test it?\n \n \nMany \nthanks,\nOfer",
"msg_date": "Sun, 1 Apr 2012 23:24:43 +0300",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "TCP Overhead on Local Loopback"
},
{
"msg_contents": "You could try using Unix domain socket and see if the performance improves. A relevant link:\n\nhttp://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets\n\n\n\n________________________________\n From: Ofer Israeli <[email protected]>\nTo: \"[email protected]\" <[email protected]> \nSent: Sunday, April 1, 2012 4:24 PM\nSubject: [PERFORM] TCP Overhead on Local Loopback\n \n\nHi \nall,\n \nWe are running \nperformance tests using PG 8.3 on a Windows 2008 R2 machine connecting locally \nover TCP.\nIn our tests, we \nhave found that it takes ~3ms to update a table with ~25 columns and 60K \nrecords, with one column indexed.\nWe have reached this \nnumber after many tweaks of the database configuraiton and one of the changes we \nmade was to perform the updates in batches of 5K as opposed to the pervious \ntransaction per event. Note that our use of batches is to have only one \ntransaction, but still each of the 5K events is independently SELECTing and \nUPDATEing records, i.e. it is not all contained in a stored procedure or \nsuch.\n \nStill these times \nare too high for us and we are looking to lower them and I am wondering about \nthe TCP/IP overhead of passing the information back and forth. Does anyone \nhave any numbers in what the TCP could cost in the configuration mentioned above \nor pointers on how to test it?\n \n \nMany \nthanks,\nOfer\nYou could try using Unix domain socket and see if the performance improves. A relevant link:http://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets From: Ofer Israeli <[email protected]> To:\n \"[email protected]\" <[email protected]> Sent: Sunday, April 1, 2012 4:24 PM Subject: [PERFORM] TCP Overhead on Local Loopback \n\n\nHi \nall,\n \nWe are running \nperformance tests using PG 8.3 on a Windows 2008 R2 machine connecting locally \nover TCP.\nIn our tests, we \nhave found that it takes ~3ms to update a table with ~25 columns and 60K \nrecords, with one column indexed.\nWe have reached this \nnumber after many tweaks of the database configuraiton and one of the changes we \nmade was to perform the updates in batches of 5K as opposed to the pervious \ntransaction per event. Note that our use of batches is to have only one \ntransaction, but still each of the 5K events is independently SELECTing and \nUPDATEing records, i.e. it is not all contained in a stored procedure or \nsuch.\n \nStill these times \nare too high for us and we are looking to lower them and I am wondering about \nthe TCP/IP overhead of passing the information back and forth. Does anyone \nhave any numbers in what the TCP could cost in the configuration mentioned above \nor pointers on how to test it?\n \n \nMany \nthanks,\nOfer",
"msg_date": "Sun, 1 Apr 2012 15:01:49 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "On Sun, Apr 1, 2012 at 1:24 PM, Ofer Israeli <[email protected]> wrote:\n> Hi all,\n>\n> We are running performance tests using PG 8.3 on a Windows 2008 R2 machine\n> connecting locally over TCP.\n\n8.3 will be not supported in under a year. Time to start testing upgrades.\n\nhttp://www.postgresql.org/support/versioning/\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Sun, 1 Apr 2012 16:48:10 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "\n\nOn 04/01/2012 06:01 PM, Andy wrote:\n> You could try using Unix domain socket and see if the performance \n> improves. A relevant link:\n\nHe said Windows. There are no Unix domain sockets on Windows. (And \nplease don't top-post)\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Sun, 01 Apr 2012 19:54:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan <[email protected]> wrote:\n>> You could try using Unix domain socket and see if the performance\n>> improves. A relevant link:\n>\n>\n> He said Windows. There are no Unix domain sockets on Windows. (And please\n> don't top-post)\n\nWindows supports named pipes, which are functionally similar, but I\ndon't think pg supports them.\n",
"msg_date": "Sun, 1 Apr 2012 21:29:35 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "\n\nOn 04/01/2012 08:29 PM, Claudio Freire wrote:\n> On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan<[email protected]> wrote:\n>>> You could try using Unix domain socket and see if the performance\n>>> improves. A relevant link:\n>>\n>> He said Windows. There are no Unix domain sockets on Windows. (And please\n>> don't top-post)\n> Windows supports named pipes, which are functionally similar, but I\n> don't think pg supports them.\n>\n\nCorrect, so telling the OP to have a look at them isn't at all helpful. \nAnd they are not supported on all Windows platforms we support either \n(specifically not on XP, AIUI).\n\ncheers\n\nandrew\n",
"msg_date": "Sun, 01 Apr 2012 21:11:03 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "On Sun, Apr 1, 2012 at 6:11 PM, Andrew Dunstan <[email protected]> wrote:\n\n>\n>\n> On 04/01/2012 08:29 PM, Claudio Freire wrote:\n>\n>> On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan<[email protected]>\n>> wrote:\n>>\n>>> You could try using Unix domain socket and see if the performance\n>>>> improves. A relevant link:\n>>>>\n>>>\n>>> He said Windows. There are no Unix domain sockets on Windows. (And please\n>>> don't top-post)\n>>>\n>> Windows supports named pipes, which are functionally similar, but I\n>> don't think pg supports them.\n>>\n>>\n> Correct, so telling the OP to have a look at them isn't at all helpful.\n> And they are not supported on all Windows platforms we support either\n> (specifically not on XP, AIUI).\n>\n\nBut suggesting moving away from TCP/IP with no actual evidence that it is\nnetwork overhead that is the problem is a little premature, regardless.\n What, exactly, are the set of operations that each update is performing\nand is there any way to batch them into fewer statements within the\ntransaction. For example, could you insert all 60,000 records into a\ntemporary table via COPY, then run just a couple of queries to do bulk\ninserts and bulk updates into the destination tble via joins to the temp\ntable? 60,000 rows updated with 25 columns, 1 indexed in 3ms is not\nexactly slow. That's a not insignificant quantity of data which must be\ntransferred from client to server, parsed, and then written to disk,\nregardless of TCP overhead. That is happening via at least 60,000\nindividual SQL statements that are not even prepared statements. I don't\nimagine that TCP overhead is really the problem here. Regardless, you can\nreduce both statement parse time and TCP overhead by doing bulk inserts\n(COPY) followed by multi-row selects/updates into the final table. I don't\nknow how much below 3ms you are going to get, but that's going to be as\nfast as you can possibly do it on your hardware, assuming the rest of your\nconfiguration is as efficient as possible.\n\nOn Sun, Apr 1, 2012 at 6:11 PM, Andrew Dunstan <[email protected]> wrote:\n\n\nOn 04/01/2012 08:29 PM, Claudio Freire wrote:\n\nOn Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan<[email protected]> wrote:\n\nYou could try using Unix domain socket and see if the performance\nimproves. A relevant link:\n\n\nHe said Windows. There are no Unix domain sockets on Windows. (And please\ndon't top-post)\n\nWindows supports named pipes, which are functionally similar, but I\ndon't think pg supports them.\n\n\n\nCorrect, so telling the OP to have a look at them isn't at all helpful. And they are not supported on all Windows platforms we support either (specifically not on XP, AIUI).But suggesting moving away from TCP/IP with no actual evidence that it is network overhead that is the problem is a little premature, regardless. What, exactly, are the set of operations that each update is performing and is there any way to batch them into fewer statements within the transaction. For example, could you insert all 60,000 records into a temporary table via COPY, then run just a couple of queries to do bulk inserts and bulk updates into the destination tble via joins to the temp table? 60,000 rows updated with 25 columns, 1 indexed in 3ms is not exactly slow. That's a not insignificant quantity of data which must be transferred from client to server, parsed, and then written to disk, regardless of TCP overhead. That is happening via at least 60,000 individual SQL statements that are not even prepared statements. 
I don't imagine that TCP overhead is really the problem here. Regardless, you can reduce both statement parse time and TCP overhead by doing bulk inserts (COPY) followed by multi-row selects/updates into the final table. I don't know how much below 3ms you are going to get, but that's going to be as fast as you can possibly do it on your hardware, assuming the rest of your configuration is as efficient as possible.",
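A minimal sketch of the COPY-into-a-temp-table approach suggested above; all table and column names are placeholders:

BEGIN;

-- Stage the whole batch with one COPY instead of thousands of statements.
CREATE TEMP TABLE staging (id bigint, col_a text, col_b integer) ON COMMIT DROP;
COPY staging FROM STDIN;    -- the client streams the 60K rows here

-- One set-based UPDATE joined to the staging table ...
UPDATE target t
   SET col_a = s.col_a,
       col_b = s.col_b
  FROM staging s
 WHERE t.id = s.id;

-- ... and one INSERT for rows that did not already exist.
INSERT INTO target (id, col_a, col_b)
SELECT s.id, s.col_a, s.col_b
  FROM staging s
  LEFT JOIN target t ON t.id = s.id
 WHERE t.id IS NULL;

COMMIT;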
"msg_date": "Mon, 2 Apr 2012 01:25:09 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "\n\nOn 04/01/2012 09:11 PM, Andrew Dunstan wrote:\n>\n>\n> On 04/01/2012 08:29 PM, Claudio Freire wrote:\n>> On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan<[email protected]> \n>> wrote:\n>>>> You could try using Unix domain socket and see if the performance\n>>>> improves. A relevant link:\n>>>\n>>> He said Windows. There are no Unix domain sockets on Windows. (And \n>>> please\n>>> don't top-post)\n>> Windows supports named pipes, which are functionally similar, but I\n>> don't think pg supports them.\n>>\n>\n> Correct, so telling the OP to have a look at them isn't at all \n> helpful. And they are not supported on all Windows platforms we \n> support either (specifically not on XP, AIUI).\n>\n>\n\nApparently I was mistaken about the availability. However, my initial \npoint remains. Since all our client/server comms on Windows are over \nTCP, telling the OP to look at Unix domain sockets is unhelpful.\n\ncheers\n\nandrew\n",
"msg_date": "Mon, 02 Apr 2012 07:34:45 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "On Sun, Apr 1, 2012 at 1:24 PM, Ofer Israeli <[email protected]> wrote:\n> Hi all,\n>\n> We are running performance tests using PG 8.3 on a Windows 2008 R2 machine\n> connecting locally over TCP.\n> In our tests, we have found that it takes ~3ms to update a table with ~25\n> columns and 60K records, with one column indexed.\n\nI assume you mean 3ms per row, as per 3ms per 60,000 rows (or per\n5,000 rows?) seems improbably fast.\n\n> We have reached this number after many tweaks of the database configuraiton\n> and one of the changes we made was to perform the updates in batches of 5K\n> as opposed to the pervious transaction per event. Note that our use of\n> batches is to have only one transaction, but still each of the 5K events is\n> independently SELECTing and UPDATEing records, i.e. it is not all contained\n> in a stored procedure or such.\n>\n> Still these times are too high for us and we are looking to lower them and I\n> am wondering about the TCP/IP overhead of passing the information back and\n> forth. Does anyone have any numbers in what the TCP could cost in the\n> configuration mentioned above or pointers on how to test it?\n\nChange all your updates to selects, with the same where clause. If\ndoing that makes it much faster, TCP must not have been your\nbottleneck.\n\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 2 Apr 2012 09:51:39 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "On Sun, Apr 2, 2012 at 11:25 AM, Samuel Gendler < [email protected] > wrote:\n> But suggesting moving away from TCP/IP with no actual evidence that it is network overhead that is the problem is a little premature, regardless. \n\nAgreed, that's why I'd like to understand what tools / methodologies are available in order to test whether TCP is the issue.\n\n> What, exactly, are the set of operations that each update is performing and is there any way to batch them into fewer statements \n> within the transaction. For example, could you insert all 60,000 records into a temporary table via COPY, then run just a couple of queries to do \n> bulk inserts and bulk updates into the destination tble via joins to the temp table? \n\nI don't see how a COPY can be faster here as I would need to both run the COPY into the temp table and then UPDATE all the columns in the real table.\nAre you referring to saving the time where all the UPDATEs would be performed via a stored procedure strictly in the db domain without networking back and forth?\n\n> 60,000 rows updated with 25 columns, 1 indexed in 3ms is not exactly slow. That's a not insignificant quantity of data which must be transferred from client to server, \n> parsed, and then written to disk, regardless of TCP overhead. That is happening via at least 60,000 individual SQL statements that are not even prepared statements. I don't \n> imagine that TCP overhead is really the problem here. Regardless, you can reduce both statement parse time and TCP overhead by doing bulk inserts \n> (COPY) followed by multi-row selects/updates into the final table. I don't know how much below 3ms you are going to get, but that's going to be as fast \n> as you can possibly do it on your hardware, assuming the rest of your configuration is as efficient as possible.\n\nThe 3ms is per each event processing, not the whole 60K batch. Each event processing includes:\n5 SELECTs\n1 DELETE\n2 UPDATEs\nwhere each query performed involves TCP connections, that is, the queries are not grouped in a stored procedure or such.\n\nFor all these queries does 3ms sound like a reasonable time? If so, do you have an estimation of how long the network portion would be here?\n\nThanks,\nOfer\n\n",
"msg_date": "Tue, 3 Apr 2012 18:24:07 +0300",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "On Tue, Apr 3, 2012 at 12:24 PM, Ofer Israeli <[email protected]> wrote:\n> On Sun, Apr 2, 2012 at 11:25 AM, Samuel Gendler < [email protected] > wrote:\n>> But suggesting moving away from TCP/IP with no actual evidence that it is network overhead that is the problem is a little premature, regardless.\n>\n> Agreed, that's why I'd like to understand what tools / methodologies are available in order to test whether TCP is the issue.\n\nAs it was pointed out already, if you perform 60.000 x (5+1+2) \"select\n1\" queries you'll effectively measure TCP overhead, as planning and\nexecution will be down to negligible times.\n\n>> What, exactly, are the set of operations that each update is performing and is there any way to batch them into fewer statements\n>> within the transaction. For example, could you insert all 60,000 records into a temporary table via COPY, then run just a couple of queries to do\n>> bulk inserts and bulk updates into the destination tble via joins to the temp table?\n>\n> I don't see how a COPY can be faster here as I would need to both run the COPY into the temp table and then UPDATE all the columns in the real table.\n> Are you referring to saving the time where all the UPDATEs would be performed via a stored procedure strictly in the db domain without networking back and forth?\n\nYou'll be saving a lot of planning and parsing time, as COPY is\nsignificantly simpler to plan and parse, and the complex UPDATEs and\nINSERTs required to move data from the temp table will only incur a\none-time planning cost. In general, doing it that way is significantly\nfaster than 480.000 separate queries. But it does depend on the\noperations themselves.\n\n>> 60,000 rows updated with 25 columns, 1 indexed in 3ms is not exactly slow. That's a not insignificant quantity of data which must be transferred from client to server,\n>> parsed, and then written to disk, regardless of TCP overhead. That is happening via at least 60,000 individual SQL statements that are not even prepared statements. I don't\n>> imagine that TCP overhead is really the problem here. Regardless, you can reduce both statement parse time and TCP overhead by doing bulk inserts\n>> (COPY) followed by multi-row selects/updates into the final table. I don't know how much below 3ms you are going to get, but that's going to be as fast\n>> as you can possibly do it on your hardware, assuming the rest of your configuration is as efficient as possible.\n>\n> The 3ms is per each event processing, not the whole 60K batch. Each event processing includes:\n> 5 SELECTs\n> 1 DELETE\n> 2 UPDATEs\n> where each query performed involves TCP connections, that is, the queries are not grouped in a stored procedure or such.\n\nIf you run the 480.000 queries on a single transaction, you use a\nsingle connection already. So you only have transmission overhead,\nwithout the TCP handshake. You still might gain a bit by disabling\nNagle's algorithm (if that's possible in windows), which is the main\nsource of latency for TCP. But that's very low-level tinkering.\n\n> For all these queries does 3ms sound like a reasonable time? If so, do you have an estimation of how long the network portion would be here?\n\nYou perform 8 roundtrips minimum per event, so that's 375us per query.\nIt doesn't look like much. That's probably Nagle and task switching\ntime, I don't think you can get it much lower than that, without\nissuing less queries (ie: using the COPY method).\n",
"msg_date": "Tue, 3 Apr 2012 12:38:34 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "On Tue, Apr 3, 2012 at 10:38 AM, Claudio Freire <[email protected]>wrote:\n\n>\n> You perform 8 roundtrips minimum per event, so that's 375us per query.\n> It doesn't look like much. That's probably Nagle and task switching\n> time, I don't think you can get it much lower than that, without\n> issuing less queries (ie: using the COPY method).\n>\n>\nI may be missing something stated earlier, but surely there are options in\nbetween 7 individual statements and resorting to COPY and temp tables.\n\nI'm thinking of a set of precompiled queries / prepared statements along\nthe lines of \"SELECT FOR UPDATE WHERE foo in (?, ?, ?, .... ?)\" that handle\ne.g. 500-1000 records per invocation. Or what about a stored procedure that\nupdates one record, performing the necessary 7 steps, and then calling that\nin bulk?\n\nI agree with the assessment that 375us per statement is pretty decent, and\nthat going after the communication channel (TCP vs local pipe) is chasing\npennies when there are $100 bills lying around waiting to be collected.\n\nOn Tue, Apr 3, 2012 at 10:38 AM, Claudio Freire <[email protected]> wrote:\n\nYou perform 8 roundtrips minimum per event, so that's 375us per query.\nIt doesn't look like much. That's probably Nagle and task switching\ntime, I don't think you can get it much lower than that, without\nissuing less queries (ie: using the COPY method).I may be missing something stated earlier, but surely there are options in between 7 individual statements and resorting to COPY and temp tables. \nI'm thinking of a set of precompiled queries / prepared statements along the lines of \"SELECT FOR UPDATE WHERE foo in (?, ?, ?, .... ?)\" that handle e.g. 500-1000 records per invocation. Or what about a stored procedure that updates one record, performing the necessary 7 steps, and then calling that in bulk?\nI agree with the assessment that 375us per statement is pretty decent, and that going after the communication channel (TCP vs local pipe) is chasing pennies when there are $100 bills lying around waiting to be collected.",
"msg_date": "Tue, 3 Apr 2012 11:04:29 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TCP Overhead on Local Loopback"
},
{
"msg_contents": "\r\nOn Tue, Apr 3, 2012 at 7:04 PM, Dave Crooke <[email protected]> wrote:\r\n>On Tue, Apr 3, 2012 at 10:38 AM, Claudio Freire <[email protected]> wrote:\r\n>> You perform 8 roundtrips minimum per event, so that's 375us per query.\r\n>> It doesn't look like much. That's probably Nagle and task switching\r\n>> time, I don't think you can get it much lower than that, without\r\n>> issuing less queries (ie: using the COPY method).\r\n\r\n> I may be missing something stated earlier, but surely there are options in between 7 individual statements and resorting to COPY and temp tables. \r\n\r\n> I'm thinking of a set of precompiled queries / prepared statements along the lines of \"SELECT FOR UPDATE WHERE foo in (?, ?, ?, .... ?)\" that handle e.g. 500-1000 records per invocation. Or what about a stored procedure that updates one record, performing the necessary 7 steps, and then calling that in bulk?\r\n\r\n> I agree with the assessment that 375us per statement is pretty decent, and that going after the communication channel (TCP vs local pipe) is chasing pennies when there are $100 bills lying around waiting to be collected.\r\n\r\nThanks for the suggestions. We ended up re-factoring the code: caching some of the data that we needed in order to eliminate some of the queries previously run and inserting data completion into update statements in the form of UPDATE SET ... (SELECT ...) which brought us down to only one SQL query as opposed to 7 and this brings the processing time down from 4.5ms (previously stated 3ms was not reproduced) down to ~1ms which is great for us.\r\n\r\n\r\nMany thanks for the help from all of you,\r\nOfer\r\n",
"msg_date": "Thu, 5 Apr 2012 12:39:59 +0300",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TCP Overhead on Local Loopback"
}
] |
[
{
"msg_contents": "Hello there,\n\nI am having performance problem with new DELL server. Actually I have this\ntwo servers\n\nServer A (old - production)\n-----------------\n2xCPU Six-Core AMD Opteron 2439 SE\n64GB RAM\nRaid controller Perc6 512MB cache NV\n - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no barriers)\n - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n\nServer B (new)\n------------------\n2xCPU 16 Core AMD Opteron 6282 SE\n64GB RAM\nRaid controller H700 1GB cache NV\n - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no\nbarriers)\nRaid controller H800 1GB cache nv\n - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18)\n(ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n\nPostgres DB is the same in both servers. This DB has 170GB size with some\ntables partitioned by date with a trigger. In both shared_buffers,\ncheckpoint_segments... settings are similar because RAM is similar.\n\nI supposed that, new server had to be faster than old, because have more\ndisk in RAID10 and two RAID controllers with more cache memory, but really\nI'm not obtaining the expected results\n\nFor example this query:\n\nEXPLAIN ANALYZE SELECT c.id AS c__id, c.fk_news_id AS c__fk_news_id,\nc.fk_news_group_id AS c__fk_news_group_id, c.fk_company_id AS\nc__fk_company_id, c.import_date AS c__import_date, c.highlight AS\nc__highlight, c.status AS c__status, c.ord AS c__ord, c.news_date AS\nc__news_date, c.fk_media_id AS c__fk_media_id, c.title AS c__title,\nc.search_title_idx AS c__search_title_idx, c.stored AS c__stored, c.tono AS\nc__tono, c.media_type AS c__media_type, c.fk_editions_news_id AS\nc__fk_editions_news_id, c.dossier_selected AS c__dossier_selected,\nc.update_stats AS c__update_stats, c.url_news AS c__url_news, c.url_image\nAS c__url_image, m.id AS m__id, m.name AS m__name, m.media_type AS\nm__media_type, m.media_code AS m__media_code, m.fk_data_source_id AS\nm__fk_data_source_id, m.language_iso AS m__language_iso, m.country_iso AS\nm__country_iso, m.region_iso AS m__region_iso, m.subregion_iso AS\nm__subregion_iso, m.media_code_temp AS m__media_code_temp, m.url AS m__url,\nm.current_rank AS m__current_rank, m.typologyid AS m__typologyid,\nm.fk_platform_id AS m__fk_platform_id, m.page_views_per_day AS\nm__page_views_per_day, m.audience AS m__audience, m.last_stats_update AS\nm__last_stats_update, n.id AS n__id, n.fk_media_id AS n__fk_media_id,\nn.fk_news_media_id AS n__fk_news_media_id, n.fk_data_source_id AS\nn__fk_data_source_id, n.news_code AS n__news_code, n.title AS n__title,\nn.searchfull_idx AS n__searchfull_idx, n.news_date AS n__news_date,\nn.economical_value AS n__economical_value, n.audience AS n__audience,\nn.media_type AS n__media_type, n.url_news AS n__url_news, n.url_news_old AS\nn__url_news_old, n.url_image AS n__url_image, n.typologyid AS\nn__typologyid, n.author AS n__author, n.fk_platform_id AS\nn__fk_platform_id, n2.id AS n2__id, n2.name AS n2__name, n3.id AS n3__id,\nn3.name AS n3__name, f.id AS f__id, f.name AS f__name, n4.id AS n4__id,\nn4.opentext AS n4__opentext, i.id AS i__id, i.name AS i__name, i.ord AS\ni__ord, i2.id AS i2__id, i2.name AS i2__name FROM company_news_internet c LEFT\nJOIN media_internet m ON c.fk_media_id = m.id AND m.media_type = 4\nLEFT JOINnews_internet n ON c.fk_news_id =\nn.id AND n.media_type = 4 LEFT JOIN news_media_internet n2 ON\nn.fk_news_media_id = n2.id AND n2.media_type = 4 
LEFT\nJOINnews_group_internet n3 ON c.fk_news_group_id =\nn3.id AND n3.media_type = 4 LEFT JOIN feed_internet f ON n3.fk_feed_id =\nf.id LEFT JOIN news_text_internet n4 ON c.fk_news_id = n4.fk_news_id AND\nn4.media_type = 4 LEFT JOIN internet_typology i ON n.typologyid = i.id LEFT\nJOIN internet_media_platform i2 ON n.fk_platform_id = i2.id\nWHERE(c.fk_company_id = '16073' AND c.status <> '-3' AND n3.fk_feed_id\n= '30693'\nAND n3.status = '1' AND f.fk_company_id = '16073') AND n.typologyid IN\n('6', '7', '1', '2', '3', '5', '4') AND c.id > '49764393' AND c.news_date\n>= '2012-04-02'::timestamp - INTERVAL '4 months' AND n.news_date >=\n'2012-04-02'::timestamp - INTERVAL '4 months' AND c.fk_news_group_id IN\n('43475') AND (c.media_type = 4) ORDER BY c.news_date DESC, c.id DESC LIMIT\n200\n\nTakes about 20 second in server A but in new server B takes 150 seconds...\nIn EXPLAIN I have noticed that sequential scan on table\nnews_internet_201112 takes 2s:\n -> Seq Scan on news_internet_201112 n (cost=0.00..119749.12\nrows=1406528 width=535) (actual time=0.046..2186.379 rows=1844831 loops=1)\n Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without\ntime zone) AND (media_type = 4) AND (typologyid = ANY\n('{6,7,1,2,3,5,4}'::integer[])))\n\nWhile in Server B, takes 11s:\n -> Seq Scan on news_internet_201112 n (cost=0.00..119520.12\nrows=1405093 width=482) (actual time=0.177..11783.621 rows=1844831 loops=1)\n Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without\ntime zone) AND (media_type = 4) AND (typologyid = ANY\n('{6,7,1,2,3,5,4}'::integer[])))\n\nIs notorious that, while in server A, execution time vary only few second\nwhen I execute the same query repeated times, in server B, execution time\nfluctuates between 30 and 150 second despite the server dont have any\nclient.\n\nIn other example, when I query entire table, running twice the same query:\nServer 1\n------------\nEXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..457010.37\nrows=6731337 width=318) (actual time=0.042..19665.155 rows=6731337 loops=1)\n Total runtime: 20391.555 ms\n-\nEXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..457010.37\nrows=6731337 width=318) (actual time=0.012..2171.181 rows=6731337 loops=1)\n Total runtime: 2831.028 ms\n\nServer 2\n------------\nEXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\nrows=6765779 width=323) (actual time=0.110..10010.443 rows=6765779 loops=1)\n Total runtime: 11552.818 ms\n-\nEXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\nrows=6765779 width=323) (actual time=0.023..8173.801 rows=6765779 loops=1)\n Total runtime: 12939.717 ms\n\nIt seems that Server B don cache the 
table¿?¿?\n\nI'm lost, I had tested different file systems, like XFS, stripe sizes...\nbut I not have had results\n\nAny ideas that could be happen?\n\nThanks a lot!!\n\n-- \nCésar Martín Pérez\[email protected]\n\nHello there,I am having performance problem with new DELL server. Actually I have this two servers\nServer A (old - production)-----------------\n2xCPU Six-Core AMD Opteron 2439 SE64GB RAMRaid controller Perc6 512MB cache NV - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no barriers) - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\nServer B (new)------------------2xCPU 16 Core AMD Opteron 6282 SE64GB RAMRaid controller H700 1GB cache NV - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no barriers)Raid controller H800 1GB cache nv - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18) (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\nPostgres DB is the same in both servers. This DB has 170GB size with some tables partitioned by date with a trigger. In both shared_buffers, checkpoint_segments... settings are similar because RAM is similar.\nI supposed that, new server had to be faster than old, because have more disk in RAID10 and two RAID controllers with more cache memory, but really I'm not obtaining the expected results\nFor example this query:\nEXPLAIN ANALYZE SELECT c.id AS c__id, c.fk_news_id AS c__fk_news_id, c.fk_news_group_id AS c__fk_news_group_id, c.fk_company_id AS c__fk_company_id, c.import_date AS c__import_date, c.highlight AS c__highlight, c.status AS c__status, c.ord AS c__ord, c.news_date AS c__news_date, c.fk_media_id AS c__fk_media_id, c.title AS c__title, c.search_title_idx AS c__search_title_idx, c.stored AS c__stored, c.tono AS c__tono, c.media_type AS c__media_type, c.fk_editions_news_id AS c__fk_editions_news_id, c.dossier_selected AS c__dossier_selected, c.update_stats AS c__update_stats, c.url_news AS c__url_news, c.url_image AS c__url_image, m.id AS m__id, m.name AS m__name, m.media_type AS m__media_type, m.media_code AS m__media_code, m.fk_data_source_id AS m__fk_data_source_id, m.language_iso AS m__language_iso, m.country_iso AS m__country_iso, m.region_iso AS m__region_iso, m.subregion_iso AS m__subregion_iso, m.media_code_temp AS m__media_code_temp, m.url AS m__url, m.current_rank AS m__current_rank, m.typologyid AS m__typologyid, m.fk_platform_id AS m__fk_platform_id, m.page_views_per_day AS m__page_views_per_day, m.audience AS m__audience, m.last_stats_update AS m__last_stats_update, n.id AS n__id, n.fk_media_id AS n__fk_media_id, n.fk_news_media_id AS n__fk_news_media_id, n.fk_data_source_id AS n__fk_data_source_id, n.news_code AS n__news_code, n.title AS n__title, n.searchfull_idx AS n__searchfull_idx, n.news_date AS n__news_date, n.economical_value AS n__economical_value, n.audience AS n__audience, n.media_type AS n__media_type, n.url_news AS n__url_news, n.url_news_old AS n__url_news_old, n.url_image AS n__url_image, n.typologyid AS n__typologyid, n.author AS n__author, n.fk_platform_id AS n__fk_platform_id, n2.id AS n2__id, n2.name AS n2__name, n3.id AS n3__id, n3.name AS n3__name, f.id AS f__id, f.name AS f__name, n4.id AS n4__id, n4.opentext AS n4__opentext, i.id AS i__id, i.name AS i__name, i.ord AS i__ord, i2.id AS i2__id, i2.name AS i2__name FROM company_news_internet c LEFT JOIN media_internet m ON c.fk_media_id = m.id AND m.media_type = 4 LEFT JOIN news_internet n ON c.fk_news_id = n.id AND 
n.media_type = 4 LEFT JOIN news_media_internet n2 ON n.fk_news_media_id = n2.id AND n2.media_type = 4 LEFT JOIN news_group_internet n3 ON c.fk_news_group_id = n3.id AND n3.media_type = 4 LEFT JOIN feed_internet f ON n3.fk_feed_id = f.id LEFT JOIN news_text_internet n4 ON c.fk_news_id = n4.fk_news_id AND n4.media_type = 4 LEFT JOIN internet_typology i ON n.typologyid = i.id LEFT JOIN internet_media_platform i2 ON n.fk_platform_id = i2.id WHERE (c.fk_company_id = '16073' AND c.status <> '-3' AND n3.fk_feed_id = '30693' AND n3.status = '1' AND f.fk_company_id = '16073') AND n.typologyid IN ('6', '7', '1', '2', '3', '5', '4') AND c.id > '49764393' AND c.news_date >= '2012-04-02'::timestamp - INTERVAL '4 months' AND n.news_date >= '2012-04-02'::timestamp - INTERVAL '4 months' AND c.fk_news_group_id IN ('43475') AND (c.media_type = 4) ORDER BY c.news_date DESC, c.id DESC LIMIT 200\nTakes about 20 second in server A but in new server B takes 150 seconds... In EXPLAIN I have noticed that sequential scan on table news_internet_201112 takes 2s: -> Seq Scan on news_internet_201112 n (cost=0.00..119749.12 rows=1406528 width=535) (actual time=0.046..2186.379 rows=1844831 loops=1)\n Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without time zone) AND (media_type = 4) AND (typologyid = ANY ('{6,7,1,2,3,5,4}'::integer[])))\nWhile in Server B, takes 11s: -> Seq Scan on news_internet_201112 n (cost=0.00..119520.12 rows=1405093 width=482) (actual time=0.177..11783.621 rows=1844831 loops=1) Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without time zone) AND (media_type = 4) AND (typologyid = ANY ('{6,7,1,2,3,5,4}'::integer[])))\nIs notorious that, while in server A, execution time vary only few second when I execute the same query repeated times, in server B, execution time fluctuates between 30 and 150 second despite the server dont have any client.\nIn other example, when I query entire table, running twice the same query:Server 1\n------------EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ; QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on company_news_internet_201111 (cost=0.00..457010.37 rows=6731337 width=318) (actual time=0.042..19665.155 rows=6731337 loops=1)\n Total runtime: 20391.555 ms-EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ; QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on company_news_internet_201111 (cost=0.00..457010.37 rows=6731337 width=318) (actual time=0.012..2171.181 rows=6731337 loops=1)\n Total runtime: 2831.028 msServer 2------------EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..369577.79 rows=6765779 width=323) (actual time=0.110..10010.443 rows=6765779 loops=1) Total runtime: 11552.818 ms-EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..369577.79 rows=6765779 width=323) (actual time=0.023..8173.801 rows=6765779 loops=1) Total runtime: 12939.717 msIt seems that Server 
B don cache the table¿?¿?\nI'm lost, I had tested different file systems, like XFS, stripe sizes... but I not have had results Any ideas that could be happen?\nThanks a lot!!-- César Martín Pé[email protected]",
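One way to check the suspicion that the table is not staying in shared_buffers is the pg_buffercache contrib module (available for the 8.3 series used here). This is a sketch of the standard query; note it says nothing about the OS page cache, which is usually the larger cache:

-- Assumes the default 8 kB block size.
SELECT c.relname,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
   AND c.relname = 'company_news_internet_201111'
 GROUP BY c.relname;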
"msg_date": "Tue, 3 Apr 2012 14:20:47 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "H800 + md1200 Performance problem"
},
{
"msg_contents": "Did you check your read ahead settings (getra)?\n\nMike DelNegro\n\nSent from my iPhone\n\nOn Apr 3, 2012, at 8:20 AM, Cesar Martin <[email protected]> wrote:\n\n> Hello there,\n> \n> I am having performance problem with new DELL server. Actually I have this two servers\n> \n> Server A (old - production)\n> -----------------\n> 2xCPU Six-Core AMD Opteron 2439 SE\n> 64GB RAM\n> Raid controller Perc6 512MB cache NV\n> - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no barriers) \n> - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n> \n> Server B (new)\n> ------------------\n> 2xCPU 16 Core AMD Opteron 6282 SE\n> 64GB RAM\n> Raid controller H700 1GB cache NV\n> - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n> - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no barriers)\n> Raid controller H800 1GB cache nv\n> - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18) (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n> \n> Postgres DB is the same in both servers. This DB has 170GB size with some tables partitioned by date with a trigger. In both shared_buffers, checkpoint_segments... settings are similar because RAM is similar.\n> \n> I supposed that, new server had to be faster than old, because have more disk in RAID10 and two RAID controllers with more cache memory, but really I'm not obtaining the expected results\n> \n> For example this query:\n> \n> EXPLAIN ANALYZE SELECT c.id AS c__id, c.fk_news_id AS c__fk_news_id, c.fk_news_group_id AS c__fk_news_group_id, c.fk_company_id AS c__fk_company_id, c.import_date AS c__import_date, c.highlight AS c__highlight, c.status AS c__status, c.ord AS c__ord, c.news_date AS c__news_date, c.fk_media_id AS c__fk_media_id, c.title AS c__title, c.search_title_idx AS c__search_title_idx, c.stored AS c__stored, c.tono AS c__tono, c.media_type AS c__media_type, c.fk_editions_news_id AS c__fk_editions_news_id, c.dossier_selected AS c__dossier_selected, c.update_stats AS c__update_stats, c.url_news AS c__url_news, c.url_image AS c__url_image, m.id AS m__id, m.name AS m__name, m.media_type AS m__media_type, m.media_code AS m__media_code, m.fk_data_source_id AS m__fk_data_source_id, m.language_iso AS m__language_iso, m.country_iso AS m__country_iso, m.region_iso AS m__region_iso, m.subregion_iso AS m__subregion_iso, m.media_code_temp AS m__media_code_temp, m.url AS m__url, m.current_rank AS m__current_rank, m.typologyid AS m__typologyid, m.fk_platform_id AS m__fk_platform_id, m.page_views_per_day AS m__page_views_per_day, m.audience AS m__audience, m.last_stats_update AS m__last_stats_update, n.id AS n__id, n.fk_media_id AS n__fk_media_id, n.fk_news_media_id AS n__fk_news_media_id, n.fk_data_source_id AS n__fk_data_source_id, n.news_code AS n__news_code, n.title AS n__title, n.searchfull_idx AS n__searchfull_idx, n.news_date AS n__news_date, n.economical_value AS n__economical_value, n.audience AS n__audience, n.media_type AS n__media_type, n.url_news AS n__url_news, n.url_news_old AS n__url_news_old, n.url_image AS n__url_image, n.typologyid AS n__typologyid, n.author AS n__author, n.fk_platform_id AS n__fk_platform_id, n2.id AS n2__id, n2.name AS n2__name, n3.id AS n3__id, n3.name AS n3__name, f.id AS f__id, f.name AS f__name, n4.id AS n4__id, n4.opentext AS n4__opentext, i.id AS i__id, i.name AS i__name, i.ord AS i__ord, i2.id AS i2__id, i2.name AS i2__name FROM company_news_internet c LEFT JOIN media_internet m ON c.fk_media_id = m.id AND m.media_type = 4 LEFT 
JOIN news_internet n ON c.fk_news_id = n.id AND n.media_type = 4 LEFT JOIN news_media_internet n2 ON n.fk_news_media_id = n2.id AND n2.media_type = 4 LEFT JOIN news_group_internet n3 ON c.fk_news_group_id = n3.id AND n3.media_type = 4 LEFT JOIN feed_internet f ON n3.fk_feed_id = f.id LEFT JOIN news_text_internet n4 ON c.fk_news_id = n4.fk_news_id AND n4.media_type = 4 LEFT JOIN internet_typology i ON n.typologyid = i.id LEFT JOIN internet_media_platform i2 ON n.fk_platform_id = i2.id WHERE (c.fk_company_id = '16073' AND c.status <> '-3' AND n3.fk_feed_id = '30693' AND n3.status = '1' AND f.fk_company_id = '16073') AND n.typologyid IN ('6', '7', '1', '2', '3', '5', '4') AND c.id > '49764393' AND c.news_date >= '2012-04-02'::timestamp - INTERVAL '4 months' AND n.news_date >= '2012-04-02'::timestamp - INTERVAL '4 months' AND c.fk_news_group_id IN ('43475') AND (c.media_type = 4) ORDER BY c.news_date DESC, c.id DESC LIMIT 200\n> \n> Takes about 20 second in server A but in new server B takes 150 seconds... In EXPLAIN I have noticed that sequential scan on table news_internet_201112 takes 2s:\n> -> Seq Scan on news_internet_201112 n (cost=0.00..119749.12 rows=1406528 width=535) (actual time=0.046..2186.379 rows=1844831 loops=1)\n> Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without time zone) AND (media_type = 4) AND (typologyid = ANY ('{6,7,1,2,3,5,4}'::integer[])))\n> \n> While in Server B, takes 11s:\n> -> Seq Scan on news_internet_201112 n (cost=0.00..119520.12 rows=1405093 width=482) (actual time=0.177..11783.621 rows=1844831 loops=1)\n> Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without time zone) AND (media_type = 4) AND (typologyid = ANY ('{6,7,1,2,3,5,4}'::integer[])))\n> \n> Is notorious that, while in server A, execution time vary only few second when I execute the same query repeated times, in server B, execution time fluctuates between 30 and 150 second despite the server dont have any client.\n> \n> In other example, when I query entire table, running twice the same query:\n> Server 1\n> ------------\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..457010.37 rows=6731337 width=318) (actual time=0.042..19665.155 rows=6731337 loops=1)\n> Total runtime: 20391.555 ms\n> -\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..457010.37 rows=6731337 width=318) (actual time=0.012..2171.181 rows=6731337 loops=1)\n> Total runtime: 2831.028 ms\n> \n> Server 2\n> ------------\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..369577.79 rows=6765779 width=323) (actual time=0.110..10010.443 rows=6765779 loops=1)\n> Total runtime: 11552.818 ms\n> -\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on 
company_news_internet_201111 (cost=0.00..369577.79 rows=6765779 width=323) (actual time=0.023..8173.801 rows=6765779 loops=1)\n> Total runtime: 12939.717 ms\n> \n> It seems that Server B don cache the table¿?¿?\n> \n> I'm lost, I had tested different file systems, like XFS, stripe sizes... but I not have had results \n> \n> Any ideas that could be happen?\n> \n> Thanks a lot!!\n> \n> -- \n> César Martín Pérez\n> [email protected]\n> \n> \n",
"msg_date": "Tue, 3 Apr 2012 08:37:42 -0400",
"msg_from": "Mike DelNegro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
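A minimal sketch of the read-ahead check being suggested here, assuming the MD1200 RAID10 volume is exposed as /dev/sdc (the device name reported later in the thread); values are in 512-byte sectors:

```bash
# Show current read-ahead for every block device (RA column, in 512-byte sectors)
blockdev --report

# Or query a single device; 256 sectors = 128 kB, the usual kernel default
blockdev --getra /dev/sdc
```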
{
"msg_contents": "Hi Mike,\nThank you for your fast response.\n\nblockdev --getra /dev/sdc\n256\n\nWhat value do you recommend for this setting?\n\nThanks!\n\nEl 3 de abril de 2012 14:37, Mike DelNegro <[email protected]> escribió:\n\n> Did you check your read ahead settings (getra)?\n>\n> Mike DelNegro\n>\n> Sent from my iPhone\n>\n> On Apr 3, 2012, at 8:20 AM, Cesar Martin <[email protected]> wrote:\n>\n> Hello there,\n>\n> I am having performance problem with new DELL server. Actually I have this\n> two servers\n>\n> Server A (old - production)\n> -----------------\n> 2xCPU Six-Core AMD Opteron 2439 SE\n> 64GB RAM\n> Raid controller Perc6 512MB cache NV\n> - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no\n> barriers)\n> - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n>\n> Server B (new)\n> ------------------\n> 2xCPU 16 Core AMD Opteron 6282 SE\n> 64GB RAM\n> Raid controller H700 1GB cache NV\n> - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n> - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no\n> barriers)\n> Raid controller H800 1GB cache nv\n> - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18)\n> (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n>\n> Postgres DB is the same in both servers. This DB has 170GB size with some\n> tables partitioned by date with a trigger. In both shared_buffers,\n> checkpoint_segments... settings are similar because RAM is similar.\n>\n> I supposed that, new server had to be faster than old, because have more\n> disk in RAID10 and two RAID controllers with more cache memory, but really\n> I'm not obtaining the expected results\n>\n> For example this query:\n>\n> EXPLAIN ANALYZE SELECT c.id AS c__id, c.fk_news_id AS c__fk_news_id,\n> c.fk_news_group_id AS c__fk_news_group_id, c.fk_company_id AS\n> c__fk_company_id, c.import_date AS c__import_date, c.highlight AS\n> c__highlight, c.status AS c__status, c.ord AS c__ord, c.news_date AS\n> c__news_date, c.fk_media_id AS c__fk_media_id, c.title AS c__title,\n> c.search_title_idx AS c__search_title_idx, c.stored AS c__stored, c.tono AS\n> c__tono, c.media_type AS c__media_type, c.fk_editions_news_id AS\n> c__fk_editions_news_id, c.dossier_selected AS c__dossier_selected,\n> c.update_stats AS c__update_stats, c.url_news AS c__url_news, c.url_image\n> AS c__url_image, m.id AS m__id, m.name AS m__name, m.media_type AS\n> m__media_type, m.media_code AS m__media_code, m.fk_data_source_id AS\n> m__fk_data_source_id, m.language_iso AS m__language_iso, m.country_iso AS\n> m__country_iso, m.region_iso AS m__region_iso, m.subregion_iso AS\n> m__subregion_iso, m.media_code_temp AS m__media_code_temp, m.url AS m__url,\n> m.current_rank AS m__current_rank, m.typologyid AS m__typologyid,\n> m.fk_platform_id AS m__fk_platform_id, m.page_views_per_day AS\n> m__page_views_per_day, m.audience AS m__audience, m.last_stats_update AS\n> m__last_stats_update, n.id AS n__id, n.fk_media_id AS n__fk_media_id,\n> n.fk_news_media_id AS n__fk_news_media_id, n.fk_data_source_id AS\n> n__fk_data_source_id, n.news_code AS n__news_code, n.title AS n__title,\n> n.searchfull_idx AS n__searchfull_idx, n.news_date AS n__news_date,\n> n.economical_value AS n__economical_value, n.audience AS n__audience,\n> n.media_type AS n__media_type, n.url_news AS n__url_news, n.url_news_old AS\n> n__url_news_old, n.url_image AS n__url_image, n.typologyid AS\n> n__typologyid, n.author AS n__author, n.fk_platform_id AS\n> n__fk_platform_id, n2.id AS n2__id, n2.name AS 
n2__name, n3.id AS n3__id,\n> n3.name AS n3__name, f.id AS f__id, f.name AS f__name, n4.id AS n4__id,\n> n4.opentext AS n4__opentext, i.id AS i__id, i.name AS i__name, i.ord AS\n> i__ord, i2.id AS i2__id, i2.name AS i2__name FROM company_news_internet c LEFT\n> JOIN media_internet m ON c.fk_media_id = m.id AND m.media_type = 4 LEFT\n> JOIN news_internet n ON c.fk_news_id = n.id AND n.media_type = 4 LEFT JOINnews_media_internet n2 ON n.fk_news_media_id =\n> n2.id AND n2.media_type = 4 LEFT JOIN news_group_internet n3 ON\n> c.fk_news_group_id = n3.id AND n3.media_type = 4 LEFT JOIN feed_internet\n> f ON n3.fk_feed_id = f.id LEFT JOIN news_text_internet n4 ON c.fk_news_id\n> = n4.fk_news_id AND n4.media_type = 4 LEFT JOIN internet_typology i ON\n> n.typologyid = i.id LEFT JOIN internet_media_platform i2 ON\n> n.fk_platform_id = i2.id WHERE (c.fk_company_id = '16073' AND c.status <>\n> '-3' AND n3.fk_feed_id = '30693' AND n3.status = '1' AND f.fk_company_id =\n> '16073') AND n.typologyid IN ('6', '7', '1', '2', '3', '5', '4') AND c.id> '49764393' AND c.news_date >= '2012-04-02'::timestamp -\n> INTERVAL '4 months' AND n.news_date >= '2012-04-02'::timestamp - INTERVAL'4 months' AND c.fk_news_group_id IN ('43475') AND (c.media_type = 4) ORDER\n> BY c.news_date DESC, c.id DESC LIMIT 200\n>\n> Takes about 20 second in server A but in new server B takes 150 seconds...\n> In EXPLAIN I have noticed that sequential scan on table\n> news_internet_201112 takes 2s:\n> -> Seq Scan on news_internet_201112 n (cost=0.00..119749.12\n> rows=1406528 width=535) (actual time=0.046..2186.379 rows=1844831 loops=1)\n> Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without\n> time zone) AND (media_type = 4) AND (typologyid = ANY\n> ('{6,7,1,2,3,5,4}'::integer[])))\n>\n> While in Server B, takes 11s:\n> -> Seq Scan on news_internet_201112 n (cost=0.00..119520.12\n> rows=1405093 width=482) (actual time=0.177..11783.621 rows=1844831 loops=1)\n> Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without\n> time zone) AND (media_type = 4) AND (typologyid = ANY\n> ('{6,7,1,2,3,5,4}'::integer[])))\n>\n> Is notorious that, while in server A, execution time vary only few second\n> when I execute the same query repeated times, in server B, execution time\n> fluctuates between 30 and 150 second despite the server dont have any\n> client.\n>\n> In other example, when I query entire table, running twice the same query:\n> Server 1\n> ------------\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..457010.37\n> rows=6731337 width=318) (actual time=0.042..19665.155 rows=6731337 loops=1)\n> Total runtime: 20391.555 ms\n> -\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..457010.37\n> rows=6731337 width=318) (actual time=0.012..2171.181 rows=6731337 loops=1)\n> Total runtime: 2831.028 ms\n>\n> Server 2\n> ------------\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on 
company_news_internet_201111 (cost=0.00..369577.79\n> rows=6765779 width=323) (actual time=0.110..10010.443 rows=6765779 loops=1)\n> Total runtime: 11552.818 ms\n> -\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\n> rows=6765779 width=323) (actual time=0.023..8173.801 rows=6765779 loops=1)\n> Total runtime: 12939.717 ms\n>\n> It seems that Server B don cache the table¿?¿?\n>\n> I'm lost, I had tested different file systems, like XFS, stripe sizes...\n> but I not have had results\n>\n> Any ideas that could be happen?\n>\n> Thanks a lot!!\n>\n> --\n> César Martín Pérez\n> [email protected]\n>\n>\n>\n\n\n-- \nCésar Martín Pérez\[email protected]\n",
"msg_date": "Tue, 3 Apr 2012 14:59:09 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Tue, Apr 3, 2012 at 7:20 AM, Cesar Martin <[email protected]> wrote:\n> Hello there,\n>\n> I am having performance problem with new DELL server. Actually I have this\n> two servers\n>\n> Server A (old - production)\n> -----------------\n> 2xCPU Six-Core AMD Opteron 2439 SE\n> 64GB RAM\n> Raid controller Perc6 512MB cache NV\n> - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no barriers)\n> - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n>\n> Server B (new)\n> ------------------\n> 2xCPU 16 Core AMD Opteron 6282 SE\n> 64GB RAM\n> Raid controller H700 1GB cache NV\n> - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n> - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no\n> barriers)\n> Raid controller H800 1GB cache nv\n> - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18)\n> (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n>\n> Postgres DB is the same in both servers. This DB has 170GB size with some\n> tables partitioned by date with a trigger. In both shared_buffers,\n> checkpoint_segments... settings are similar because RAM is similar.\n>\n> I supposed that, new server had to be faster than old, because have more\n> disk in RAID10 and two RAID controllers with more cache memory, but really\n> I'm not obtaining the expected results\n>\n> For example this query:\n>\n> EXPLAIN ANALYZE SELECT c.id AS c__id, c.fk_news_id AS c__fk_news_id,\n> c.fk_news_group_id AS c__fk_news_group_id, c.fk_company_id AS\n> c__fk_company_id, c.import_date AS c__import_date, c.highlight AS\n> c__highlight, c.status AS c__status, c.ord AS c__ord, c.news_date AS\n> c__news_date, c.fk_media_id AS c__fk_media_id, c.title AS c__title,\n> c.search_title_idx AS c__search_title_idx, c.stored AS c__stored, c.tono AS\n> c__tono, c.media_type AS c__media_type, c.fk_editions_news_id AS\n> c__fk_editions_news_id, c.dossier_selected AS c__dossier_selected,\n> c.update_stats AS c__update_stats, c.url_news AS c__url_news, c.url_image AS\n> c__url_image, m.id AS m__id, m.name AS m__name, m.media_type AS\n> m__media_type, m.media_code AS m__media_code, m.fk_data_source_id AS\n> m__fk_data_source_id, m.language_iso AS m__language_iso, m.country_iso AS\n> m__country_iso, m.region_iso AS m__region_iso, m.subregion_iso AS\n> m__subregion_iso, m.media_code_temp AS m__media_code_temp, m.url AS m__url,\n> m.current_rank AS m__current_rank, m.typologyid AS m__typologyid,\n> m.fk_platform_id AS m__fk_platform_id, m.page_views_per_day AS\n> m__page_views_per_day, m.audience AS m__audience, m.last_stats_update AS\n> m__last_stats_update, n.id AS n__id, n.fk_media_id AS n__fk_media_id,\n> n.fk_news_media_id AS n__fk_news_media_id, n.fk_data_source_id AS\n> n__fk_data_source_id, n.news_code AS n__news_code, n.title AS n__title,\n> n.searchfull_idx AS n__searchfull_idx, n.news_date AS n__news_date,\n> n.economical_value AS n__economical_value, n.audience AS n__audience,\n> n.media_type AS n__media_type, n.url_news AS n__url_news, n.url_news_old AS\n> n__url_news_old, n.url_image AS n__url_image, n.typologyid AS n__typologyid,\n> n.author AS n__author, n.fk_platform_id AS n__fk_platform_id, n2.id AS\n> n2__id, n2.name AS n2__name, n3.id AS n3__id, n3.name AS n3__name, f.id AS\n> f__id, f.name AS f__name, n4.id AS n4__id, n4.opentext AS n4__opentext, i.id\n> AS i__id, i.name AS i__name, i.ord AS i__ord, i2.id AS i2__id, i2.name AS\n> i2__name FROM company_news_internet c LEFT JOIN media_internet m ON\n> c.fk_media_id = m.id AND 
m.media_type = 4 LEFT JOIN news_internet n ON\n> c.fk_news_id = n.id AND n.media_type = 4 LEFT JOIN news_media_internet n2 ON\n> n.fk_news_media_id = n2.id AND n2.media_type = 4 LEFT JOIN\n> news_group_internet n3 ON c.fk_news_group_id = n3.id AND n3.media_type = 4\n> LEFT JOIN feed_internet f ON n3.fk_feed_id = f.id LEFT JOIN\n> news_text_internet n4 ON c.fk_news_id = n4.fk_news_id AND n4.media_type = 4\n> LEFT JOIN internet_typology i ON n.typologyid = i.id LEFT JOIN\n> internet_media_platform i2 ON n.fk_platform_id = i2.id WHERE\n> (c.fk_company_id = '16073' AND c.status <> '-3' AND n3.fk_feed_id = '30693'\n> AND n3.status = '1' AND f.fk_company_id = '16073') AND n.typologyid IN ('6',\n> '7', '1', '2', '3', '5', '4') AND c.id > '49764393' AND c.news_date >=\n> '2012-04-02'::timestamp - INTERVAL '4 months' AND n.news_date >=\n> '2012-04-02'::timestamp - INTERVAL '4 months' AND c.fk_news_group_id IN\n> ('43475') AND (c.media_type = 4) ORDER BY c.news_date DESC, c.id DESC LIMIT\n> 200\n>\n> Takes about 20 second in server A but in new server B takes 150 seconds...\n> In EXPLAIN I have noticed that sequential scan on table news_internet_201112\n> takes 2s:\n> -> Seq Scan on news_internet_201112 n (cost=0.00..119749.12\n> rows=1406528 width=535) (actual time=0.046..2186.379 rows=1844831 loops=1)\n> Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without\n> time zone) AND (media_type = 4) AND (typologyid = ANY\n> ('{6,7,1,2,3,5,4}'::integer[])))\n>\n> While in Server B, takes 11s:\n> -> Seq Scan on news_internet_201112 n (cost=0.00..119520.12\n> rows=1405093 width=482) (actual time=0.177..11783.621 rows=1844831 loops=1)\n> Filter: ((news_date >= '2011-12-02 00:00:00'::timestamp without\n> time zone) AND (media_type = 4) AND (typologyid = ANY\n> ('{6,7,1,2,3,5,4}'::integer[])))\n>\n> Is notorious that, while in server A, execution time vary only few second\n> when I execute the same query repeated times, in server B, execution time\n> fluctuates between 30 and 150 second despite the server dont have any\n> client.\n>\n> In other example, when I query entire table, running twice the same query:\n> Server 1\n> ------------\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..457010.37\n> rows=6731337 width=318) (actual time=0.042..19665.155 rows=6731337 loops=1)\n> Total runtime: 20391.555 ms\n> -\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..457010.37\n> rows=6731337 width=318) (actual time=0.012..2171.181 rows=6731337 loops=1)\n> Total runtime: 2831.028 ms\n>\n> Server 2\n> ------------\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\n> rows=6765779 width=323) (actual time=0.110..10010.443 rows=6765779 loops=1)\n> Total runtime: 11552.818 ms\n> -\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111 ;\n> QUERY PLAN\n>\n> 
--------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\n> rows=6765779 width=323) (actual time=0.023..8173.801 rows=6765779 loops=1)\n> Total runtime: 12939.717 ms\n>\n> It seems that Server B don cache the table¿?¿?\n>\n> I'm lost, I had tested different file systems, like XFS, stripe sizes... but\n> I not have had results\n>\n> Any ideas that could be happen?\n>\n> Thanks a lot!!\n\nThat's a significant regression. Probable hardware issue -- have you\nrun performance tests on it such as bonnie++? dd? What's iowait\nduring the scan?\n\nmerlin\n",
"msg_date": "Tue, 3 Apr 2012 08:11:32 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
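One way to answer the iowait question is to leave a couple of samplers running while the slow query is repeated; this is a generic sketch rather than anything Merlin specified, and the 5-second interval and log file names are arbitrary:

```bash
# Per-device utilisation and latency (requires the sysstat package)
iostat -x 5 > /tmp/iostat_scan.log &

# CPU breakdown; the "wa" column is iowait
vmstat 5 > /tmp/vmstat_scan.log &

# ...re-run the slow query in psql, then stop both samplers
kill %1 %2
```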
{
"msg_contents": "On 3.4.2012 14:59, Cesar Martin wrote:\n> Hi Mike,\n> Thank you for your fast response.\n> \n> blockdev --getra /dev/sdc\n> 256\n\nThat's way too low. Is this setting the same on both machines?\n\nAnyway, set it to 4096, 8192 or even 16384 and check the difference.\n\nBTW explain analyze is nice, but it's only half the info, especially\nwhen the issue is outside PostgreSQL (hw, OS, ...). Please, provide\nsamples from iostat / vmstat or tools like that.\n\nTomas\n",
"msg_date": "Tue, 03 Apr 2012 15:21:42 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
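A sketch of the experiment Tomas is asking for, assuming /dev/sdc is the data volume; the 16384-sector (8 MB) value is simply the upper end of his suggested range, and the rc.local line is one common way to persist it, not something from his mail:

```bash
# Raise read-ahead on the MD1200 volume (value in 512-byte sectors)
blockdev --setra 16384 /dev/sdc

# Repeat the sequential scan while sampling the device in MB/s
iostat -xm 5 /dev/sdc

# --setra does not survive a reboot; persist it if it helps
echo 'blockdev --setra 16384 /dev/sdc' >> /etc/rc.local
```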
{
"msg_contents": "On Tue, Apr 3, 2012 at 6:20 AM, Cesar Martin <[email protected]> wrote:\n> Hello there,\n>\n> I am having performance problem with new DELL server. Actually I have this\n> two servers\n>\n> Server A (old - production)\n> -----------------\n> 2xCPU Six-Core AMD Opteron 2439 SE\n> 64GB RAM\n> Raid controller Perc6 512MB cache NV\n> - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no barriers)\n> - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n>\n> Server B (new)\n> ------------------\n> 2xCPU 16 Core AMD Opteron 6282 SE\n> 64GB RAM\n> Raid controller H700 1GB cache NV\n> - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n> - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no\n> barriers)\n> Raid controller H800 1GB cache nv\n> - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18)\n> (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n>\n> Postgres DB is the same in both servers. This DB has 170GB size with some\n> tables partitioned by date with a trigger. In both shared_buffers,\n> checkpoint_segments... settings are similar because RAM is similar.\n>\n> I supposed that, new server had to be faster than old, because have more\n> disk in RAID10 and two RAID controllers with more cache memory, but really\n> I'm not obtaining the expected results\n\nWhat does\n\nsysctl -n vm.zone_reclaim_mode\n\nsay? If it says 1, change it to 0:\n\nsysctl -w zone_reclaim_mode=0\n\nIt's an automatic setting designed to make large virtual hosting\nservers etc run faster but totally screws with pg and file servers\nwith big numbers of cores and large memory spaces.\n",
"msg_date": "Tue, 3 Apr 2012 09:32:35 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Tue, Apr 3, 2012 at 9:32 AM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Apr 3, 2012 at 6:20 AM, Cesar Martin <[email protected]> wrote:\n>> Hello there,\n>>\n>> I am having performance problem with new DELL server. Actually I have this\n>> two servers\n>>\n>> Server A (old - production)\n>> -----------------\n>> 2xCPU Six-Core AMD Opteron 2439 SE\n>> 64GB RAM\n>> Raid controller Perc6 512MB cache NV\n>> - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no barriers)\n>> - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n>>\n>> Server B (new)\n>> ------------------\n>> 2xCPU 16 Core AMD Opteron 6282 SE\n>> 64GB RAM\n>> Raid controller H700 1GB cache NV\n>> - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n>> - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no\n>> barriers)\n>> Raid controller H800 1GB cache nv\n>> - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18)\n>> (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n>>\n>> Postgres DB is the same in both servers. This DB has 170GB size with some\n>> tables partitioned by date with a trigger. In both shared_buffers,\n>> checkpoint_segments... settings are similar because RAM is similar.\n>>\n>> I supposed that, new server had to be faster than old, because have more\n>> disk in RAID10 and two RAID controllers with more cache memory, but really\n>> I'm not obtaining the expected results\n>\n> What does\n>\n> sysctl -n vm.zone_reclaim_mode\n>\n> say? If it says 1, change it to 0:\n>\n> sysctl -w zone_reclaim_mode=0\n\nThat should be:\n\nsysctl -w vm.zone_reclaim_mode=0\n",
"msg_date": "Tue, 3 Apr 2012 09:34:30 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
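Scott's commands, collected into one sketch; the sysctl.conf line and the numactl check are additions for completeness, not part of his mail:

```bash
# 1 means the kernel prefers reclaiming pages on the local NUMA node over
# using free memory on the other socket, which cripples the page cache
sysctl -n vm.zone_reclaim_mode

# Disable it for the running kernel
sysctl -w vm.zone_reclaim_mode=0

# Persist across reboots
echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf

# Optional: inspect how memory is split across the two Opteron sockets
numactl --hardware
```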
{
"msg_contents": "Yes, setting is the same in both machines.\n\nThe results of bonnie++ running without arguments are:\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\ncltbbdd01 126G 94 99 202873 99 208327 95 1639 91 819392 88\n 2131 139\nLatency 88144us 228ms 338ms 171ms 147ms\n20325us\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\nfiles:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\ncltbbdd01 16 8063 26 +++++ +++ 27361 96 31437 96 +++++ +++ +++++\n+++\nLatency 7850us 2290us 2310us 530us 11us\n522us\n\nWith DD, one core of CPU put at 100% and results are about 100-170 MBps,\nthat I thing is bad result for this HW:\n\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100\n100+0 records in\n100+0 records out\n838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s\n\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync\n1000+0 records in\n1000+0 records out\n8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s\n\ndd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync\n1024+0 records in\n1024+0 records out\n1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s\n\nWhen monitor I/O activity with iostat, during dd, I have noticed that, if\nthe test takes 10 second, the disk have activity only during last 3 or 4\nseconds and iostat report about 250-350MBps. Is it normal?\n\nI set read ahead to different values, but the results don't differ\nsubstantially...\n\nThanks!\n\nEl 3 de abril de 2012 15:21, Tomas Vondra <[email protected]> escribió:\n\n> On 3.4.2012 14:59, Cesar Martin wrote:\n> > Hi Mike,\n> > Thank you for your fast response.\n> >\n> > blockdev --getra /dev/sdc\n> > 256\n>\n> That's way too low. Is this setting the same on both machines?\n>\n> Anyway, set it to 4096, 8192 or even 16384 and check the difference.\n>\n> BTW explain analyze is nice, but it's only half the info, especially\n> when the issue is outside PostgreSQL (hw, OS, ...). Please, provide\n> samples from iostat / vmstat or tools like that.\n>\n> Tomas\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCésar Martín Pérez\[email protected]\n\nYes, setting is the same in both machines. 
The results of bonnie++ running without arguments are:\nVersion 1.96 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\ncltbbdd01 126G 94 99 202873 99 208327 95 1639 91 819392 88 2131 139Latency 88144us 228ms 338ms 171ms 147ms 20325us ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CPcltbbdd01 16 8063 26 +++++ +++ 27361 96 31437 96 +++++ +++ +++++ +++\nLatency 7850us 2290us 2310us 530us 11us 522usWith DD, one core of CPU put at 100% and results are about 100-170 MBps, that I thing is bad result for this HW:\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100100+0 records in100+0 records out838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync1000+0 records in1000+0 records out8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s\ndd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync1024+0 records in1024+0 records out1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s\nWhen monitor I/O activity with iostat, during dd, I have noticed that, if the test takes 10 second, the disk have activity only during last 3 or 4 seconds and iostat report about 250-350MBps. Is it normal?\nI set read ahead to different values, but the results don't differ substantially...Thanks!El 3 de abril de 2012 15:21, Tomas Vondra <[email protected]> escribió:\nOn 3.4.2012 14:59, Cesar Martin wrote:\n> Hi Mike,\n> Thank you for your fast response.\n>\n> blockdev --getra /dev/sdc\n> 256\n\nThat's way too low. Is this setting the same on both machines?\n\nAnyway, set it to 4096, 8192 or even 16384 and check the difference.\n\nBTW explain analyze is nice, but it's only half the info, especially\nwhen the issue is outside PostgreSQL (hw, OS, ...). Please, provide\nsamples from iostat / vmstat or tools like that.\n\nTomas\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- César Martín Pé[email protected]",
"msg_date": "Tue, 3 Apr 2012 17:42:34 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
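The write activity showing up only in the last seconds of a short dd run is what page-cache writeback looks like on a 64 GB box (Tomas makes the same point in the next message). A sketch of how the writeback knobs could be inspected; nothing here was prescribed in the thread:

```bash
# Default dirty_background_ratio is 10%, i.e. roughly 6 GB of dirty data can
# sit in RAM before flushing starts, so a 1-8 GB dd mostly measures memory
sysctl vm.dirty_ratio vm.dirty_background_ratio

# Amount of dirty / in-flight data right now (kB)
grep -E 'Dirty|Writeback' /proc/meminfo
```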
{
"msg_contents": "OK Scott. I go to change this kernel parameter and will repeat the tests.\nTanks!\n\nEl 3 de abril de 2012 17:34, Scott Marlowe <[email protected]>escribió:\n\n> On Tue, Apr 3, 2012 at 9:32 AM, Scott Marlowe <[email protected]>\n> wrote:\n> > On Tue, Apr 3, 2012 at 6:20 AM, Cesar Martin <[email protected]> wrote:\n> >> Hello there,\n> >>\n> >> I am having performance problem with new DELL server. Actually I have\n> this\n> >> two servers\n> >>\n> >> Server A (old - production)\n> >> -----------------\n> >> 2xCPU Six-Core AMD Opteron 2439 SE\n> >> 64GB RAM\n> >> Raid controller Perc6 512MB cache NV\n> >> - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no\n> barriers)\n> >> - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n> >>\n> >> Server B (new)\n> >> ------------------\n> >> 2xCPU 16 Core AMD Opteron 6282 SE\n> >> 64GB RAM\n> >> Raid controller H700 1GB cache NV\n> >> - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n> >> - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096,\n> no\n> >> barriers)\n> >> Raid controller H800 1GB cache nv\n> >> - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18)\n> >> (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n> >>\n> >> Postgres DB is the same in both servers. This DB has 170GB size with\n> some\n> >> tables partitioned by date with a trigger. In both shared_buffers,\n> >> checkpoint_segments... settings are similar because RAM is similar.\n> >>\n> >> I supposed that, new server had to be faster than old, because have more\n> >> disk in RAID10 and two RAID controllers with more cache memory, but\n> really\n> >> I'm not obtaining the expected results\n> >\n> > What does\n> >\n> > sysctl -n vm.zone_reclaim_mode\n> >\n> > say? If it says 1, change it to 0:\n> >\n> > sysctl -w zone_reclaim_mode=0\n>\n> That should be:\n>\n> sysctl -w vm.zone_reclaim_mode=0\n>\n\n\n\n-- \nCésar Martín Pérez\[email protected]\n\nOK Scott. I go to change this kernel parameter and will repeat the tests.Tanks!El 3 de abril de 2012 17:34, Scott Marlowe <[email protected]> escribió:\n\nOn Tue, Apr 3, 2012 at 9:32 AM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Apr 3, 2012 at 6:20 AM, Cesar Martin <[email protected]> wrote:\n>> Hello there,\n>>\n>> I am having performance problem with new DELL server. Actually I have this\n>> two servers\n>>\n>> Server A (old - production)\n>> -----------------\n>> 2xCPU Six-Core AMD Opteron 2439 SE\n>> 64GB RAM\n>> Raid controller Perc6 512MB cache NV\n>> - 2 HD 146GB SAS 15Krpm RAID1 (SO Centos 5.4 y pg_xlog) (XFS no barriers)\n>> - 6 HD 300GB SAS 15Krpm RAID10 (DB Postgres 8.3.9) (XFS no barriers)\n>>\n>> Server B (new)\n>> ------------------\n>> 2xCPU 16 Core AMD Opteron 6282 SE\n>> 64GB RAM\n>> Raid controller H700 1GB cache NV\n>> - 2HD 74GB SAS 15Krpm RAID1 stripe 16k (SO Centos 6.2)\n>> - 4HD 146GB SAS 15Krpm RAID10 stripe 16k XFS (pg_xlog) (ext4 bs 4096, no\n>> barriers)\n>> Raid controller H800 1GB cache nv\n>> - MD1200 12HD 300GB SAS 15Krpm RAID10 stripe 256k (DB Postgres 8.3.18)\n>> (ext4 bs 4096, stride 64, stripe-width 384, no barriers)\n>>\n>> Postgres DB is the same in both servers. This DB has 170GB size with some\n>> tables partitioned by date with a trigger. In both shared_buffers,\n>> checkpoint_segments... 
settings are similar because RAM is similar.\n>>\n>> I supposed that, new server had to be faster than old, because have more\n>> disk in RAID10 and two RAID controllers with more cache memory, but really\n>> I'm not obtaining the expected results\n>\n> What does\n>\n> sysctl -n vm.zone_reclaim_mode\n>\n> say? If it says 1, change it to 0:\n>\n> sysctl -w zone_reclaim_mode=0\n\nThat should be:\n\nsysctl -w vm.zone_reclaim_mode=0\n-- César Martín Pé[email protected]",
"msg_date": "Tue, 3 Apr 2012 17:50:14 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On 3.4.2012 17:42, Cesar Martin wrote:\n> Yes, setting is the same in both machines. \n> \n> The results of bonnie++ running without arguments are:\n> \n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> cltbbdd01 126G 94 99 202873 99 208327 95 1639 91 819392 88\n> 2131 139\n> Latency 88144us 228ms 338ms 171ms 147ms \n> 20325us\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> cltbbdd01 16 8063 26 +++++ +++ 27361 96 31437 96 +++++ +++\n> +++++ +++\n> Latency 7850us 2290us 2310us 530us 11us \n> 522us\n> \n> With DD, one core of CPU put at 100% and results are about 100-170\n> MBps, that I thing is bad result for this HW:\n> \n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100\n> 100+0 records in\n> 100+0 records out\n> 838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s\n> \n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync\n> 1000+0 records in\n> 1000+0 records out\n> 8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s\n> \n> dd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync\n> 1024+0 records in\n> 1024+0 records out\n> 1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s\n> \n> When monitor I/O activity with iostat, during dd, I have noticed that,\n> if the test takes 10 second, the disk have activity only during last 3\n> or 4 seconds and iostat report about 250-350MBps. Is it normal?\n\nWell, you're testing writing, and the default behavior is to write the\ndata into page cache. And you do have 64GB of RAM so the write cache may\ntake large portion of the RAM - even gigabytes. To really test the I/O\nyou need to (a) write about 2x the amount of RAM or (b) tune the\ndirty_ratio/dirty_background_ratio accordingly.\n\nBTW what are you trying to achieve with \"conv=fdatasync\" at the end. My\ndd man page does not mention 'fdatasync' and IMHO it's a mistake on your\nside. If you want to sync the data at the end, then you need to do\nsomething like\n\n time sh -c \"dd ... && sync\"\n\n> I set read ahead to different values, but the results don't differ\n> substantially...\n\nBecause read-ahead is for reading (which is what a SELECT does most of\nthe time), but the dests above are writing to the device. And writing is\nnot influenced by read-ahead.\n\nTo test reading, do this:\n\n dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1024\n\nTomas\n",
"msg_date": "Tue, 03 Apr 2012 20:01:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
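A sketch of the methodology Tomas describes, plus one step he does not mention (dropping the page cache before the read pass so it measures the array rather than RAM); the file path and the 128 GB size (~2x RAM) follow the figures used elsewhere in the thread:

```bash
# Write well past the 64 GB of RAM and include the final sync in the timing
time sh -c "dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384 && sync"

# Drop the page cache (acceptable on a test box only), then time the read back
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/vol02/bonnie/DD of=/dev/null bs=8M
```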
{
"msg_contents": "On Tue, Apr 3, 2012 at 1:01 PM, Tomas Vondra <[email protected]> wrote:\n> On 3.4.2012 17:42, Cesar Martin wrote:\n>> Yes, setting is the same in both machines.\n>>\n>> The results of bonnie++ running without arguments are:\n>>\n>> Version 1.96 ------Sequential Output------ --Sequential Input-\n>> --Random-\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>> --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>> /sec %CP\n>> cltbbdd01 126G 94 99 202873 99 208327 95 1639 91 819392 88\n>> 2131 139\n>> Latency 88144us 228ms 338ms 171ms 147ms\n>> 20325us\n>> ------Sequential Create------ --------Random\n>> Create--------\n>> -Create-- --Read--- -Delete-- -Create-- --Read---\n>> -Delete--\n>> files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n>> /sec %CP\n>> cltbbdd01 16 8063 26 +++++ +++ 27361 96 31437 96 +++++ +++\n>> +++++ +++\n>> Latency 7850us 2290us 2310us 530us 11us\n>> 522us\n>>\n>> With DD, one core of CPU put at 100% and results are about 100-170\n>> MBps, that I thing is bad result for this HW:\n>>\n>> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100\n>> 100+0 records in\n>> 100+0 records out\n>> 838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s\n>>\n>> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync\n>> 1000+0 records in\n>> 1000+0 records out\n>> 8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s\n>>\n>> dd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync\n>> 1024+0 records in\n>> 1024+0 records out\n>> 1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s\n>>\n>> When monitor I/O activity with iostat, during dd, I have noticed that,\n>> if the test takes 10 second, the disk have activity only during last 3\n>> or 4 seconds and iostat report about 250-350MBps. Is it normal?\n>\n> Well, you're testing writing, and the default behavior is to write the\n> data into page cache. And you do have 64GB of RAM so the write cache may\n> take large portion of the RAM - even gigabytes. To really test the I/O\n> you need to (a) write about 2x the amount of RAM or (b) tune the\n> dirty_ratio/dirty_background_ratio accordingly.\n>\n> BTW what are you trying to achieve with \"conv=fdatasync\" at the end. My\n> dd man page does not mention 'fdatasync' and IMHO it's a mistake on your\n> side. If you want to sync the data at the end, then you need to do\n> something like\n>\n> time sh -c \"dd ... && sync\"\n>\n>> I set read ahead to different values, but the results don't differ\n>> substantially...\n>\n> Because read-ahead is for reading (which is what a SELECT does most of\n> the time), but the dests above are writing to the device. And writing is\n> not influenced by read-ahead.\n\nYeah, but I have to agree with Cesar -- that's pretty unspectacular\nresults for 12 drive sas array to say the least (unless the way dd was\nbeing run was throwing it off somehow). Something is definitely not\nright here. Maybe we can see similar tests run on the production\nserver as a point of comparison?\n\nmerlin\n",
"msg_date": "Tue, 3 Apr 2012 13:44:51 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
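For the comparison Merlin asks for, the same benchmark would have to be run on the old Perc6 server as well; a possible bonnie++ invocation, with directory, size and user being assumptions based on the thread rather than anything specified:

```bash
# ~2x RAM working set so the page cache cannot hide the array;
# skip the small-file tests (-n 0) to focus on sequential throughput
bonnie++ -d /vol02/bonnie -s 128g -n 0 -u postgres -m "$(hostname)"
```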
{
"msg_contents": "Hello,\n\nYesterday I changed the kernel setting, that said\nScott, vm.zone_reclaim_mode = 0. I have done new benchmarks and I have\nnoticed changes at least in Postgres:\n\nFirst exec:\nEXPLAIN ANALYZE SELECT * from company_news_internet_201111;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\nrows=6765779 width=323) (actual time=0.020..7984.707 rows=6765779 loops=1)\n Total runtime: 12699.008 ms\n(2 filas)\n\nSecond:\nEXPLAIN ANALYZE SELECT * from company_news_internet_201111;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\nrows=6765779 width=323) (actual time=0.023..1767.440 rows=6765779 loops=1)\n Total runtime: 2696.901 ms\n\nIt seems that now data is being cached right...\n\nThe large query in first exec takes 80 seconds and in second exec takes\naround 23 seconds. This is not spectacular but is better than yesterday.\n\nFurthermore the results of dd are strange:\n\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 803,738 s, 171 MB/s\n\n171 MB/s I think is bad value for 12 SAS RAID10... And when I execute\niostat during the dd execution i obtain results like:\nsdc 1514,62 0,01 108,58 11 117765\nsdc 3705,50 0,01 316,62 0 633\nsdc 2,00 0,00 0,05 0 0\nsdc 920,00 0,00 63,49 0 126\nsdc 8322,50 0,03 712,00 0 1424\nsdc 6662,50 0,02 568,53 0 1137\nsdc 0,00 0,00 0,00 0 0\nsdc 1,50 0,00 0,04 0 0\nsdc 6413,00 0,01 412,28 0 824\nsdc 13107,50 0,03 867,94 0 1735\nsdc 0,00 0,00 0,00 0 0\nsdc 1,50 0,00 0,03 0 0\nsdc 9719,00 0,03 815,49 0 1630\nsdc 2817,50 0,01 272,51 0 545\nsdc 1,50 0,00 0,05 0 0\nsdc 1181,00 0,00 71,49 0 142\nsdc 7225,00 0,01 362,56 0 725\nsdc 2973,50 0,01 269,97 0 539\n\nI don't understand why MB_wrtn/s go from 0 to near 800MB/s constantly\nduring execution.\n\nRead results:\n\ndd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 257,626 s, 533 MB/s\n\nsdc 3157,00 392,69 0,00 785 0\nsdc 3481,00 432,75 0,00 865 0\nsdc 2669,50 331,50 0,00 663 0\nsdc 3725,50 463,75 0,00 927 0\nsdc 2998,50 372,38 0,00 744 0\nsdc 3600,50 448,00 0,00 896 0\nsdc 3588,00 446,50 0,00 893 0\nsdc 3494,00 434,50 0,00 869 0\nsdc 3141,50 390,62 0,00 781 0\nsdc 3667,50 456,62 0,00 913 0\nsdc 3429,35 426,18 0,00 856 0\nsdc 3043,50 378,06 0,00 756 0\nsdc 3366,00 417,94 0,00 835 0\nsdc 3480,50 432,62 0,00 865 0\nsdc 3523,50 438,06 0,00 876 0\nsdc 3554,50 441,88 0,00 883 0\nsdc 3635,00 452,19 0,00 904 0\nsdc 3107,00 386,20 0,00 772 0\nsdc 3695,00 460,00 0,00 920 0\nsdc 3475,50 432,11 0,00 864 0\nsdc 3487,50 433,50 0,00 867 0\nsdc 3232,50 402,39 0,00 804 0\nsdc 3698,00 460,67 0,00 921 0\nsdc 5059,50 632,00 0,00 1264 0\nsdc 3934,00 489,56 0,00 979 0\nsdc 4536,50 566,75 0,00 1133 0\nsdc 5298,00 662,12 0,00 1324 0\n\nHere results I think that are more logical. Read speed is maintained along\nall the test...\n\nAbout the parameter \"conv=fdatasync\" that mention Tomas, I saw it at\nhttp://romanrm.ru/en/dd-benchmark and I started to use but is possible\nwrong. 
Before I used time sh -c \"dd if=/dev/zero of=ddfile bs=X count=Y &&\nsync\".\n\nWhat is your opinion about the results??\n\nI have noticed that since I changed the setting vm.zone_reclaim_mode = 0,\nswap is totally full. Do you recommend me disable swap?\n\nThanks!!\n\nEl 3 de abril de 2012 20:01, Tomas Vondra <[email protected]> escribió:\n\n> On 3.4.2012 17:42, Cesar Martin wrote:\n> > Yes, setting is the same in both machines.\n> >\n> > The results of bonnie++ running without arguments are:\n> >\n> > Version 1.96 ------Sequential Output------ --Sequential Input-\n> > --Random-\n> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> > --Seeks--\n> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> > /sec %CP\n> > cltbbdd01 126G 94 99 202873 99 208327 95 1639 91 819392 88\n> > 2131 139\n> > Latency 88144us 228ms 338ms 171ms 147ms\n> > 20325us\n> > ------Sequential Create------ --------Random\n> > Create--------\n> > -Create-- --Read--- -Delete-- -Create-- --Read---\n> > -Delete--\n> > files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> > /sec %CP\n> > cltbbdd01 16 8063 26 +++++ +++ 27361 96 31437 96 +++++ +++\n> > +++++ +++\n> > Latency 7850us 2290us 2310us 530us 11us\n> > 522us\n> >\n> > With DD, one core of CPU put at 100% and results are about 100-170\n> > MBps, that I thing is bad result for this HW:\n> >\n> > dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100\n> > 100+0 records in\n> > 100+0 records out\n> > 838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s\n> >\n> > dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync\n> > 1000+0 records in\n> > 1000+0 records out\n> > 8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s\n> >\n> > dd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync\n> > 1024+0 records in\n> > 1024+0 records out\n> > 1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s\n> >\n> > When monitor I/O activity with iostat, during dd, I have noticed that,\n> > if the test takes 10 second, the disk have activity only during last 3\n> > or 4 seconds and iostat report about 250-350MBps. Is it normal?\n>\n> Well, you're testing writing, and the default behavior is to write the\n> data into page cache. And you do have 64GB of RAM so the write cache may\n> take large portion of the RAM - even gigabytes. To really test the I/O\n> you need to (a) write about 2x the amount of RAM or (b) tune the\n> dirty_ratio/dirty_background_ratio accordingly.\n>\n> BTW what are you trying to achieve with \"conv=fdatasync\" at the end. My\n> dd man page does not mention 'fdatasync' and IMHO it's a mistake on your\n> side. If you want to sync the data at the end, then you need to do\n> something like\n>\n> time sh -c \"dd ... && sync\"\n>\n> > I set read ahead to different values, but the results don't differ\n> > substantially...\n>\n> Because read-ahead is for reading (which is what a SELECT does most of\n> the time), but the dests above are writing to the device. And writing is\n> not influenced by read-ahead.\n>\n> To test reading, do this:\n>\n> dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1024\n>\n> Tomas\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCésar Martín Pérez\[email protected]\n\nHello,Yesterday I changed the kernel setting, that said Scott, vm.zone_reclaim_mode = 0. 
I have done new benchmarks and I have noticed changes at least in Postgres:First exec:\nEXPLAIN ANALYZE SELECT * from company_news_internet_201111; QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------- Seq Scan on company_news_internet_201111 (cost=0.00..369577.79 rows=6765779 width=323) (actual time=0.020..7984.707 rows=6765779 loops=1)\n Total runtime: 12699.008 ms(2 filas)Second:EXPLAIN ANALYZE SELECT * from company_news_internet_201111;\n QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on company_news_internet_201111 (cost=0.00..369577.79 rows=6765779 width=323) (actual time=0.023..1767.440 rows=6765779 loops=1) Total runtime: 2696.901 ms\nIt seems that now data is being cached right...The large query in first exec takes 80 seconds and in second exec takes around 23 seconds. This is not spectacular but is better than yesterday.\nFurthermore the results of dd are strange:dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 803,738 s, 171 MB/s\n171 MB/s I think is bad value for 12 SAS RAID10... And when I execute iostat during the dd execution i obtain results like:sdc 1514,62 0,01 108,58 11 117765\nsdc 3705,50 0,01 316,62 0 633sdc 2,00 0,00 0,05 0 0sdc 920,00 0,00 63,49 0 126\nsdc 8322,50 0,03 712,00 0 1424sdc 6662,50 0,02 568,53 0 1137sdc 0,00 0,00 0,00 0 0\nsdc 1,50 0,00 0,04 0 0sdc 6413,00 0,01 412,28 0 824sdc 13107,50 0,03 867,94 0 1735\nsdc 0,00 0,00 0,00 0 0sdc 1,50 0,00 0,03 0 0sdc 9719,00 0,03 815,49 0 1630\nsdc 2817,50 0,01 272,51 0 545sdc 1,50 0,00 0,05 0 0sdc 1181,00 0,00 71,49 0 142\nsdc 7225,00 0,01 362,56 0 725sdc 2973,50 0,01 269,97 0 539I don't understand why MB_wrtn/s go from 0 to near 800MB/s constantly during execution.\nRead results:dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 257,626 s, 533 MB/s\nsdc 3157,00 392,69 0,00 785 0sdc 3481,00 432,75 0,00 865 0sdc 2669,50 331,50 0,00 663 0\nsdc 3725,50 463,75 0,00 927 0sdc 2998,50 372,38 0,00 744 0sdc 3600,50 448,00 0,00 896 0\nsdc 3588,00 446,50 0,00 893 0sdc 3494,00 434,50 0,00 869 0sdc 3141,50 390,62 0,00 781 0\nsdc 3667,50 456,62 0,00 913 0sdc 3429,35 426,18 0,00 856 0sdc 3043,50 378,06 0,00 756 0\nsdc 3366,00 417,94 0,00 835 0sdc 3480,50 432,62 0,00 865 0sdc 3523,50 438,06 0,00 876 0\nsdc 3554,50 441,88 0,00 883 0sdc 3635,00 452,19 0,00 904 0sdc 3107,00 386,20 0,00 772 0\nsdc 3695,00 460,00 0,00 920 0sdc 3475,50 432,11 0,00 864 0sdc 3487,50 433,50 0,00 867 0\nsdc 3232,50 402,39 0,00 804 0sdc 3698,00 460,67 0,00 921 0sdc 5059,50 632,00 0,00 1264 0\nsdc 3934,00 489,56 0,00 979 0sdc 4536,50 566,75 0,00 1133 0sdc 5298,00 662,12 0,00 1324 0\nHere results I think that are more logical. Read speed is maintained along all the test...About the parameter \"conv=fdatasync\" that mention Tomas, I saw it at http://romanrm.ru/en/dd-benchmark and I started to use but is possible wrong. Before I used time sh -c \"dd if=/dev/zero of=ddfile bs=X count=Y && sync\".\nWhat is your opinion about the results??I have noticed that since I changed the setting vm.zone_reclaim_mode = 0, swap is totally full. 
Do you recommend me disable swap?\nThanks!!El 3 de abril de 2012 20:01, Tomas Vondra <[email protected]> escribió:\nOn 3.4.2012 17:42, Cesar Martin wrote:\n> Yes, setting is the same in both machines.\n>\n> The results of bonnie++ running without arguments are:\n>\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> cltbbdd01 126G 94 99 202873 99 208327 95 1639 91 819392 88\n> 2131 139\n> Latency 88144us 228ms 338ms 171ms 147ms\n> 20325us\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> cltbbdd01 16 8063 26 +++++ +++ 27361 96 31437 96 +++++ +++\n> +++++ +++\n> Latency 7850us 2290us 2310us 530us 11us\n> 522us\n>\n> With DD, one core of CPU put at 100% and results are about 100-170\n> MBps, that I thing is bad result for this HW:\n>\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=100\n> 100+0 records in\n> 100+0 records out\n> 838860800 bytes (839 MB) copied, 8,1822 s, 103 MB/s\n>\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1000 conv=fdatasync\n> 1000+0 records in\n> 1000+0 records out\n> 8388608000 bytes (8,4 GB) copied, 50,8388 s, 165 MB/s\n>\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=1M count=1024 conv=fdatasync\n> 1024+0 records in\n> 1024+0 records out\n> 1073741824 bytes (1,1 GB) copied, 7,39628 s, 145 MB/s\n>\n> When monitor I/O activity with iostat, during dd, I have noticed that,\n> if the test takes 10 second, the disk have activity only during last 3\n> or 4 seconds and iostat report about 250-350MBps. Is it normal?\n\nWell, you're testing writing, and the default behavior is to write the\ndata into page cache. And you do have 64GB of RAM so the write cache may\ntake large portion of the RAM - even gigabytes. To really test the I/O\nyou need to (a) write about 2x the amount of RAM or (b) tune the\ndirty_ratio/dirty_background_ratio accordingly.\n\nBTW what are you trying to achieve with \"conv=fdatasync\" at the end. My\ndd man page does not mention 'fdatasync' and IMHO it's a mistake on your\nside. If you want to sync the data at the end, then you need to do\nsomething like\n\n time sh -c \"dd ... && sync\"\n\n> I set read ahead to different values, but the results don't differ\n> substantially...\n\nBecause read-ahead is for reading (which is what a SELECT does most of\nthe time), but the dests above are writing to the device. And writing is\nnot influenced by read-ahead.\n\nTo test reading, do this:\n\n dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1024\n\nTomas\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- César Martín Pé[email protected]",
"msg_date": "Wed, 4 Apr 2012 11:42:20 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
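For reference, a minimal sketch of three ways to time the sequential write test above so that the page cache does not hide the real device speed (assuming a reasonably recent GNU dd; paths and sizes are taken from the message above):

    # 1) time the write plus an explicit flush
    time sh -c "dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384 && sync"

    # 2) have dd issue a single fdatasync() before it exits
    #    (conv=fdatasync is a GNU dd option; older man pages may not list it)
    dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384 conv=fdatasync

    # 3) bypass the page cache entirely with O_DIRECT
    dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384 oflag=direct

The bursty iostat pattern is what one would expect when several gigabytes of dirty pages accumulate in RAM and are then flushed to the controller in one go.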
{
"msg_contents": "On Wed, Apr 4, 2012 at 3:42 AM, Cesar Martin <[email protected]> wrote:\n>\n> I have noticed that since I changed the setting vm.zone_reclaim_mode = 0,\n> swap is totally full. Do you recommend me disable swap?\n\nYes\n",
"msg_date": "Wed, 4 Apr 2012 07:15:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On 4.4.2012 15:15, Scott Marlowe wrote:\n> On Wed, Apr 4, 2012 at 3:42 AM, Cesar Martin <[email protected]> wrote:\n>>\n>> I have noticed that since I changed the setting vm.zone_reclaim_mode = 0,\n>> swap is totally full. Do you recommend me disable swap?\n> \n> Yes\n\nCareful about that - it depends on how you disable it.\n\nSetting 'vm.swappiness = 0' is a good idea, don't remove the swap (I've\nbeen bitten by the vm.overcommit=2 without a swap repeatedly).\n\nT.\n",
"msg_date": "Wed, 04 Apr 2012 15:20:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
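A minimal sketch of the tuning described above, assuming a sysctl-based Linux distribution (the persistent-config path may differ):

    # apply immediately
    sysctl -w vm.swappiness=0
    sysctl -w vm.zone_reclaim_mode=0

    # keep the settings across reboots
    echo "vm.swappiness = 0" >> /etc/sysctl.conf
    echo "vm.zone_reclaim_mode = 0" >> /etc/sysctl.conf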
{
"msg_contents": "On Wed, Apr 4, 2012 at 7:20 AM, Tomas Vondra <[email protected]> wrote:\n> On 4.4.2012 15:15, Scott Marlowe wrote:\n>> On Wed, Apr 4, 2012 at 3:42 AM, Cesar Martin <[email protected]> wrote:\n>>>\n>>> I have noticed that since I changed the setting vm.zone_reclaim_mode = 0,\n>>> swap is totally full. Do you recommend me disable swap?\n>>\n>> Yes\n>\n> Careful about that - it depends on how you disable it.\n>\n> Setting 'vm.swappiness = 0' is a good idea, don't remove the swap (I've\n> been bitten by the vm.overcommit=2 without a swap repeatedly).\n\nI've had far more problems with swap on and swappiness set to 0 than\nwith swap off. But this has always been on large memory machines with\n64 to 256G memory. Even with fairly late model linux kernels (i.e.\n10.04 LTS through 11.04) I've watched the kswapd start up swapping\nhard on a machine with zero memory pressure and no need for swap.\nTook about 2 weeks of hard running before kswapd decided to act\npathological.\n\nSeen it with swap on, with swappiness to 0, and overcommit to either 0\nor 2 on big machines. Once we just took the swap partitions away it\nthe machines ran fine.\n",
"msg_date": "Wed, 4 Apr 2012 10:22:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Wed, Apr 4, 2012 at 1:22 PM, Scott Marlowe <[email protected]> wrote:\n> Even with fairly late model linux kernels (i.e.\n> 10.04 LTS through 11.04) I've watched the kswapd start up swapping\n> hard on a machine with zero memory pressure and no need for swap.\n> Took about 2 weeks of hard running before kswapd decided to act\n> pathological.\n\nPerhaps you had some overfull partitions in tmpfs?\n",
"msg_date": "Wed, 4 Apr 2012 13:28:44 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Wed, Apr 4, 2012 at 10:28 AM, Claudio Freire <[email protected]> wrote:\n> On Wed, Apr 4, 2012 at 1:22 PM, Scott Marlowe <[email protected]> wrote:\n>> Even with fairly late model linux kernels (i.e.\n>> 10.04 LTS through 11.04) I've watched the kswapd start up swapping\n>> hard on a machine with zero memory pressure and no need for swap.\n>> Took about 2 weeks of hard running before kswapd decided to act\n>> pathological.\n>\n> Perhaps you had some overfull partitions in tmpfs?\n\nNope. Didn't use tmpfs for anything on that machine. Stock Ubuntu\n10.04 with Postgres just doing simple but high traffic postgres stuff.\n",
"msg_date": "Wed, 4 Apr 2012 10:31:28 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Wed, Apr 4, 2012 at 10:31 AM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Apr 4, 2012 at 10:28 AM, Claudio Freire <[email protected]> wrote:\n>> On Wed, Apr 4, 2012 at 1:22 PM, Scott Marlowe <[email protected]> wrote:\n>>> Even with fairly late model linux kernels (i.e.\n>>> 10.04 LTS through 11.04) I've watched the kswapd start up swapping\n>>> hard on a machine with zero memory pressure and no need for swap.\n>>> Took about 2 weeks of hard running before kswapd decided to act\n>>> pathological.\n>>\n>> Perhaps you had some overfull partitions in tmpfs?\n>\n> Nope. Didn't use tmpfs for anything on that machine. Stock Ubuntu\n> 10.04 with Postgres just doing simple but high traffic postgres stuff.\n\nJust to clarify, the machine had 128G RAM and about 95G of it was\nkernel cache, the rest used by shared memory (set to 4G) and\npostgresql.\n",
"msg_date": "Wed, 4 Apr 2012 10:42:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On 4.4.2012 18:22, Scott Marlowe wrote:\n> On Wed, Apr 4, 2012 at 7:20 AM, Tomas Vondra <[email protected]> wrote:\n>> On 4.4.2012 15:15, Scott Marlowe wrote:\n>>> On Wed, Apr 4, 2012 at 3:42 AM, Cesar Martin <[email protected]> wrote:\n>>>>\n>>>> I have noticed that since I changed the setting vm.zone_reclaim_mode = 0,\n>>>> swap is totally full. Do you recommend me disable swap?\n>>>\n>>> Yes\n>>\n>> Careful about that - it depends on how you disable it.\n>>\n>> Setting 'vm.swappiness = 0' is a good idea, don't remove the swap (I've\n>> been bitten by the vm.overcommit=2 without a swap repeatedly).\n> \n> I've had far more problems with swap on and swappiness set to 0 than\n> with swap off. But this has always been on large memory machines with\n> 64 to 256G memory. Even with fairly late model linux kernels (i.e.\n> 10.04 LTS through 11.04) I've watched the kswapd start up swapping\n> hard on a machine with zero memory pressure and no need for swap.\n> Took about 2 weeks of hard running before kswapd decided to act\n> pathological.\n> \n> Seen it with swap on, with swappiness to 0, and overcommit to either 0\n> or 2 on big machines. Once we just took the swap partitions away it\n> the machines ran fine.\n\nI've experienced the issues in exactly the opposite case - machines with\nvery little memory (like a VPS with 512MB of RAM). I did want to operate\nthat machine without a swap yet it kept failing because of OOM errors or\npanicking (depending on the overcommit ratio value).\n\nTurns out it's quite difficult (~ almost impossible) tune the VM for a\nswap-less case. In the end I've just added a 256MB of swap and\neverything started to work fine - funny thing is the swap is not used at\nall (according to sar).\n\nT.\n",
"msg_date": "Wed, 04 Apr 2012 18:54:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
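A sketch of adding a small swap file like the one described above, for a box that must not run completely swap-less (assuming root access; the 256MB size and the /swapfile path are illustrative):

    dd if=/dev/zero of=/swapfile bs=1M count=256     # create a 256MB file
    chmod 600 /swapfile
    mkswap /swapfile                                 # format it as swap
    swapon /swapfile                                 # enable it now
    echo "/swapfile none swap sw 0 0" >> /etc/fstab  # enable it at boot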
{
"msg_contents": "On Wed, Apr 4, 2012 at 4:42 AM, Cesar Martin <[email protected]> wrote:\n> Hello,\n>\n> Yesterday I changed the kernel setting, that said\n> Scott, vm.zone_reclaim_mode = 0. I have done new benchmarks and I have\n> noticed changes at least in Postgres:\n>\n> First exec:\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\n> rows=6765779 width=323) (actual time=0.020..7984.707 rows=6765779 loops=1)\n> Total runtime: 12699.008 ms\n> (2 filas)\n>\n> Second:\n> EXPLAIN ANALYZE SELECT * from company_news_internet_201111;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\n> rows=6765779 width=323) (actual time=0.023..1767.440 rows=6765779 loops=1)\n> Total runtime: 2696.901 ms\n>\n> It seems that now data is being cached right...\n>\n> The large query in first exec takes 80 seconds and in second exec takes\n> around 23 seconds. This is not spectacular but is better than yesterday.\n>\n> Furthermore the results of dd are strange:\n>\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 803,738 s, 171 MB/s\n>\n> 171 MB/s I think is bad value for 12 SAS RAID10... And when I execute iostat\n> during the dd execution i obtain results like:\n> sdc 1514,62 0,01 108,58 11 117765\n> sdc 3705,50 0,01 316,62 0 633\n> sdc 2,00 0,00 0,05 0 0\n> sdc 920,00 0,00 63,49 0 126\n> sdc 8322,50 0,03 712,00 0 1424\n> sdc 6662,50 0,02 568,53 0 1137\n> sdc 0,00 0,00 0,00 0 0\n> sdc 1,50 0,00 0,04 0 0\n> sdc 6413,00 0,01 412,28 0 824\n> sdc 13107,50 0,03 867,94 0 1735\n> sdc 0,00 0,00 0,00 0 0\n> sdc 1,50 0,00 0,03 0 0\n> sdc 9719,00 0,03 815,49 0 1630\n> sdc 2817,50 0,01 272,51 0 545\n> sdc 1,50 0,00 0,05 0 0\n> sdc 1181,00 0,00 71,49 0 142\n> sdc 7225,00 0,01 362,56 0 725\n> sdc 2973,50 0,01 269,97 0 539\n>\n> I don't understand why MB_wrtn/s go from 0 to near 800MB/s constantly during\n> execution.\n\nThis is looking more and more like a a raid controller issue. ISTM\nit's bucking the cache, filling it up and flushing it synchronously.\nyour read results are ok but not what they should be IMO. Maybe it's\nan environmental issue or the card is just a straight up lemon (no\nsurprise in the dell line). Are you using standard drivers, and have\nyou checked for updates? Have you considered contacting dell support?\n\nmerlin\n",
"msg_date": "Wed, 4 Apr 2012 12:16:01 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "Raid controller issue or driver problem was the first problem that I\nstudied.\nI installed Centos 5.4 al the beginning, but I had performance problems,\nand I contacted Dell support... but Centos is not support by Dell... Then I\ninstalled Redhat 6 and we contact Dell with same problem.\nDell say that all is right and that this is a software problem.\nI have installed Centos 5.4, 6.2 and Redhat 6 with similar result, I think\nthat not is driver problem (megasas-raid kernel module).\nI will check kernel updates...\nThanks!\n\nPS. lately I'm pretty disappointed with the quality of the DELL components, is\nnot the first problem we have with hardware in new machines.\n\nEl 4 de abril de 2012 19:16, Merlin Moncure <[email protected]> escribió:\n\n> On Wed, Apr 4, 2012 at 4:42 AM, Cesar Martin <[email protected]> wrote:\n> > Hello,\n> >\n> > Yesterday I changed the kernel setting, that said\n> > Scott, vm.zone_reclaim_mode = 0. I have done new benchmarks and I have\n> > noticed changes at least in Postgres:\n> >\n> > First exec:\n> > EXPLAIN ANALYZE SELECT * from company_news_internet_201111;\n> > QUERY\n> PLAN\n> >\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> > Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\n> > rows=6765779 width=323) (actual time=0.020..7984.707 rows=6765779\n> loops=1)\n> > Total runtime: 12699.008 ms\n> > (2 filas)\n> >\n> > Second:\n> > EXPLAIN ANALYZE SELECT * from company_news_internet_201111;\n> > QUERY\n> PLAN\n> >\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> > Seq Scan on company_news_internet_201111 (cost=0.00..369577.79\n> > rows=6765779 width=323) (actual time=0.023..1767.440 rows=6765779\n> loops=1)\n> > Total runtime: 2696.901 ms\n> >\n> > It seems that now data is being cached right...\n> >\n> > The large query in first exec takes 80 seconds and in second exec takes\n> > around 23 seconds. This is not spectacular but is better than yesterday.\n> >\n> > Furthermore the results of dd are strange:\n> >\n> > dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n> > 16384+0 records in\n> > 16384+0 records out\n> > 137438953472 bytes (137 GB) copied, 803,738 s, 171 MB/s\n> >\n> > 171 MB/s I think is bad value for 12 SAS RAID10... And when I execute\n> iostat\n> > during the dd execution i obtain results like:\n> > sdc 1514,62 0,01 108,58 11 117765\n> > sdc 3705,50 0,01 316,62 0 633\n> > sdc 2,00 0,00 0,05 0 0\n> > sdc 920,00 0,00 63,49 0 126\n> > sdc 8322,50 0,03 712,00 0 1424\n> > sdc 6662,50 0,02 568,53 0 1137\n> > sdc 0,00 0,00 0,00 0 0\n> > sdc 1,50 0,00 0,04 0 0\n> > sdc 6413,00 0,01 412,28 0 824\n> > sdc 13107,50 0,03 867,94 0 1735\n> > sdc 0,00 0,00 0,00 0 0\n> > sdc 1,50 0,00 0,03 0 0\n> > sdc 9719,00 0,03 815,49 0 1630\n> > sdc 2817,50 0,01 272,51 0 545\n> > sdc 1,50 0,00 0,05 0 0\n> > sdc 1181,00 0,00 71,49 0 142\n> > sdc 7225,00 0,01 362,56 0 725\n> > sdc 2973,50 0,01 269,97 0 539\n> >\n> > I don't understand why MB_wrtn/s go from 0 to near 800MB/s constantly\n> during\n> > execution.\n>\n> This is looking more and more like a a raid controller issue. ISTM\n> it's bucking the cache, filling it up and flushing it synchronously.\n> your read results are ok but not what they should be IMO. Maybe it's\n> an environmental issue or the card is just a straight up lemon (no\n> surprise in the dell line). 
Are you using standard drivers, and have\n> you checked for updates? Have you considered contacting dell support?\n>\n> merlin\n>\n\n\n\n-- \nCésar Martín Pérez\[email protected]\n",
"msg_date": "Wed, 4 Apr 2012 20:46:33 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Wed, Apr 4, 2012 at 12:46 PM, Cesar Martin <[email protected]> wrote:\n> Raid controller issue or driver problem was the first problem that I\n> studied.\n> I installed Centos 5.4 al the beginning, but I had performance problems, and\n> I contacted Dell support... but Centos is not support by Dell... Then I\n> installed Redhat 6 and we contact Dell with same problem.\n> Dell say that all is right and that this is a software problem.\n> I have installed Centos 5.4, 6.2 and Redhat 6 with similar result, I think\n> that not is driver problem (megasas-raid kernel module).\n> I will check kernel updates...\n> Thanks!\n\nLook for firmware updates to your RAID card.\n",
"msg_date": "Wed, 4 Apr 2012 12:55:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
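One way to read the current firmware level before comparing it against Dell's download page (a sketch; field names vary between MegaCli releases, and omreport is only available when Dell OpenManage is installed):

    MegaCli -AdpAllInfo -aALL | grep -i 'fw'   # LSI view of the H800 firmware/package build
    omreport storage controller                # Dell OpenManage view of the same controller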
{
"msg_contents": "On Wed, Apr 4, 2012 at 1:55 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Apr 4, 2012 at 12:46 PM, Cesar Martin <[email protected]> wrote:\n>> Raid controller issue or driver problem was the first problem that I\n>> studied.\n>> I installed Centos 5.4 al the beginning, but I had performance problems, and\n>> I contacted Dell support... but Centos is not support by Dell... Then I\n>> installed Redhat 6 and we contact Dell with same problem.\n>> Dell say that all is right and that this is a software problem.\n>> I have installed Centos 5.4, 6.2 and Redhat 6 with similar result, I think\n>> that not is driver problem (megasas-raid kernel module).\n>> I will check kernel updates...\n>> Thanks!\n>\n> Look for firmware updates to your RAID card.\n\nallready checked that: look here:\nhttp://www.dell.com/support/drivers/us/en/04/DriverDetails?DriverId=R269683&FileId=2731095787&DriverName=Dell%20PERC%20H800%20Adapter%2C%20v.12.3.0-0032%2C%20A02&urlProductCode=False\n\nlatest update is july 2010. i've been down this road with dell many\ntimes and I would advise RMAing the whole server -- that will at least\nget their attention. dell performance/software support is worthless\nand it's a crying shame blowing 10 grand on a server only to have it\nunderperform your 3 year old workhorse.\n\nmerlin\n",
"msg_date": "Wed, 4 Apr 2012 13:58:58 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On 4.4.2012 20:46, Cesar Martin wrote:\n> Raid controller issue or driver problem was the first problem that I\n> studied.\n> I installed Centos 5.4 al the beginning, but I had performance problems,\n> and I contacted Dell support... but Centos is not support by Dell...\n> Then I installed Redhat 6 and we contact Dell with same problem.\n> Dell say that all is right and that this is a software problem.\n> I have installed Centos 5.4, 6.2 and Redhat 6 with similar result, I\n> think that not is driver problem (megasas-raid kernel module).\n> I will check kernel updates...\n> Thanks!\n\nWell, there are different meanings of 'working'. Obviously you mean\n'gives reasonable performance' while Dell understands 'is not on fire'.\n\nIIRC H800 is just a 926x controller from LSI, so it's probably based on\nLSI 2108. Can you post basic info about the setting, i.e.\n\n MegaCli -AdpAllInfo -aALL\n\nor something like that? I'm especially interested in the access/cache\npolicies, cache drop interval .etc, i.e.\n\n MegaCli -LDGetProp (-Cache | -Access | -Name | -DskCache)\n\nWhat I'd do next is testing a much smaller array (even a single drive)\nto see if the issue exists. If it works, try to add another drive etc.\nIt's much easier to show them something's wrong. The simpler the test\ncase, the better.\n\nI've found this (it's about a 2108-based controller from LSI):\n\nhttp://www.xbitlabs.com/articles/storage/display/lsi-megaraid-sas9260-8i_3.html#sect0\n\nThe paragraphs below the diagram are interesting. Not sure if they\ndescribe the same issue you have, but maybe it's related.\n\nAnyway, it's quite usual that a RAID controller has about 50% write\nperformance compared to read performance, usually due to on-board CPU\nbottleneck. You do have ~ 530 MB/s and 170 MB/s, so it's not exactly 50%\nbut it's not very far.\n\nBut the fluctuation, that surely is strange. What are the page cache\ndirty limits, i.e.\n\ncat /proc/sys/vm/dirty_background_ratio\ncat /proc/sys/vm/dirty_ratio\n\nThat's probably #1 source I've seen responsible for such issues (on\nmachines with a lot of RAM).\n\nTomas\n\n",
"msg_date": "Wed, 04 Apr 2012 21:50:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "> From: Tomas Vondra <[email protected]>\n\n> But the fluctuation, that surely is strange. What are the page cache\n> dirty limits, i.e.\n> \n> cat /proc/sys/vm/dirty_background_ratio\n> cat /proc/sys/vm/dirty_ratio\n> \n> That's probably #1 source I've seen responsible for such issues (on\n> machines with a lot of RAM).\n> \n\n+1 on that.\n\nWe're running similar 32 core dell servers with H700s and 128Gb RAM.\n\nWith those at the defaults (I don't recall if it's 5 and 10 respectively) you're looking at 3.2Gb of dirty pages before pdflush flushes them and 6.4Gb before the process is forced to flush its self.\n\n> From: Tomas Vondra <[email protected]> > But the fluctuation, that surely is strange. What are the page cache> dirty limits, i.e.> > cat /proc/sys/vm/dirty_background_ratio> cat /proc/sys/vm/dirty_ratio> > That's probably #1 source I've seen responsible for such issues (on> machines with a lot of RAM).> +1 on that.We're running similar 32 core dell servers with H700s and 128Gb RAM.With those at the defaults (I don't recall if it's 5 and 10 respectively) you're looking at 3.2Gb of dirty pages before pdflush flushes them and 6.4Gb before the process is forced to flush its self.",
"msg_date": "Thu, 5 Apr 2012 14:10:01 +0100 (BST)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
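A back-of-the-envelope check of those thresholds on any given box (a sketch; the kernel applies the ratios to dirtyable memory rather than MemTotal, so treat the result as an approximation):

    bg=$(cat /proc/sys/vm/dirty_background_ratio)
    hard=$(cat /proc/sys/vm/dirty_ratio)
    mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    echo "background flush starts at $((mem_kb * bg / 100 / 1024)) MB dirty"
    echo "writers block at $((mem_kb * hard / 100 / 1024)) MB dirty"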
{
"msg_contents": "On 5.4.2012 17:17, Cesar Martin wrote:\n> Well, I have installed megacli on server and attach the results in file\n> megacli.txt. Also we have \"Dell Open Manage\" install in server, that can\n> generate a log of H800. I attach to mail with name lsi_0403.\n> \n> About dirty limits, I have default values:\n> vm.dirty_background_ratio = 10\n> vm.dirty_ratio = 20\n> \n> I have compared with other server and values are the same, except in\n> centos 5.4 database production server that have vm.dirty_ratio = 40\n\nDo the other machines have the same amount of RAM? The point is that the\nvalues that work with less memory don't work that well with large\namounts of memory (and the amount of RAM did grow a lot recently).\n\nFor example a few years ago the average amount of RAM was ~8GB. In that\ncase the\n\n vm.dirty_background_ratio = 10 => 800MB\n vm.dirty_ratio = 20 => 1600MB\n\nwhich is all peachy if you have a decent controller with a write cache.\nBut turn that to 64GB and suddenly\n\n vm.dirty_background_ratio = 10 => 6.4GB\n vm.dirty_ratio = 20 => 12.8GB\n\nThe problem is that there'll be a lot of data waiting (for 30 seconds by\ndefault), and then suddenly it starts writing all of them to the\ncontroller. Such systems behave just as your system - short strokes of\nwrites interleaved with 'no activity'.\n\nGreg Smith wrote a nice howto about this - it's from 2007 but all the\nrecommendations are still valid:\n\n http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nTL;DR:\n\n - decrease the dirty_background_ratio/dirty_ratio (or use *_bytes)\n\n - consider decreasing the dirty_expire_centiseconds\n\n\nT.\n",
"msg_date": "Thu, 05 Apr 2012 17:49:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
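A sketch of the kind of tuning described above for a large-memory box with a battery-backed write cache (the byte values are purely illustrative, not a recommendation; the *_bytes variants need kernel 2.6.29 or newer, so older kernels must use the _ratio forms, and note the sysctl is spelled dirty_expire_centisecs):

    # start background writeback early and cap the amount of dirty data
    sysctl -w vm.dirty_background_bytes=268435456   # 256MB
    sysctl -w vm.dirty_bytes=1073741824             # 1GB

    # expire dirty pages sooner than the 30-second default
    sysctl -w vm.dirty_expire_centisecs=1000        # 10 seconds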
{
"msg_contents": "On Thu, Apr 5, 2012 at 10:49 AM, Tomas Vondra <[email protected]> wrote:\n> On 5.4.2012 17:17, Cesar Martin wrote:\n>> Well, I have installed megacli on server and attach the results in file\n>> megacli.txt. Also we have \"Dell Open Manage\" install in server, that can\n>> generate a log of H800. I attach to mail with name lsi_0403.\n>>\n>> About dirty limits, I have default values:\n>> vm.dirty_background_ratio = 10\n>> vm.dirty_ratio = 20\n>>\n>> I have compared with other server and values are the same, except in\n>> centos 5.4 database production server that have vm.dirty_ratio = 40\n>\n> Do the other machines have the same amount of RAM? The point is that the\n> values that work with less memory don't work that well with large\n> amounts of memory (and the amount of RAM did grow a lot recently).\n>\n> For example a few years ago the average amount of RAM was ~8GB. In that\n> case the\n>\n> vm.dirty_background_ratio = 10 => 800MB\n> vm.dirty_ratio = 20 => 1600MB\n>\n> which is all peachy if you have a decent controller with a write cache.\n> But turn that to 64GB and suddenly\n>\n> vm.dirty_background_ratio = 10 => 6.4GB\n> vm.dirty_ratio = 20 => 12.8GB\n>\n> The problem is that there'll be a lot of data waiting (for 30 seconds by\n> default), and then suddenly it starts writing all of them to the\n> controller. Such systems behave just as your system - short strokes of\n> writes interleaved with 'no activity'.\n>\n> Greg Smith wrote a nice howto about this - it's from 2007 but all the\n> recommendations are still valid:\n>\n> http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n>\n> TL;DR:\n>\n> - decrease the dirty_background_ratio/dirty_ratio (or use *_bytes)\n>\n> - consider decreasing the dirty_expire_centiseconds\n\nThe original problem is read based performance issue though and this\nwill not have any affect on that whatsoever (although it's still\nexcellent advice). Also dd should bypass the o/s buffer cache. I\nstill pretty much convinced that there is a fundamental performance\nissue with the raid card dell needs to explain.\n\nmerlin\n",
"msg_date": "Thu, 5 Apr 2012 13:43:55 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
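Worth noting that a plain dd run goes through the page cache like any other process; it only bypasses the cache when direct I/O is requested explicitly. A sketch of a cache-free rerun of the tests above (assuming GNU dd and a filesystem that supports O_DIRECT):

    dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384 iflag=direct   # read path, no page cache
    dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384 oflag=direct   # write path, no page cache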
{
"msg_contents": "On 5.4.2012 20:43, Merlin Moncure wrote:\n> The original problem is read based performance issue though and this\n> will not have any affect on that whatsoever (although it's still\n> excellent advice). Also dd should bypass the o/s buffer cache. I\n> still pretty much convinced that there is a fundamental performance\n> issue with the raid card dell needs to explain.\n\nWell, there are two issues IMHO.\n\n1) Read performance that's not exactly as good as one'd expect from a\n 12 x 15k SAS RAID10 array. Given that the 15k Cheetah drives usually\n give like 170 MB/s for sequential reads/writes. I'd definitely\n expect more than 533 MB/s when reading the data. At least something\n near 1GB/s (equal to 6 drives).\n\n Hmm, the dd read performance seems to grow over time - I wonder if\n this is the issue with adaptive read policy, as mentioned in the\n xbitlabs report.\n\n Cesar, can you set the read policy to a 'read ahead'\n\n megacli -LDSetProp RA -LALL -aALL\n\n or maybe 'no read-ahead'\n\n megacli -LDSetProp NORA -LALL -aALL\n\n It's worth a try, maybe it somehow conflicts with the way kernel\n handles read-ahead or something. I find these adaptive heuristics\n a bit unpredictable ...\n\n Another thing - I see the patrol reads are enabled. Can you disable\n that and try how that affects the performance?\n\n2) Write performance behaviour, that's much more suspicious ...\n\n Not sure if it's related to the read performance issues.\n\nTomas\n",
"msg_date": "Thu, 05 Apr 2012 22:06:49 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
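A sketch of the patrol-read check suggested above, alongside the read-policy commands already given (exact option spelling varies a little between MegaCli releases, so treat these as assumptions to verify against the MegaCli help output):

    MegaCli -AdpPR -Info -aALL              # show the current patrol read state
    MegaCli -AdpPR -Dsbl -aALL              # disable patrol read for the duration of the test
    MegaCli -LDGetProp -Cache -LALL -aALL   # confirm the resulting cache/read-ahead policy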
{
"msg_contents": "Hi,\n\nToday I'm doing new benchmarks with RA, NORA, WB and WT in the controller:\n\nWith NORA\n-----------------\ndd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 318,306 s, 432 MB/s\n\nWith RA\n------------\ndd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 179,712 s, 765 MB/s\ndd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 202,948 s, 677 MB/s\ndd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 213,157 s, 645 MB/s\n\nWith Adaptative RA\n-----------------\n[root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 169,533 s, 811 MB/s\n[root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 207,223 s, 663 MB/s\n\nIt's very strange the differences between the same test under same\nconditions... It seems thah adaptative read ahead is the best solution.\n\nFor write test, I apply tuned-adm throughput-performance, that change IO\nelevator to deadline and grow up vm.dirty_ratio to 40.... ?¿?¿?\n\nWith WB\n-------------\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 539,041 s, 255 MB/s\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 505,695 s, 272 MB/s\n\nEnforce WB\n-----------------\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 662,538 s, 207 MB/s\n\nWith WT\n--------------\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 750,615 s, 183 MB/s\n\nI think that this results are more logical... WT results in bad performance\nand differences, inside the same test, are minimum.\n\nLater I have put pair of dd at same time:\n\ndd if=/dev/zero of=/vol02/bonnie/DD2 bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 633,613 s, 217 MB/s\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n16384+0 records in\n16384+0 records out\n137438953472 bytes (137 GB) copied, 732,759 s, 188 MB/s\n\nIs very strange, that with parallel DD I take 400MBps. It's like if Centos\nhave limit in IO throughput of a process...\n\n\nEl 5 de abril de 2012 22:06, Tomas Vondra <[email protected]> escribió:\n\n> On 5.4.2012 20:43, Merlin Moncure wrote:\n> > The original problem is read based performance issue though and this\n> > will not have any affect on that whatsoever (although it's still\n> > excellent advice). Also dd should bypass the o/s buffer cache. I\n> > still pretty much convinced that there is a fundamental performance\n> > issue with the raid card dell needs to explain.\n>\n> Well, there are two issues IMHO.\n>\n> 1) Read performance that's not exactly as good as one'd expect from a\n> 12 x 15k SAS RAID10 array. Given that the 15k Cheetah drives usually\n> give like 170 MB/s for sequential reads/writes. I'd definitely\n> expect more than 533 MB/s when reading the data. 
At least something\n> near 1GB/s (equal to 6 drives).\n>\n> Hmm, the dd read performance seems to grow over time - I wonder if\n> this is the issue with adaptive read policy, as mentioned in the\n> xbitlabs report.\n>\n> Cesar, can you set the read policy to a 'read ahead'\n>\n> megacli -LDSetProp RA -LALL -aALL\n>\n> or maybe 'no read-ahead'\n>\n> megacli -LDSetProp NORA -LALL -aALL\n>\n> It's worth a try, maybe it somehow conflicts with the way kernel\n> handles read-ahead or something. I find these adaptive heuristics\n> a bit unpredictable ...\n>\n> Another thing - I see the patrol reads are enabled. Can you disable\n> that and try how that affects the performance?\n>\n> 2) Write performance behaviour, that's much more suspicious ...\n>\n> Not sure if it's related to the read performance issues.\n>\n> Tomas\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCésar Martín Pérez\[email protected]\n",
"msg_date": "Mon, 9 Apr 2012 18:24:26 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
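One way to repeat the parallel-writer observation above in a controlled way (a sketch; file names are illustrative, and oflag=direct keeps the page cache out of the picture so per-process behaviour is easier to compare):

    # launch two sequential writers at once and time the pair
    for i in 1 2; do
        dd if=/dev/zero of=/vol02/bonnie/DD$i bs=8M count=16384 oflag=direct &
    done
    time wait
    # watch the aggregate throughput with 'iostat -m 2' in another terminal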
{
"msg_contents": "Hi,\n\nFinally the problem was BIOS configuration. DBPM had was set to \"Active\nPower Controller\" I changed this to \"Max Performance\".\nhttp://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx\nNow wirite speed are 550MB/s and read 1,1GB/s.\n\nThank you all for your advice.\n\nEl 9 de abril de 2012 18:24, Cesar Martin <[email protected]> escribió:\n\n> Hi,\n>\n> Today I'm doing new benchmarks with RA, NORA, WB and WT in the controller:\n>\n> With NORA\n> -----------------\n> dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 318,306 s, 432 MB/s\n>\n> With RA\n> ------------\n> dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 179,712 s, 765 MB/s\n> dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 202,948 s, 677 MB/s\n> dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 213,157 s, 645 MB/s\n>\n> With Adaptative RA\n> -----------------\n> [root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 169,533 s, 811 MB/s\n> [root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 207,223 s, 663 MB/s\n>\n> It's very strange the differences between the same test under same\n> conditions... It seems thah adaptative read ahead is the best solution.\n>\n> For write test, I apply tuned-adm throughput-performance, that change IO\n> elevator to deadline and grow up vm.dirty_ratio to 40.... ?¿?¿?\n>\n> With WB\n> -------------\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 539,041 s, 255 MB/s\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 505,695 s, 272 MB/s\n>\n> Enforce WB\n> -----------------\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 662,538 s, 207 MB/s\n>\n> With WT\n> --------------\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 750,615 s, 183 MB/s\n>\n> I think that this results are more logical... WT results in bad\n> performance and differences, inside the same test, are minimum.\n>\n> Later I have put pair of dd at same time:\n>\n> dd if=/dev/zero of=/vol02/bonnie/DD2 bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 633,613 s, 217 MB/s\n> dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=16384\n> 16384+0 records in\n> 16384+0 records out\n> 137438953472 bytes (137 GB) copied, 732,759 s, 188 MB/s\n>\n> Is very strange, that with parallel DD I take 400MBps. 
It's like if Centos\n> have limit in IO throughput of a process...\n>\n>\n> El 5 de abril de 2012 22:06, Tomas Vondra <[email protected]> escribió:\n>\n> On 5.4.2012 20:43, Merlin Moncure wrote:\n>> > The original problem is read based performance issue though and this\n>> > will not have any affect on that whatsoever (although it's still\n>> > excellent advice). Also dd should bypass the o/s buffer cache. I\n>> > still pretty much convinced that there is a fundamental performance\n>> > issue with the raid card dell needs to explain.\n>>\n>> Well, there are two issues IMHO.\n>>\n>> 1) Read performance that's not exactly as good as one'd expect from a\n>> 12 x 15k SAS RAID10 array. Given that the 15k Cheetah drives usually\n>> give like 170 MB/s for sequential reads/writes. I'd definitely\n>> expect more than 533 MB/s when reading the data. At least something\n>> near 1GB/s (equal to 6 drives).\n>>\n>> Hmm, the dd read performance seems to grow over time - I wonder if\n>> this is the issue with adaptive read policy, as mentioned in the\n>> xbitlabs report.\n>>\n>> Cesar, can you set the read policy to a 'read ahead'\n>>\n>> megacli -LDSetProp RA -LALL -aALL\n>>\n>> or maybe 'no read-ahead'\n>>\n>> megacli -LDSetProp NORA -LALL -aALL\n>>\n>> It's worth a try, maybe it somehow conflicts with the way kernel\n>> handles read-ahead or something. I find these adaptive heuristics\n>> a bit unpredictable ...\n>>\n>> Another thing - I see the patrol reads are enabled. Can you disable\n>> that and try how that affects the performance?\n>>\n>> 2) Write performance behaviour, that's much more suspicious ...\n>>\n>> Not sure if it's related to the read performance issues.\n>>\n>> Tomas\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n> --\n> César Martín Pérez\n> [email protected]\n>\n>\n\n\n-- \nCésar Martín Pérez\[email protected]\n\nHi,Finally the problem was BIOS configuration. DBPM had was set to \"Active Power Controller\" I changed this to \"Max Performance\". http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx\nNow wirite speed are 550MB/s and read 1,1GB/s.Thank you all for your advice.El 9 de abril de 2012 18:24, Cesar Martin <[email protected]> escribió:\nHi,Today I'm doing new benchmarks with RA, NORA, WB and WT in the controller:\nWith NORA-----------------dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=16384\n16384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 318,306 s, 432 MB/sWith RA\n------------dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 179,712 s, 765 MB/s\ndd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 202,948 s, 677 MB/s\ndd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 213,157 s, 645 MB/s\nWith Adaptative RA-----------------[root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1638416384+0 records in16384+0 records out\n137438953472 bytes (137 GB) copied, 169,533 s, 811 MB/s[root@cltbbdd01 ~]# dd if=/vol02/bonnie/DD of=/dev/null bs=8M count=1638416384+0 records in\n16384+0 records out137438953472 bytes (137 GB) copied, 207,223 s, 663 MB/s\nIt's very strange the differences between the same test under same conditions... 
It seems thah adaptative read ahead is the best solution.For write test, I apply tuned-adm throughput-performance, that change IO elevator to deadline and grow up vm.dirty_ratio to 40.... ?¿?¿?\nWith WB-------------dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 539,041 s, 255 MB/s\n\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 505,695 s, 272 MB/s\nEnforce WB-----------------dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1638416384+0 records in16384+0 records out\n137438953472 bytes (137 GB) copied, 662,538 s, 207 MB/s\nWith WT--------------dd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 750,615 s, 183 MB/s\nI think that this results are more logical... WT results in bad performance and differences, inside the same test, are minimum.Later I have put pair of dd at same time: \ndd if=/dev/zero of=/vol02/bonnie/DD2 bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 633,613 s, 217 MB/s\n\ndd if=/dev/zero of=/vol02/bonnie/DD bs=8M count=1638416384+0 records in16384+0 records out137438953472 bytes (137 GB) copied, 732,759 s, 188 MB/s\nIs very strange, that with parallel DD I take 400MBps. It's like if Centos have limit in IO throughput of a process...\nEl 5 de abril de 2012 22:06, Tomas Vondra <[email protected]> escribió:\nOn 5.4.2012 20:43, Merlin Moncure wrote:\n> The original problem is read based performance issue though and this\n> will not have any affect on that whatsoever (although it's still\n> excellent advice). Also dd should bypass the o/s buffer cache. I\n> still pretty much convinced that there is a fundamental performance\n> issue with the raid card dell needs to explain.\n\nWell, there are two issues IMHO.\n\n1) Read performance that's not exactly as good as one'd expect from a\n 12 x 15k SAS RAID10 array. Given that the 15k Cheetah drives usually\n give like 170 MB/s for sequential reads/writes. I'd definitely\n expect more than 533 MB/s when reading the data. At least something\n near 1GB/s (equal to 6 drives).\n\n Hmm, the dd read performance seems to grow over time - I wonder if\n this is the issue with adaptive read policy, as mentioned in the\n xbitlabs report.\n\n Cesar, can you set the read policy to a 'read ahead'\n\n megacli -LDSetProp RA -LALL -aALL\n\n or maybe 'no read-ahead'\n\n megacli -LDSetProp NORA -LALL -aALL\n\n It's worth a try, maybe it somehow conflicts with the way kernel\n handles read-ahead or something. I find these adaptive heuristics\n a bit unpredictable ...\n\n Another thing - I see the patrol reads are enabled. Can you disable\n that and try how that affects the performance?\n\n2) Write performance behaviour, that's much more suspicious ...\n\n Not sure if it's related to the read performance issues.\n\nTomas\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- César Martín Pé[email protected]\n\n-- César Martín Pé[email protected]",
"msg_date": "Mon, 16 Apr 2012 16:13:42 +0200",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Mon, Apr 16, 2012 at 8:13 AM, Cesar Martin <[email protected]> wrote:\n> Hi,\n>\n> Finally the problem was BIOS configuration. DBPM had was set to \"Active\n> Power Controller\" I changed this to \"Max\n> Performance\". http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx\n> Now wirite speed are 550MB/s and read 1,1GB/s.\n\nWhy in the world would a server be delivered to a customer with such a\nsetting turned on? ugh.\n",
"msg_date": "Mon, 16 Apr 2012 09:45:57 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Mon, Apr 16, 2012 at 10:45 AM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Apr 16, 2012 at 8:13 AM, Cesar Martin <[email protected]> wrote:\n>> Hi,\n>>\n>> Finally the problem was BIOS configuration. DBPM had was set to \"Active\n>> Power Controller\" I changed this to \"Max\n>> Performance\". http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx\n>> Now wirite speed are 550MB/s and read 1,1GB/s.\n>\n> Why in the world would a server be delivered to a customer with such a\n> setting turned on? ugh.\n\nlikely informal pressure to reduce power consumption. anyways, this\nverifies my suspicion that it was a dell problem. in my dealings with\nthem, you truly have to threaten to send the server back then the\nsolution magically appears. don't spend time and money playing their\n'qualified environment' game -- it never works...just tell them to\nshove it.\n\nthere are a number of second tier vendors that give good value and\nallow you to to things like install your own disk drives without\ngetting your support terminated. of course, you lose the 'enterprise\nsupport', to which I give a value of approximately zero.\n\nmerlin\n",
"msg_date": "Mon, 16 Apr 2012 11:08:44 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Mon, Apr 16, 2012 at 10:08 AM, Merlin Moncure <[email protected]> wrote:\n> On Mon, Apr 16, 2012 at 10:45 AM, Scott Marlowe <[email protected]> wrote:\n>> On Mon, Apr 16, 2012 at 8:13 AM, Cesar Martin <[email protected]> wrote:\n>>> Hi,\n>>>\n>>> Finally the problem was BIOS configuration. DBPM had was set to \"Active\n>>> Power Controller\" I changed this to \"Max\n>>> Performance\". http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx\n>>> Now wirite speed are 550MB/s and read 1,1GB/s.\n>>\n>> Why in the world would a server be delivered to a customer with such a\n>> setting turned on? ugh.\n>\n> likely informal pressure to reduce power consumption. anyways, this\n> verifies my suspicion that it was a dell problem. in my dealings with\n> them, you truly have to threaten to send the server back then the\n> solution magically appears. don't spend time and money playing their\n> 'qualified environment' game -- it never works...just tell them to\n> shove it.\n>\n> there are a number of second tier vendors that give good value and\n> allow you to to things like install your own disk drives without\n> getting your support terminated. of course, you lose the 'enterprise\n> support', to which I give a value of approximately zero.\n\nDell's support never even came close to what I used to get from Aberdeen.\n",
"msg_date": "Mon, 16 Apr 2012 10:18:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "> From: Scott Marlowe <[email protected]>\n>On Mon, Apr 16, 2012 at 8:13 AM, Cesar Martin <[email protected]> wrote:\n>> Hi,\n>>\n>> Finally the problem was BIOS configuration. DBPM had was set to \"Active\n>> Power Controller\" I changed this to \"Max\n>> Performance\". http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx\n>> Now wirite speed are 550MB/s and read 1,1GB/s.\n>\n>Why in the world would a server be delivered to a customer with such a\n>setting turned on? ugh.\n\n\nBecause it's Dell and that's what they do. \n\n\nWhen our R910s arrived, despite them knowing what we were using them for, they'd installed the memory to use only one channel per cpu. Burried deep in their manual I discovered that they called this \"power optimised\" mode and I had to buy a whole extra bunch of risers to be able to use all of the channels properly.\n\nIf it wasn't for proper load testing, and Greg Smiths stream scaling tests I don't think I'd even have spotted it.\n",
"msg_date": "Mon, 16 Apr 2012 17:31:47 +0100 (BST)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
},
{
"msg_contents": "On Mon, Apr 16, 2012 at 10:31 AM, Glyn Astill <[email protected]> wrote:\n>> From: Scott Marlowe <[email protected]>\n>>On Mon, Apr 16, 2012 at 8:13 AM, Cesar Martin <[email protected]> wrote:\n>>> Hi,\n>>>\n>>> Finally the problem was BIOS configuration. DBPM had was set to \"Active\n>>> Power Controller\" I changed this to \"Max\n>>> Performance\". http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx\n>>> Now wirite speed are 550MB/s and read 1,1GB/s.\n>>\n>>Why in the world would a server be delivered to a customer with such a\n>>setting turned on? ugh.\n>\n>\n> Because it's Dell and that's what they do.\n>\n>\n> When our R910s arrived, despite them knowing what we were using them for, they'd installed the memory to use only one channel per cpu. Burried deep in their manual I discovered that they called this \"power optimised\" mode and I had to buy a whole extra bunch of risers to be able to use all of the channels properly.\n>\n> If it wasn't for proper load testing, and Greg Smiths stream scaling tests I don't think I'd even have spotted it.\n\nSee and that's where a small technically knowledgeable supplier is so\ngreat. \"No you don't want 8 8G dimms, you want 16 4G dimms.\" etc.\n",
"msg_date": "Mon, 16 Apr 2012 11:47:52 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: H800 + md1200 Performance problem"
}
] |
[
{
"msg_contents": "Hi,\n\ni've ran into a planning problem.\n\nDedicated PostgreSQL Server:\n\"PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2\n20080704 (Red Hat 4.1.2-46), 64-bit\"\nMemory: 8GB\n4CPUs\n\nThe problem is reduced to the following: there are 2 tables:\n-product (3millions rows, 1GB)\n-product_parent (3000rows, 0.5MB)\n\nIf effective_cache_size has a greater value (6GB), this select has a bad\nplanning and long query time (2000ms):\n\nselect distinct product_code from product p_\ninner join product_parent par_ on p_.parent_id=par_.id\nwhere par_.parent_name like 'aa%' limit 2\n\n\nIf effective_cache_size is smaller (32MB), planning is ok and query is\nfast. (10ms)\nIn the worst case (effective_cache_size=6GB) the speed depends on the value\nof 'limit' (in select): if it is smaller, query is slower. (12ms)\n\n\nGood planning: http://explain.depesz.com/s/0FD\n\"Limit (cost=3704.00..3704.02 rows=2 width=5) (actual time=0.215..0.217\nrows=1 loops=1)\"\n\" -> HashAggregate (cost=3704.00..3712.85 rows=885 width=5) (actual\ntime=0.213..0.215 rows=1 loops=1)\"\n\" -> Nested Loop (cost=41.08..3701.79 rows=885 width=5) (actual\ntime=0.053..0.175 rows=53 loops=1)\"\n\" -> Index Scan using telepulesbugreport_nev_idx on\nproduct_parent par_ (cost=0.00..8.27 rows=1 width=4) (actual\ntime=0.016..0.018 rows=1 loops=1)\"\n\" Index Cond: (((parent_name)::text ~>=~ 'aa'::text) AND\n((parent_name)::text ~<~ 'ab'::text))\"\n\" Filter: ((parent_name)::text ~~ 'aa%'::text)\"\n\" -> Bitmap Heap Scan on product p_ (cost=41.08..3680.59\nrows=1034 width=9) (actual time=0.033..0.125 rows=53 loops=1)\"\n\" Recheck Cond: (parent_id = par_.id)\"\n\" -> Bitmap Index Scan on\nkapubugreport_telepules_id_idx (cost=0.00..40.82 rows=1034 width=0)\n(actual time=0.024..0.024 rows=53 loops=1)\"\n\" Index Cond: (parent_id = par_.id)\"\n\"Total runtime: 0.289 ms\"\n\n\nBad planning: http://explain.depesz.com/s/yBh\n\"Limit (cost=0.00..854.37 rows=2 width=5) (actual time=1799.209..4344.041\nrows=1 loops=1)\"\n\" -> Unique (cost=0.00..378059.84 rows=885 width=5) (actual\ntime=1799.207..4344.038 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..378057.63 rows=885 width=5) (actual\ntime=1799.204..4344.020 rows=53 loops=1)\"\n\" Join Filter: (p_.parent_id = par_.id)\"\n\" -> Index Scan using kapubugreport_irsz_telepules_id_idx on\nproduct p_ (cost=0.00..334761.59 rows=2885851 width=9) (actual\ntime=0.015..1660.449 rows=2884172 loops=1)\"\n\" -> Materialize (cost=0.00..8.27 rows=1 width=4) (actual\ntime=0.000..0.000 rows=1 loops=2884172)\"\n\" -> Index Scan using telepulesbugreport_nev_idx on\nproduct_parent par_ (cost=0.00..8.27 rows=1 width=4) (actual\ntime=0.013..0.014 rows=1 loops=1)\"\n\" Index Cond: (((parent_name)::text ~>=~\n'aa'::text) AND ((parent_name)::text ~<~ 'ab'::text))\"\n\" Filter: ((parent_name)::text ~~ 'aa%'::text)\"\n\"Total runtime: 4344.083 ms\"\n\n\n\n\n\nschema:\n\nCREATE TABLE product\n(\n id serial NOT NULL,\n parent_id integer NOT NULL,\n product_code character varying COLLATE pg_catalog.\"C\" NOT NULL,\n product_name character varying NOT NULL\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE product\n OWNER TO aa;\n\n\nCREATE INDEX product_code_parent_id_idx\n ON product\n USING btree\n (product_code COLLATE pg_catalog.\"C\" , parent_id );\n\n\nCREATE INDEX product_name_idx\n ON product\n USING btree\n (product_name COLLATE pg_catalog.\"default\" );\n\n\nCREATE INDEX product_parent_id_idx\n ON product\n USING btree\n (parent_id );\n\n\nCREATE INDEX product_parent_id_ocde_idx\n ON 
product\n USING btree\n (parent_id , product_code COLLATE pg_catalog.\"C\" );\n\n\nCREATE TABLE product_parent\n(\n id serial NOT NULL,\n parent_name character varying NOT NULL,\n CONSTRAINT telepulesbugreport_pkey PRIMARY KEY (id )\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE product_parent\n OWNER TO aa;\n\nCREATE INDEX product_parent_name_idx\n ON product_parent\n USING btree\n (parent_name COLLATE pg_catalog.\"default\" varchar_pattern_ops);\n\n\nI hope you can help me... :)\nBest Regards,\nIstvan\n\n",
"msg_date": "Tue, 3 Apr 2012 17:11:55 +0200",
"msg_from": "Istvan Endredy <[email protected]>",
"msg_from_op": true,
"msg_subject": "bad planning with 75% effective_cache_size"
},
{
"msg_contents": "Istvan Endredy <[email protected]> wrote:\n \n> i've ran into a planning problem.\n \n> If effective_cache_size has a greater value (6GB), this select has\n> a bad planning and long query time (2000ms):\n \nCould you try that configuration with one change and let us know how\nit goes?:\n \nset cpu_tuple_cost = '0.05';\n \nI've seen an awful lot of queries benefit from a higher value for\nthat setting, and I'm starting to think a change to that default is\nin order.\n \n-Kevin\n",
"msg_date": "Thu, 05 Apr 2012 10:41:57 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
},
{
"msg_contents": "Hi Kevin,\n\nthanks for the suggestion. It was my 1st task to try this after Easter. :)\n\nSorry to say this parameter doesn't help:\n\nbad planning:\nset cpu_tuple_cost = '0.05';\nset effective_cache_size to '6GB';\n1622ms\nhttp://explain.depesz.com/s/vuO\n\nor\nset cpu_tuple_cost = '0.01';\nset effective_cache_size to '6GB';\n1634ms\nhttp://explain.depesz.com/s/YqS\n\ngood planning:\nset effective_cache_size to '32MB';\nset cpu_tuple_cost = '0.05';\n22ms\nhttp://explain.depesz.com/s/521\n\nor\nset effective_cache_size to '32MB';\nset cpu_tuple_cost = '0.01';\n12ms\nhttp://explain.depesz.com/s/Ypc\n\nthis was the query:\nselect distinct product_code from product p_\ninner join product_parent par_ on p_.parent_id=par_.id\nwhere par_.parent_name like 'aa%' limit 2\n\n\nAny idea?\nThanks in advance,\nIstvan\n\n\n2012/4/5 Kevin Grittner <[email protected]>\n\n> Istvan Endredy <[email protected]> wrote:\n>\n> > i've ran into a planning problem.\n>\n> > If effective_cache_size has a greater value (6GB), this select has\n> > a bad planning and long query time (2000ms):\n>\n> Could you try that configuration with one change and let us know how\n> it goes?:\n>\n> set cpu_tuple_cost = '0.05';\n>\n> I've seen an awful lot of queries benefit from a higher value for\n> that setting, and I'm starting to think a change to that default is\n> in order.\n>\n> -Kevin\n>\n\nHi Kevin,thanks for the suggestion. It was my 1st task to try this after Easter. :)Sorry to say this parameter doesn't help:bad planning:set cpu_tuple_cost = '0.05';set effective_cache_size to '6GB';\n1622mshttp://explain.depesz.com/s/vuOorset cpu_tuple_cost = '0.01';\nset effective_cache_size to '6GB';\n1634mshttp://explain.depesz.com/s/YqSgood planning:set effective_cache_size to '32MB';set cpu_tuple_cost = '0.05';22mshttp://explain.depesz.com/s/521\nor set effective_cache_size to '32MB';\nset cpu_tuple_cost = '0.01';\n12mshttp://explain.depesz.com/s/Ypc\nthis was the query:select distinct product_code from product p_\ninner join product_parent par_ on p_.parent_id=par_.id \nwhere par_.parent_name like 'aa%' limit 2\nAny idea?Thanks in advance,Istvan2012/4/5 Kevin Grittner <[email protected]>\nIstvan Endredy <[email protected]> wrote:\n\n> i've ran into a planning problem.\n\n> If effective_cache_size has a greater value (6GB), this select has\n> a bad planning and long query time (2000ms):\n\nCould you try that configuration with one change and let us know how\nit goes?:\n\nset cpu_tuple_cost = '0.05';\n\nI've seen an awful lot of queries benefit from a higher value for\nthat setting, and I'm starting to think a change to that default is\nin order.\n\n-Kevin",
"msg_date": "Tue, 10 Apr 2012 09:19:49 +0200",
"msg_from": "Istvan Endredy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
},
{
"msg_contents": "could you try to set the statistics parameter to 1000 (ALTER TABLE SET\nSTATISTICS) for these tables, then run analyze and try again?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/bad-planning-with-75-effective-cache-size-tp5620363p5642356.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sun, 15 Apr 2012 13:01:59 -0700 (PDT)",
"msg_from": "Filippos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
},
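The previous message names the command but does not spell it out; here is a minimal sketch of the suggested statistics change, assuming (my guess, not stated in the thread) that the join and filter columns parent_id and parent_name are the ones worth targeting:

    -- Raise the per-column statistics target, then re-ANALYZE so the
    -- planner plans from the richer histograms.
    ALTER TABLE product ALTER COLUMN parent_id SET STATISTICS 1000;
    ALTER TABLE product_parent ALTER COLUMN parent_name SET STATISTICS 1000;
    ANALYZE product;
    ANALYZE product_parent;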
{
"msg_contents": "How about\n\nwith par_ as (select * from product_parent where parent_name like 'aa%' )\nselect distinct product_code from product p_\ninner join par_ on p_.parent_id=par_.id\nlimit 2\n\n?\n\n\n2012/4/3 Istvan Endredy <[email protected]>\n\n> Hi,\n>\n> i've ran into a planning problem.\n>\n>\n> select distinct product_code from product p_\n> inner join product_parent par_ on p_.parent_id=par_.id\n> where par_.parent_name like 'aa%' limit 2\n>\n>\n> If effective_cache_size is smaller (32MB), planning is ok and query is\n> fast. (10ms)\n> In the worst case (effective_cache_size=6GB) the speed depends on the\n> value of 'limit' (in select): if it is smaller, query is slower. (12ms)\n>\n>\n>\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nHow aboutwith par_ as (select * from product_parent where parent_name like 'aa%' )select distinct product_code from product p_inner join par_ on p_.parent_id=par_.id limit 2\n?2012/4/3 Istvan Endredy <[email protected]>\nHi,i've ran into a planning problem.select distinct product_code from product p_inner join product_parent par_ on p_.parent_id=par_.id where par_.parent_name like 'aa%' limit 2\nIf effective_cache_size is smaller (32MB), planning is ok and query is fast. (10ms)\nIn the worst case (effective_cache_size=6GB) the speed depends on the value of 'limit' (in select): if it is smaller, query is slower. (12ms)-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Wed, 18 Apr 2012 09:51:19 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
}
] |
[
{
"msg_contents": "Hi All;\n\nI have a query that wants to update a table based on a join like this:\n\nupdate test_one\nset f_key = t.f_key\nfrom\n upd_temp1 t,\n test_one t2\nwhere\n t.id_number = t2.id_number\n\nupd_temp1 has 248,762 rows\ntest_one has 248,762 rows\n\ntest_one has an index on f_key and an index on id_number\nupd_temp1 has an index on id_number\n\n\nThe explain plan looks like this:\n Update (cost=0.00..3212284472.90 rows=256978208226 width=121)\n -> Nested Loop (cost=0.00..3212284472.90 rows=256978208226 width=121)\n -> Merge Join (cost=0.00..51952.68 rows=1033028 width=20)\n Merge Cond: ((t.id_number)::text = (t2.id_number)::text)\n -> Index Scan using idx_tmp_001a on upd_temp1 t \n(cost=0.00..12642.71 rows=248762 width=\n52)\n -> Materialize (cost=0.00..23814.54 rows=248762 width=17)\n -> Index Scan using index_idx1 on test_one t2 \n(cost=0.00..23192.64 rows\n=248762 width=17)\n -> Materialize (cost=0.00..6750.43 rows=248762 width=101)\n -> Seq Scan on test_one (cost=0.00..5506.62 \nrows=248762 width=101)\n(9 rows)\n\n\nThe update never finishes, we always stop it after about 30min to an hour.\n\nAnyone have any thoughts per boosting performance?\n\nThanks in advance\n\n\n\n",
"msg_date": "Tue, 03 Apr 2012 11:29:56 -0600",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update join performance issues"
},
{
"msg_contents": "Kevin Kempter <[email protected]> wrote:\n \n> update test_one\n> set f_key = t.f_key\n> from\n> upd_temp1 t,\n> test_one t2\n> where\n> t.id_number = t2.id_number\n \nAs written above, it is joining the two table references in the FROM\nclause and updating every row in test_one with every row in the JOIN\n-- which is probably not what you want. Having a FROM clause on an\nUPDATE statement is not something which is covered by the standard,\nand different products have implemented different semantics for\nthat. For example, under MS SQL Server, the first reference in the\nFROM clause to the target of the UPDATE is considered to be the same\nreference; so the above statement would be accepted, but do\nsomething very different.\n \nYou probably want this:\n \nupdate test_one t2\nset f_key = t.f_key\nfrom\n upd_temp1 t\nwhere\n t.id_number = t2.id_number\n \n-Kevin\n",
"msg_date": "Tue, 03 Apr 2012 12:37:50 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update join performance issues"
},
{
"msg_contents": "\n\nOn 04/03/2012 01:29 PM, Kevin Kempter wrote:\n> Hi All;\n>\n> I have a query that wants to update a table based on a join like this:\n>\n> update test_one\n> set f_key = t.f_key\n> from\n> upd_temp1 t,\n> test_one t2\n> where\n> t.id_number = t2.id_number\n\n\nWhy is test_one in the from clause? update joins whatever is in the from \nclause to the table being updated. You almost never need it repeated in \nthe from clause.\n\n\ncheers\n\nandrew\n\n\n\n",
"msg_date": "Tue, 03 Apr 2012 13:39:05 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update join performance issues"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> wrote:\n \n> Why is test_one in the from clause? update joins whatever is in\n> the from clause to the table being updated. You almost never need\n> it repeated in the from clause.\n \nThis is actually one of the nastier \"gotchas\" in converting from\nSybase ASE or MS SQL Server to PostgreSQL -- there are syntactically\nidentical UPDATE statements with very different semantics when a\nFROM clause is used in an UPDATE statement. You need to do what the\nOP was showing to use an alias with the target table under those\nother products.\n \nI suppose it might be possible to generate a warning when it appears\nthat someone is making this mistake, but it wouldn't be easy and\nwould probably not be worth the carrying cost. The test would need\nto be something like:\n \n(1) The relation which is the target of the UPDATE has no alias.\n(2) There is a FROM clause which included the target relation (with\n an alias).\n(3) There aren't any joining references between the UPDATE target\n and the relation(s) in the FROM clause.\n \n-Kevin\n",
"msg_date": "Tue, 03 Apr 2012 12:49:54 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update join performance issues"
},
{
"msg_contents": "Kevin Kempter wrote on 03.04.2012 19:29:\n> Hi All;\n>\n> I have a query that wants to update a table based on a join like this:\n>\n> update test_one\n> set f_key = t.f_key\n> from\n> upd_temp1 t,\n> test_one t2\n> where\n> t.id_number = t2.id_number\n>\n> upd_temp1 has 248,762 rows\n> test_one has 248,762 rows\n>\n\nTo extend on what Kevin has already answere:\n\nQuote from the manual:\n \"Note that the target table must not appear in the from_list, unless you intend a self-join\"\n\n\n",
"msg_date": "Tue, 03 Apr 2012 19:51:21 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update join performance issues"
},
{
"msg_contents": "On Tue, Apr 3, 2012 at 12:29 PM, Kevin Kempter\n<[email protected]> wrote:\n> Hi All;\n>\n> I have a query that wants to update a table based on a join like this:\n>\n> update test_one\n> set f_key = t.f_key\n> from\n> upd_temp1 t,\n> test_one t2\n> where\n> t.id_number = t2.id_number\n>\n> upd_temp1 has 248,762 rows\n> test_one has 248,762 rows\n>\n> test_one has an index on f_key and an index on id_number\n> upd_temp1 has an index on id_number\n>\n>\n> The explain plan looks like this:\n> Update (cost=0.00..3212284472.90 rows=256978208226 width=121)\n> -> Nested Loop (cost=0.00..3212284472.90 rows=256978208226 width=121)\n> -> Merge Join (cost=0.00..51952.68 rows=1033028 width=20)\n> Merge Cond: ((t.id_number)::text = (t2.id_number)::text)\n> -> Index Scan using idx_tmp_001a on upd_temp1 t\n> (cost=0.00..12642.71 rows=248762 width=\n> 52)\n> -> Materialize (cost=0.00..23814.54 rows=248762 width=17)\n> -> Index Scan using index_idx1 on test_one t2\n> (cost=0.00..23192.64 rows\n> =248762 width=17)\n> -> Materialize (cost=0.00..6750.43 rows=248762 width=101)\n> -> Seq Scan on test_one (cost=0.00..5506.62 rows=248762\n> width=101)\n> (9 rows)\n>\n>\n> The update never finishes, we always stop it after about 30min to an hour.\n>\n> Anyone have any thoughts per boosting performance?\n\nto add:\n\nreading explain output is an art form all onto itself but the\nfollowing is a giant screaming red flag:\nrows=256978208226\n\nunless of course you're trying to update that many rows, this is\ntelling you that there is an unconstrained join in there somewhere as\nothers have noted.\n\nmerlin\n",
"msg_date": "Tue, 3 Apr 2012 15:43:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update join performance issues"
}
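As a practical follow-up to Merlin's point about the screaming row estimate: a plain EXPLAIN (no ANALYZE) is a cheap way to sanity-check the statement before running it. This sketch assumes the corrected form of the UPDATE from earlier in the thread; the top node's estimated rows should be close to the 248,762 rows in the table, not hundreds of billions.

    -- Planning only, nothing is executed.
    EXPLAIN
    UPDATE test_one t2
       SET f_key = t.f_key
      FROM upd_temp1 t
     WHERE t.id_number = t2.id_number;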
] |
[
{
"msg_contents": "Howdy,\n\nWhat is/is there a replacement for pg_autovacuum in PG9.0+ ? \n\nI haven't had much luck looking for it in the docs. \n\nThanks!\n\nDave\n",
"msg_date": "Tue, 3 Apr 2012 18:36:33 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_autovacuum in PG9.x"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David Kerr\nSent: Wednesday, 4 April 2012 11:37 AM\nTo: [email protected]\nSubject: [PERFORM] pg_autovacuum in PG9.x\n\nHowdy,\n\nWhat is/is there a replacement for pg_autovacuum in PG9.0+ ? \n\nI haven't had much luck looking for it in the docs. \n\nThanks!\n\nDave\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nHi Dave,\nIt's part of core now: http://www.postgresql.org/docs/9.1/static/routine-vacuuming.html#AUTOVACUUM\n",
"msg_date": "Wed, 4 Apr 2012 01:40:57 +0000",
"msg_from": "Brett Mc Bride <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_autovacuum in PG9.x"
},
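For anyone migrating settings that used to live in the old pg_autovacuum catalog table, a rough sketch of the in-core equivalents in 9.0+; the table name and thresholds below are made-up examples, not values from this thread:

    -- Confirm the autovacuum launcher and its global settings.
    SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';

    -- Per-table overrides are now ordinary storage parameters.
    ALTER TABLE big_busy_table
      SET (autovacuum_vacuum_scale_factor = 0.05,
           autovacuum_analyze_scale_factor = 0.02);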
{
"msg_contents": "On 04/03/2012 06:40 PM, Brett Mc Bride wrote:\n\n > Hi Dave,\n > It's part of core now: http://www.postgresql.org/docs/9.1/static \n/routine-vacuuming.html#AUTOVACUUM\n\nAH awesome, thanks.\n",
"msg_date": "Tue, 03 Apr 2012 18:51:30 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_autovacuum in PG9.x"
}
] |
[
{
"msg_contents": "Hi All,\n\n\nI am new in using postgresSQL, I now support a system that been\nrunning on postgressql. Recently I found that the database are\nconsuming the diskspace rapidly, it starting from 9GB and it now grow\nuntil 40GB within 4-5 month.\n\nI try to do a full vacuum to the database but then i get this error\n\nNOTICE: number of page slots needed (1277312) exceeds max_fsm_pages\n(819200)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\"\nto a value over 1277312.\nVACUUM\n\nI did a vacuum verbose.\npostgres=# vacuum verbose;\n\nand below is the result i got.\n\nINFO: free space map contains 1045952 pages in 1896 relations\nDETAIL: A total of 819200 page slots are in use (including overhead).\n1114192 page slots are required to track all free space.\nCurrent limits are: 819200 page slots, 2000 relations, using 5007 kB.\nNOTICE: number of page slots needed (1114192) exceeds max_fsm_pages\n(819200)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\"\nto a value over 1114192.\nVACUUM\n\nAs from the postgres documentation, it was advice to set it to 20K to\n200K which my current setting is set to 819200 which also over 200K\nalready, so i just wonder what is the max number that i can set for\nthe max_fsm_pages?\n\nIs that any impact if i set the value to over 2M ?\n\nThanks.\n\nRegards,\nChio Chuan\n",
"msg_date": "Wed, 4 Apr 2012 02:22:01 -0700 (PDT)",
"msg_from": "ahchuan <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql.conf setting for max_fsm_pages"
},
{
"msg_contents": "On 04/04/2012 05:22 AM, ahchuan wrote:\n> Hi All,\n>\n>\n> I am new in using postgresSQL, I now support a system that been\n> running on postgressql. Recently I found that the database are\n> consuming the diskspace rapidly, it starting from 9GB and it now grow\n> until 40GB within 4-5 month.\n>\n> I try to do a full vacuum to the database but then i get this error\n>\n> NOTICE: number of page slots needed (1277312) exceeds max_fsm_pages\n> (819200)\n> HINT: Consider increasing the configuration parameter \"max_fsm_pages\"\n> to a value over 1277312.\n> VACUUM\nIf you using max_fsm_pages, you are using the version 8.3. I recommend \nto you that\nyou should update your system to major version. In PostgreSQL 9.0, for \nexample, VACUUM\nFULL was rewritten and it does a better job.\nTry to use autovacumm = on always\n\n> I did a vacuum verbose.\n> postgres=# vacuum verbose;\n>\n> and below is the result i got.\n>\n> INFO: free space map contains 1045952 pages in 1896 relations\n> DETAIL: A total of 819200 page slots are in use (including overhead).\n> 1114192 page slots are required to track all free space.\n> Current limits are: 819200 page slots, 2000 relations, using 5007 kB.\n> NOTICE: number of page slots needed (1114192) exceeds max_fsm_pages\n> (819200)\n> HINT: Consider increasing the configuration parameter \"max_fsm_pages\"\n> to a value over 1114192.\n> VACUUM\n\n\nAs from the postgres documentation, it was advice to set it to 20K to\n200K which my current setting is set to 819200 which also over 200K\nalready, so i just wonder what is the max number that i can set for\nthe max_fsm_pages?\n\nMy advice that you have to test your environment with a double value to \n1114192,\npostgres# SET max_fsm_pages = 2228384; if you need to use 8.3 versions yet.\n\nBut, again, you should upgrade your system to major version. There are a \nlot of performance improvements\nin the new versions.\n\n>\n> Is that any impact if i set the value to over 2M ?\n>\n> Thanks.\n>\n> Regards,\n> Chio Chuan\n>\n\n-- \nMarcos Luis Ort�z Valmaseda (@marcosluis2186)\n Data Engineer at UCI\n http://marcosluis2186.posterous.com\n\n\n\n10mo. ANIVERSARIO DE LA CREACION DE LA UNIVERSIDAD DE LAS CIENCIAS INFORMATICAS...\nCONECTADOS AL FUTURO, CONECTADOS A LA REVOLUCION\n\nhttp://www.uci.cu\nhttp://www.facebook.com/universidad.uci\nhttp://www.flickr.com/photos/universidad_uci\n\n\n\n\n\n\n\n On 04/04/2012 05:22 AM, ahchuan wrote:\n \nHi All,\n\n\nI am new in using postgresSQL, I now support a system that been\nrunning on postgressql. Recently I found that the database are\nconsuming the diskspace rapidly, it starting from 9GB and it now grow\nuntil 40GB within 4-5 month.\n\n\n\n\n\nI try to do a full vacuum to the database but then i get this error\n\nNOTICE: number of page slots needed (1277312) exceeds max_fsm_pages\n(819200)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\"\nto a value over 1277312.\nVACUUM\n\n\nIf you using max_fsm_pages, you are using the\n version 8.3. I recommend to you that\n you should update your system to major version. 
In PostgreSQL 9.0,\n for example, VACUUM\n FULL was rewritten and it does a better job.\n Try to use autovacumm = on always\n\n\n\nI did a vacuum verbose.\npostgres=# vacuum verbose;\n\nand below is the result i got.\n\nINFO: free space map contains 1045952 pages in 1896 relations\nDETAIL: A total of 819200 page slots are in use (including overhead).\n1114192 page slots are required to track all free space.\nCurrent limits are: 819200 page slots, 2000 relations, using 5007 kB.\nNOTICE: number of page slots needed (1114192) exceeds max_fsm_pages\n(819200)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\"\nto a value over 1114192.\nVACUUM\n\n\n\n\nAs from the postgres documentation, it was advice to set it to 20K to\n200K which my current setting is set to 819200 which also over 200K\nalready, so i just wonder what is the max number that i can set for\nthe max_fsm_pages?\nMy advice that you have to test your environment\n with a double value to 1114192, \n postgres# SET max_fsm_pages = 2228384; if you need to use 8.3\n versions yet.\n\n But, again, you should upgrade your system to major version. There\n are a lot of performance improvements \n in the new versions.\n\n\n\n\nIs that any impact if i set the value to over 2M ?\n\nThanks.\n\nRegards,\nChio Chuan\n\n\n\n\n-- \nMarcos Luis Ortíz Valmaseda (@marcosluis2186)\n Data Engineer at UCI\n http://marcosluis2186.posterous.com",
"msg_date": "Thu, 05 Apr 2012 09:56:44 -0400",
"msg_from": "Marcos Ortiz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf setting for max_fsm_pages"
},
{
"msg_contents": "On Wed, Apr 4, 2012 at 3:22 AM, ahchuan <[email protected]> wrote:\n> Hi All,\n>\n>\n> I am new in using postgresSQL, I now support a system that been\n> running on postgressql. Recently I found that the database are\n> consuming the diskspace rapidly, it starting from 9GB and it now grow\n> until 40GB within 4-5 month.\n>\n> I try to do a full vacuum to the database but then i get this error\n>\n> NOTICE: number of page slots needed (1277312) exceeds max_fsm_pages\n> (819200)\n> HINT: Consider increasing the configuration parameter \"max_fsm_pages\"\n> to a value over 1277312.\n> VACUUM\n\nI assume you're on 8.3 or earlier. since 8.3 is going into retirement\nsoon, you'd be well served to look at upgrading.\n\n> As from the postgres documentation, it was advice to set it to 20K to\n> 200K which my current setting is set to 819200 which also over 200K\n> already, so i just wonder what is the max number that i can set for\n> the max_fsm_pages?\n\nThe docs are just a guideline for nominal databases.\n\n> Is that any impact if i set the value to over 2M ?\n\nThe fsm uses 6 bytes of memory for each entry, so 2M = 12Megabytes,\nI'm sure you can spare that much shared memory. I've run it at 10M or\nhigher before on production 8.3 servers.\n\nThe key is to make sure your vacuuming is aggresive enough. Even in\n8.4 and above, where the fsm went away, if autovacuum isn't running or\nisn't aggressive enough you'll get lots of dead space and bloat.\n\nLook at the autovacuum_vacuum_cost_[delay|limit] settings.\n",
"msg_date": "Thu, 5 Apr 2012 08:38:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf setting for max_fsm_pages"
}
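A small, hedged sketch for the "is vacuuming keeping up?" question raised above: the statistics views can show which tables are accumulating dead rows and when they were last vacuumed. The column names are as I recall them for 8.3 and later (n_live_tup / n_dead_tup); verify against your own version before relying on this.

    -- Tables with the most dead tuples are the first candidates for
    -- more frequent or less throttled (auto)vacuum.
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 10;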
] |
[
{
"msg_contents": "Hi All\n\nI have a query where the planner makes a wrong cost estimate, it looks\nlike it underestimates the cost of a \"Bitmap Heap Scan\" compared to an\n\"Index Scan\".\n\nThis it the two plans, I have also pasted them below:\n Slow (189ms): http://explain.depesz.com/s/2Wq\n Fast (21ms): http://explain.depesz.com/s/ThQ\n\nI have run \"VACUUM FULL VERBOSE ANALYZE\". I have configured\nshared_buffers and effective_cache_size, that didn't solve my problem,\nthe estimates was kept the same and both queries got faster.\n\nWhat can I do to fix the cost estimate?\n\nRegards,\nKim Hansen\n\n\n========\n\nyield=> SELECT version();\n version\n-------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by\ngcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\n(1 row)\n\nyield=> explain analyze select \"filtered_demands\".\"pol\" as \"c0\" from\n\"demands\".\"filtered_demands\" as \"filtered_demands\" where\n(\"filtered_demands\".\"pod\" = 'VELAG') group by \"filtered_demands\".\"pol\"\norder by \"filtered_demands\".\"pol\" ASC NULLS LAST;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=38564.80..38564.80 rows=2 width=6) (actual\ntime=188.987..189.003 rows=221 loops=1)\n Sort Key: pol\n Sort Method: quicksort Memory: 35kB\n -> HashAggregate (cost=38564.77..38564.79 rows=2 width=6) (actual\ntime=188.796..188.835 rows=221 loops=1)\n -> Bitmap Heap Scan on filtered_demands\n(cost=566.23..38503.77 rows=24401 width=6) (actual time=6.501..182.634\nrows=18588 loops=1)\n Recheck Cond: (pod = 'VELAG'::text)\n -> Bitmap Index Scan on filtered_demands_pod_pol_idx\n(cost=0.00..560.12 rows=24401 width=0) (actual time=4.917..4.917\nrows=18588 loops=1)\n Index Cond: (pod = 'VELAG'::text)\n Total runtime: 189.065 ms\n(9 rows)\n\nyield=> set enable_bitmapscan = false;\nSET\nyield=> explain analyze select \"filtered_demands\".\"pol\" as \"c0\" from\n\"demands\".\"filtered_demands\" as \"filtered_demands\" where\n(\"filtered_demands\".\"pod\" = 'VELAG') group by \"filtered_demands\".\"pol\"\norder by \"filtered_demands\".\"pol\" ASC NULLS LAST;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.00..76534.33 rows=2 width=6) (actual\ntime=0.028..20.823 rows=221 loops=1)\n -> Index Scan using filtered_demands_pod_pol_idx on\nfiltered_demands (cost=0.00..76473.33 rows=24401 width=6) (actual\ntime=0.027..17.174 rows=18588 loops=1)\n Index Cond: (pod = 'VELAG'::text)\n Total runtime: 20.877 ms\n(4 rows)\n\nyield=>\n\n-- \nKim Rydhof Thor Hansen\nVadgårdsvej 3, 2. tv.\n2860 Søborg\nPhone: +45 3091 2437\n",
"msg_date": "Wed, 4 Apr 2012 15:47:17 +0200",
"msg_from": "Kim Hansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner selects slow \"Bitmap Heap Scan\" when \"Index Scan\" is faster"
},
{
"msg_contents": "Kim Hansen <[email protected]> wrote:\n \n> I have a query where the planner makes a wrong cost estimate, it\n> looks like it underestimates the cost of a \"Bitmap Heap Scan\"\n> compared to an \"Index Scan\".\n \n> What can I do to fix the cost estimate?\n \nCould you try running the query with cpu_tuple_cost = 0.05 and let\nus know how that goes?\n \n-Kevin\n",
"msg_date": "Thu, 05 Apr 2012 10:34:36 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects slow \"Bitmap Heap Scan\" when\n\t\"Index Scan\" is faster"
},
{
"msg_contents": "On Thu, Apr 5, 2012 at 17:34, Kevin Grittner\n<[email protected]> wrote:\n> Kim Hansen <[email protected]> wrote:\n>\n>> I have a query where the planner makes a wrong cost estimate, it\n>> looks like it underestimates the cost of a \"Bitmap Heap Scan\"\n>> compared to an \"Index Scan\".\n>\n>> What can I do to fix the cost estimate?\n>\n> Could you try running the query with cpu_tuple_cost = 0.05 and let\n> us know how that goes?\n>\n\nIt looks like it just increased the estimated cost of both queries by\nabout 1000.\n\nRegards,\nKim\n\n\n===============\n\nyield=> explain analyze select \"filtered_demands\".\"pol\" as \"c0\" from\n\"demands\".\"filtered_demands\" as \"filtered_demands\" where\n(\"filtered_demands\".\"pod\" = 'VELAG') group by \"filtered_demands\".\"pol\"\norder by \"filtered_demands\".\"pol\" ASC NULLS LAST;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=39540.92..39540.92 rows=2 width=6) (actual\ntime=186.833..186.858 rows=221 loops=1)\n Sort Key: pol\n Sort Method: quicksort Memory: 35kB\n -> HashAggregate (cost=39540.81..39540.91 rows=2 width=6) (actual\ntime=186.643..186.678 rows=221 loops=1)\n -> Bitmap Heap Scan on filtered_demands\n(cost=566.23..39479.81 rows=24401 width=6) (actual time=6.154..180.654\nrows=18588 loops=1)\n Recheck Cond: (pod = 'VELAG'::text)\n -> Bitmap Index Scan on filtered_demands_pod_pol_idx\n(cost=0.00..560.12 rows=24401 width=0) (actual time=4.699..4.699\nrows=18588 loops=1)\n Index Cond: (pod = 'VELAG'::text)\n Total runtime: 186.912 ms\n(9 rows)\n\nyield=> set enable_bitmapscan = false;\nSET\nyield=> explain analyze select \"filtered_demands\".\"pol\" as \"c0\" from\n\"demands\".\"filtered_demands\" as \"filtered_demands\" where\n(\"filtered_demands\".\"pod\" = 'VELAG') group by \"filtered_demands\".\"pol\"\norder by \"filtered_demands\".\"pol\" ASC NULLS LAST;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.00..77510.37 rows=2 width=6) (actual\ntime=0.029..20.361 rows=221 loops=1)\n -> Index Scan using filtered_demands_pod_pol_idx on\nfiltered_demands (cost=0.00..77449.37 rows=24401 width=6) (actual\ntime=0.027..16.859 rows=18588 loops=1)\n Index Cond: (pod = 'VELAG'::text)\n Total runtime: 20.410 ms\n(4 rows)\n\nyield=>\n\n\n-- \nKim Rydhof Thor Hansen\nVadgårdsvej 3, 2. tv.\n2860 Søborg\nPhone: +45 3091 2437\n",
"msg_date": "Thu, 5 Apr 2012 18:01:16 +0200",
"msg_from": "Kim Hansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects slow \"Bitmap Heap Scan\" when \"Index\n\tScan\" is faster"
},
{
"msg_contents": "On Wed, Apr 4, 2012 at 6:47 AM, Kim Hansen <[email protected]> wrote:\n> Hi All\n>\n> I have a query where the planner makes a wrong cost estimate, it looks\n> like it underestimates the cost of a \"Bitmap Heap Scan\" compared to an\n> \"Index Scan\".\n>\n> This it the two plans, I have also pasted them below:\n> Slow (189ms): http://explain.depesz.com/s/2Wq\n> Fast (21ms): http://explain.depesz.com/s/ThQ\n\nCould you do explain (analyze, buffers)?\n\nDid you run these queries multiple times in both orders? If you just\nran them once each, in the order indicated, then the bitmap scan may\nhave done the hard work of reading all the needed buffers into cache,\nand the index scan then got to enjoy that cache.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 6 Apr 2012 10:11:37 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects slow \"Bitmap Heap Scan\" when \"Index\n\tScan\" is faster"
},
{
"msg_contents": "Hi all\n\nOn Fri, Apr 6, 2012 at 19:11, Jeff Janes <[email protected]> wrote:\n> On Wed, Apr 4, 2012 at 6:47 AM, Kim Hansen <[email protected]> wrote:\n>> Hi All\n>>\n>> I have a query where the planner makes a wrong cost estimate, it looks\n>> like it underestimates the cost of a \"Bitmap Heap Scan\" compared to an\n>> \"Index Scan\".\n>>\n>> This it the two plans, I have also pasted them below:\n>> Slow (189ms): http://explain.depesz.com/s/2Wq\n>> Fast (21ms): http://explain.depesz.com/s/ThQ\n>\n> Could you do explain (analyze, buffers)?\n\nI have done that now, the log is pasted in below. It looks like every\nbuffer fetched is a hit, I would think that PostgreSQL should know\nthat as almost nothing happens on the server and effective_cache_size\nis configured to 8GB.\n\n> Did you run these queries multiple times in both orders? If you just\n> ran them once each, in the order indicated, then the bitmap scan may\n> have done the hard work of reading all the needed buffers into cache,\n> and the index scan then got to enjoy that cache.\n\nI have run the queries a few times in order to warm up the caches, the\nqueries stabilise on 20ms and 180ms.\n\nRegards,\nKim\n\n========\n\nyield=> explain (analyze,buffers) select \"filtered_demands\".\"pol\" as\n\"c0\" from \"demands\".\"filtered_demands\" as \"filtered_demands\" where\n(\"filtered_demands\".\"pod\" = 'VELAG') group by \"filtered_demands\".\"pol\"\norder by \"filtered_demands\".\"pol\" ASC NULLS LAST;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=38564.80..38564.80 rows=2 width=6) (actual\ntime=185.497..185.520 rows=221 loops=1)\n Sort Key: pol\n Sort Method: quicksort Memory: 35kB\n Buffers: shared hit=14969\n -> HashAggregate (cost=38564.77..38564.79 rows=2 width=6) (actual\ntime=185.303..185.343 rows=221 loops=1)\n Buffers: shared hit=14969\n -> Bitmap Heap Scan on filtered_demands\n(cost=566.23..38503.77 rows=24401 width=6) (actual time=6.119..179.056\nrows=18588 loops=1)\n Recheck Cond: (pod = 'VELAG'::text)\n Buffers: shared hit=14969\n -> Bitmap Index Scan on filtered_demands_pod_pol_idx\n(cost=0.00..560.12 rows=24401 width=0) (actual time=4.661..4.661\nrows=18588 loops=1)\n Index Cond: (pod = 'VELAG'::text)\n Buffers: shared hit=74\n Total runtime: 185.577 ms\n(13 rows)\n\nyield=> set enable_bitmapscan = false;\nSET\nyield=> explain (analyze,buffers) select \"filtered_demands\".\"pol\" as\n\"c0\" from \"demands\".\"filtered_demands\" as \"filtered_demands\" where\n(\"filtered_demands\".\"pod\" = 'VELAG') group by \"filtered_demands\".\"pol\"\norder by \"filtered_demands\".\"pol\" ASC NULLS LAST;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.00..76534.33 rows=2 width=6) (actual\ntime=0.029..20.202 rows=221 loops=1)\n Buffers: shared hit=18386\n -> Index Scan using filtered_demands_pod_pol_idx on\nfiltered_demands (cost=0.00..76473.33 rows=24401 width=6) (actual\ntime=0.027..16.455 rows=18588 loops=1)\n Index Cond: (pod = 'VELAG'::text)\n Buffers: shared hit=18386\n Total runtime: 20.246 ms\n(6 rows)\n\n\n\n-- \nKim Rydhof Thor Hansen\nVadgårdsvej 3, 2. tv.\n2860 Søborg\nPhone: +45 3091 2437\n",
"msg_date": "Sat, 7 Apr 2012 00:09:36 +0200",
"msg_from": "Kim Hansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects slow \"Bitmap Heap Scan\" when \"Index\n\tScan\" is faster"
},
{
"msg_contents": "On Fri, Apr 6, 2012 at 3:09 PM, Kim Hansen <[email protected]> wrote:\n> Hi all\n>\n> On Fri, Apr 6, 2012 at 19:11, Jeff Janes <[email protected]> wrote:\n>> On Wed, Apr 4, 2012 at 6:47 AM, Kim Hansen <[email protected]> wrote:\n>>> Hi All\n>>>\n>>> I have a query where the planner makes a wrong cost estimate, it looks\n>>> like it underestimates the cost of a \"Bitmap Heap Scan\" compared to an\n>>> \"Index Scan\".\n>>>\n>>> This it the two plans, I have also pasted them below:\n>>> Slow (189ms): http://explain.depesz.com/s/2Wq\n>>> Fast (21ms): http://explain.depesz.com/s/ThQ\n>>\n>> Could you do explain (analyze, buffers)?\n>\n> I have done that now, the log is pasted in below. It looks like every\n> buffer fetched is a hit, I would think that PostgreSQL should know\n> that as almost nothing happens on the server and effective_cache_size\n> is configured to 8GB.\n\nThat almost nothing happens on the server does not enter into it. It\nwould need to know whether the last thing that did happen (no matter\nhow long ago that was) touched the same data that the current query\nneeds to touch.\n\neffective_cache_size is only used when it is anticipated that the same\nblocks will be accessed repeatedly *within the same query*.\nIt is not used to estimate reuse between different queries.\n\n>\n>> Did you run these queries multiple times in both orders? If you just\n>> ran them once each, in the order indicated, then the bitmap scan may\n>> have done the hard work of reading all the needed buffers into cache,\n>> and the index scan then got to enjoy that cache.\n>\n> I have run the queries a few times in order to warm up the caches, the\n> queries stabilise on 20ms and 180ms.\n\nMy first curiosity is not why the estimate is too good for Bitmap\nIndex Scan, but rather why the actual execution is too poor. As far\nas I can see the only explanation for the poor execution is that the\nbitmap scan has gone lossy, so that every tuple in every touched block\nneeds to be rechecked against the where clause. If that is the case,\nit suggests that your work_mem is quite small.\n\nIn 9.2, explain analyze will report the number of tuples filtered out\nby rechecking, but that isn't reported in your version.\n\nIt looks like the planner makes no attempt to predict when a bitmap\nscan will go lossy and then penalize it for the extra rechecks it will\ndo. Since it doesn't know it will be carrying out those extra checks,\nyou can't just increase the tuple or operator costs factors.\n\nSo that may explain why the bitmap is not getting penalized for its\nextra CPU time. But that doesn't explain why the estimated cost is\nsubstantially lower than the index scan. That is probably because the\nbitmap scan expects it is doing more sequential IO and less random IO.\n You could cancel that advantage be setting random_page_cost to about\nthe same as seq_page_cost (which since you indicated most data will be\ncached, would be an appropriate thing to do regardless of this\nspecific issue).\n\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 9 Apr 2012 19:59:00 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects slow \"Bitmap Heap Scan\" when \"Index\n\tScan\" is faster"
},
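Jeff's two suggestions can be tried per session before touching postgresql.conf; this is only an illustrative sketch using the table from the thread, and the exact values (10MB, 1.0) are placeholders to experiment with, not recommendations:

    -- Larger work_mem keeps the bitmap from going lossy;
    -- random_page_cost close to seq_page_cost reflects a mostly cached data set.
    SET work_mem = '10MB';
    SET random_page_cost = 1.0;
    EXPLAIN ANALYZE
    SELECT DISTINCT pol
      FROM demands.filtered_demands
     WHERE pod = 'VELAG';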
{
"msg_contents": "On Tue, Apr 10, 2012 at 04:59, Jeff Janes <[email protected]> wrote:\n> On Fri, Apr 6, 2012 at 3:09 PM, Kim Hansen <[email protected]> wrote:\n>\n>> I have run the queries a few times in order to warm up the caches, the\n>> queries stabilise on 20ms and 180ms.\n>\n> My first curiosity is not why the estimate is too good for Bitmap\n> Index Scan, but rather why the actual execution is too poor. As far\n> as I can see the only explanation for the poor execution is that the\n> bitmap scan has gone lossy, so that every tuple in every touched block\n> needs to be rechecked against the where clause. If that is the case,\n> it suggests that your work_mem is quite small.\n>\n> In 9.2, explain analyze will report the number of tuples filtered out\n> by rechecking, but that isn't reported in your version.\n>\n> It looks like the planner makes no attempt to predict when a bitmap\n> scan will go lossy and then penalize it for the extra rechecks it will\n> do. Since it doesn't know it will be carrying out those extra checks,\n> you can't just increase the tuple or operator costs factors.\n\nYou are right, when I increase the work_mem from 1MB to 2MB the time\ndecreases from 180ms to 30ms for the slow query. I have now configured\nthe server to 10MB work_mem.\n\n> So that may explain why the bitmap is not getting penalized for its\n> extra CPU time. But that doesn't explain why the estimated cost is\n> substantially lower than the index scan. That is probably because the\n> bitmap scan expects it is doing more sequential IO and less random IO.\n> You could cancel that advantage be setting random_page_cost to about\n> the same as seq_page_cost (which since you indicated most data will be\n> cached, would be an appropriate thing to do regardless of this\n> specific issue).\n\nI have set seq_page_cost and random_page_cost to 0.1 in order to\nindicate that data is cached, the system now selects the faster index\nscan.\n\nThanks for your help,\n-- \nKim Rydhof Thor Hansen\nVadgårdsvej 3, 2. tv.\n2860 Søborg\nPhone: +45 3091 2437\n",
"msg_date": "Tue, 10 Apr 2012 11:55:46 +0200",
"msg_from": "Kim Hansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects slow \"Bitmap Heap Scan\" when \"Index\n\tScan\" is faster"
}
] |
[
{
"msg_contents": "Hi list!\r\n\r\ni have a table which has 8500000 rows records. i write a java program to update these records. \r\ni use 100 threads to update the records. For example, thread-1 update 1~85000 records; thread-2 update 85001~170000 and so on.\r\nThe update sql's aim is remove the space in the column and it is simple: update poi set py=replace(py,' ','') where id=?;\r\n\r\nBy the program log, i find the database of processing data speed so slow, per thread updating 1000 rows need take 260s.\r\n\r\nBTW: The PG Server is running on a PC. The PC's total memory is 2G and CPU is \"Intel(R) Core(TM)2 Duo E7500 2.93GHz\".\r\nWhen the program's running, CPU just be used 1% and Memory left 112MB.\r\n\r\nIs the PC configuration too low cause the problem ?\r\n\r\nPlease help ~~\r\n\r\n\r\nThanks for any tips,\r\n\r\n\r\n\r\nsuperman0920\n\n\n\n\n\n\n\n\n\n\n\nHi list!\n \n\ni have a table which has 8500000 rows records. i write a java program \nto update these records. \ni use 100 threads to update the records. For example, thread-1 update \n1~85000 records; thread-2 update 85001~170000 and so on.\nThe update sql's aim is remove the space in the column and it \nis simple: update poi set py=replace(py,' \n','') where id=?;\n \nBy the program log, i find the database of \nprocessing data speed so slow, per thread updating 1000 rows need take \n260s.\n \nBTW: The PG Server is running on a \nPC. The PC's total memory is 2G and CPU is \"Intel(R) Core(TM)2 Duo E7500 \n2.93GHz\".\nWhen the program's running, CPU just be used \n1% and Memory left 112MB.\n \nIs the PC configuration too low cause the problem ?\n \nPlease help ~~\n \n \nThanks \nfor any tips,\n\nsuperman0920",
"msg_date": "Wed, 4 Apr 2012 23:52:51 +0800",
"msg_from": "superman0920 <[email protected]>",
"msg_from_op": true,
"msg_subject": "about multiprocessingmassdata"
},
{
"msg_contents": "On 4.4.2012 17:52, superman0920 wrote:\n> Hi list!\n>\n> i have a table which has 8500000 rows records. i write a java program to\n> update these records.\n> i use 100 threads to update the records. For example, thread-1 update\n> 1~85000 records; thread-2 update 85001~170000 and so on.\n> The update sql's aim is remove the space in the column and it is simple:\n> update poi set py=replace(py,' ','') where id=?;\n\nThat's a very naive approach. It's very likely each thread will do an\nindex scan for each update (to evaluate the 'id=?' condition. And that's\ngoing to cost you much more than you gain because index scans are quite\nCPU and I/O intensive.\n\nSimply update the whole table by\n\n UPDATE poi SET py = replace(py, ' ','');\n\nHave you actually tried how this performs or did you guess 'it's\ndefinitely going to be very slow so I'll use multiple threads to make\nthat faster'?\n\nIf you really need to parallelize this, you need to do that differently\n- e.g. use 'ctid' to skip to update a whole page like this:\n\n UPDATE poi SET py = replace(py, ' ','')\n WHERE ctid >= '(n,0)'::tid AND ctid < '(n+1,0)'::tid AND;\n\nwhere 'n' ranges between 0 and number of pages the table (e.g. in pg_class).\n\nBut try the simple UPDATE first, my guess is it's going to be much\nfaster than you expect.\n\nTomas\n",
"msg_date": "Wed, 04 Apr 2012 18:34:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about multiprocessingmassdata"
},
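If the update really must be split across workers, a version-safe variant of the same idea is to slice by primary-key range instead of issuing one statement per row; the slice boundaries below are arbitrary examples, and the id column is assumed to be the integer key mentioned in the original post:

    -- Find the key range to divide among workers.
    SELECT min(id), max(id) FROM poi;

    -- Each worker then updates one contiguous, non-overlapping slice:
    UPDATE poi
       SET py = replace(py, ' ', '')
     WHERE id >= 1 AND id < 85001;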
{
"msg_contents": "On 4.4.2012 18:49, superman0920 wrote:\n> Thank you for your reply\n> I tried executing \"UPDATE poi SET py = replace(py, ' ','');\", that took\n> long long time(about 20+ hours) and no error report. Just like locked.\n\n\nOK, that's weird. So we need a bit more details - what PostgreSQL\nversion is this?\n\nHow much space does the table actually occupy? Try this:\n\nSELECT relname, relpages, reltuples FROM pg_class WHERE relname = 'poi';\n\nAnd finally we need EXPLAIN output for both UPDATE commands. Don't post\nthem here directly - put them to explain.depesz.com and post just the link.\n\nFurther, we need to see the actual table definition. Especially if there\nare any triggers or foreign keys on the table?\n\nTomas\n\n",
"msg_date": "Wed, 04 Apr 2012 18:59:51 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about multiprocessingmassdata"
},
{
"msg_contents": "On 5.4.2012 15:44, superman0920 wrote:\n> Sure, i will post that at tomorrow.\n> \n> Today I install PG and MySQL at a Server. I insert 850000 rows record\n> to each db.\n> I execute \"select count(*) from poi_all_new\" at two db.\n> MySQL takes 0.9s\n> PG takes 364s\n\nFirst of all, keep the list ([email protected]) on the\nCC. You keep responding to me directly, therefore others can't respond\nto your messages (and help you).\n\nAre you sure the comparison was fair, i.e. both machines containing the\nsame amount of data (not number of rows, amount of data), configured\nproperly etc.? Have you used the same table structure (how did you\nrepresent geometry data type in MySQL)?\n\nFor example I bet you're using MyISAM. In that case, it's comparing\napples to oranges (or maybe cats, so different it is). MyISAM does not\ndo any MVCC stuff (visibility checking, ...) and simply reads the number\nof rows from a catalogue. PostgreSQL actually has to scan the whole\ntable - that's a big difference. This is probably the only place where\nMySQL (with MyISAM beats PostgreSQL). But once you switch to a proper\nstorage manager (e.g. InnoDB) it'll have to scan the data just like\nPostgreSQL - try that.\n\nAnyway, this benchmark is rubbish because you're not going to do this\nquery often - use queries that actually make sense for the application.\n\nNevertheless, it seems there's something seriously wrong with your\nmachine or the environment (OS), probably I/O.\n\nI've done a quick test - I've created the table (without the 'geometry'\ncolumn because I don't have postgis installed), filled it with one\nmillion of rows and executed 'select count(*)'. See this:\n\n http://pastebin.com/42cAcCqu\n\nThis is what I get:\n\n======================================================================\ntest=# SELECT pg_size_pretty(pg_relation_size('test_table'));\n pg_size_pretty\n----------------\n 1302 MB\n(1 row)\n\ntest=#\ntest=# \\timing on\nTiming is on.\ntest=#\ntest=# SELECT count(*) from test_table;\n count\n---------\n 1000000\n(1 row)\n\nTime: 2026,695 ms\n======================================================================\n\nso it's running the 'count(*)' in two seconds. If I run it again, I get\nthis:\n\n======================================================================\ntest=# SELECT count(*) from test_table;\n count\n---------\n 1000000\n(1 row)\n\nTime: 270,020 ms\n======================================================================\n\nYes, that's 0,27 seconds. And this is *only* my workstation - Core i5 (4\ncores), 8GB of RAM, nothing special.\n\nThese results obviously depend on the data being available in page\ncache. If that's not the case, PostgreSQL needs to read them from the\ndrive (and then it's basically i/o bound) - I can get about 250 MB/s\nfrom my drives, so I get this:\n\n======================================================================\ntest=# SELECT count(*) from test_table;\n count\n---------\n 1000000\n(1 row)\n\nTime: 5088,739 ms\n======================================================================\n\nIf you have slower drives, the dependency is about linear (half the\nspeed -> twice the time). So either your drives are very slow, or\nthere's something rotten.\n\nI still haven's seen iostat / vmstat output ... that'd tell us much more\nabout the causes.\n\nTomas\n",
"msg_date": "Thu, 05 Apr 2012 16:47:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about multiprocessingmassdata"
},
{
"msg_contents": "Tomas Vondra <[email protected]> wrote:\n> On 5.4.2012 15:44, superman0920 wrote:\n \n>> Today I install PG and MySQL at a Server. I insert 850000 rows\n>> record to each db.\n>> I execute \"select count(*) from poi_all_new\" at two db.\n>> MySQL takes 0.9s\n>> PG takes 364s\n \n> Are you sure the comparison was fair, i.e. both machines\n> containing the same amount of data (not number of rows, amount of\n> data), configured properly etc.?\n \nDon't forget the \"hint bits\" issue -- if the count(*) was run\nimmediately after the load (without a chance for autovacuum to get\nin there), all the data was re-written in place to save hint\ninformation. I remember how confusing that was for me the first\ntime I saw it. It's very easy to get a false impression of overall\nPostgreSQL performance from that type of test, and it's the sort of\ntest a lot of people will do on an ad hoc basis.\n \n-Kevin\n",
"msg_date": "Thu, 05 Apr 2012 10:01:25 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about multiprocessingmassdata"
},
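A concrete way to keep the hint-bit rewrite out of an ad hoc test like this is to let VACUUM set the hint bits once, right after the bulk load and before timing anything. A minimal sketch, using the table name from this thread:

  VACUUM ANALYZE poi_all_new;
  SELECT count(*) FROM poi_all_new;  -- no longer has to rewrite every tuple on its first read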
{
"msg_contents": "On Thu, Apr 5, 2012 at 9:47 AM, Tomas Vondra <[email protected]> wrote:\n> On 5.4.2012 15:44, superman0920 wrote:\n>> Sure, i will post that at tomorrow.\n>>\n>> Today I install PG and MySQL at a Server. I insert 850000 rows record\n>> to each db.\n>> I execute \"select count(*) from poi_all_new\" at two db.\n>> MySQL takes 0.9s\n>> PG takes 364s\n>\n> First of all, keep the list ([email protected]) on the\n> CC. You keep responding to me directly, therefore others can't respond\n> to your messages (and help you).\n>\n> Are you sure the comparison was fair, i.e. both machines containing the\n> same amount of data (not number of rows, amount of data), configured\n> properly etc.? Have you used the same table structure (how did you\n> represent geometry data type in MySQL)?\n>\n> For example I bet you're using MyISAM. In that case, it's comparing\n> apples to oranges (or maybe cats, so different it is). MyISAM does not\n> do any MVCC stuff (visibility checking, ...) and simply reads the number\n> of rows from a catalogue. PostgreSQL actually has to scan the whole\n> table - that's a big difference. This is probably the only place where\n> MySQL (with MyISAM beats PostgreSQL). But once you switch to a proper\n> storage manager (e.g. InnoDB) it'll have to scan the data just like\n> PostgreSQL - try that.\n>\n> Anyway, this benchmark is rubbish because you're not going to do this\n> query often - use queries that actually make sense for the application.\n>\n> Nevertheless, it seems there's something seriously wrong with your\n> machine or the environment (OS), probably I/O.\n>\n> I've done a quick test - I've created the table (without the 'geometry'\n> column because I don't have postgis installed), filled it with one\n> million of rows and executed 'select count(*)'. See this:\n>\n> http://pastebin.com/42cAcCqu\n>\n> This is what I get:\n>\n> ======================================================================\n> test=# SELECT pg_size_pretty(pg_relation_size('test_table'));\n> pg_size_pretty\n> ----------------\n> 1302 MB\n> (1 row)\n>\n> test=#\n> test=# \\timing on\n> Timing is on.\n> test=#\n> test=# SELECT count(*) from test_table;\n> count\n> ---------\n> 1000000\n> (1 row)\n>\n> Time: 2026,695 ms\n> ======================================================================\n>\n> so it's running the 'count(*)' in two seconds. If I run it again, I get\n> this:\n>\n> ======================================================================\n> test=# SELECT count(*) from test_table;\n> count\n> ---------\n> 1000000\n> (1 row)\n>\n> Time: 270,020 ms\n> ======================================================================\n>\n> Yes, that's 0,27 seconds. And this is *only* my workstation - Core i5 (4\n> cores), 8GB of RAM, nothing special.\n>\n> These results obviously depend on the data being available in page\n> cache. If that's not the case, PostgreSQL needs to read them from the\n> drive (and then it's basically i/o bound) - I can get about 250 MB/s\n> from my drives, so I get this:\n>\n> ======================================================================\n> test=# SELECT count(*) from test_table;\n> count\n> ---------\n> 1000000\n> (1 row)\n>\n> Time: 5088,739 ms\n> ======================================================================\n>\n> If you have slower drives, the dependency is about linear (half the\n> speed -> twice the time). So either your drives are very slow, or\n> there's something rotten.\n>\n> I still haven's seen iostat / vmstat output ... 
that'd tell us much more\n> about the causes.\n\nthe geometry column can potentially be quite wide. one thing we need to see\nis whether the table has any indexes -- in particular gist/gin on the\ngeometry.\n\nmerlin\n",
"msg_date": "Mon, 9 Apr 2012 17:37:54 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about multiprocessingmassdata"
},
{
"msg_contents": "On 10.4.2012 00:37, Merlin Moncure wrote:\n> On Thu, Apr 5, 2012 at 9:47 AM, Tomas Vondra <[email protected]> wrote:\n>> If you have slower drives, the dependency is about linear (half the\n>> speed -> twice the time). So either your drives are very slow, or\n>> there's something rotten.\n>>\n>> I still haven's seen iostat / vmstat output ... that'd tell us much more\n>> about the causes.\n> \n> geometry column can potentially quite wide. one thing we need to see\n> is the table has any indexes -- in particular gist/gin on the\n> geometry.\n\nYeah, but in one of the previous posts the OP posted this:\n\nrelname | relpages | reltuples\n-------------+----------+-------------\npoi_all_new | 2421133 | 6.53328e+06\n\nwhich means the table has ~ 19GB for 6.5 million rows, so it's like\n2.8GB per 1 million of rows, i.e. ~3kB per row. I've been working with 1\nmillion rows and 1.3GB of data, so it's like 50% of the expected amount.\n\nBut this does not explain why the SELECT COUNT(*) takes 364 seconds on\nthat machine. That'd mean ~8MB/s.\n\nRegarding the indexes, the the OP already posted a description of the\ntable and apparently there are these indexes:\n\nIndexes:\n \"poi_all_new_pk\" PRIMARY KEY, btree (ogc_fid)\n \"poi_all_new_flname_idx\" btree (flname)\n \"poi_all_new_geom_idx\" btree (wkb_geometry)\n \"poi_all_new_ogc_fid_idx\" btree (ogc_fid)\n \"poi_all_new_pinyin_idx\" btree (pinyin)\n\nSo none of them is GIN/GIST although some one of them is on the geometry\ncolumn.\n\nT.\n",
"msg_date": "Tue, 10 Apr 2012 01:50:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about multiprocessingmassdata"
},
{
"msg_contents": "On Mon, Apr 9, 2012 at 6:50 PM, Tomas Vondra <[email protected]> wrote:\n> On 10.4.2012 00:37, Merlin Moncure wrote:\n>> On Thu, Apr 5, 2012 at 9:47 AM, Tomas Vondra <[email protected]> wrote:\n>>> If you have slower drives, the dependency is about linear (half the\n>>> speed -> twice the time). So either your drives are very slow, or\n>>> there's something rotten.\n>>>\n>>> I still haven's seen iostat / vmstat output ... that'd tell us much more\n>>> about the causes.\n>>\n>> geometry column can potentially quite wide. one thing we need to see\n>> is the table has any indexes -- in particular gist/gin on the\n>> geometry.\n>\n> Yeah, but in one of the previous posts the OP posted this:\n>\n> relname | relpages | reltuples\n> -------------+----------+-------------\n> poi_all_new | 2421133 | 6.53328e+06\n>\n> which means the table has ~ 19GB for 6.5 million rows, so it's like\n> 2.8GB per 1 million of rows, i.e. ~3kB per row. I've been working with 1\n> million rows and 1.3GB of data, so it's like 50% of the expected amount.\n>\n> But this does not explain why the SELECT COUNT(*) takes 364 seconds on\n> that machine. That'd mean ~8MB/s.\n>\n> Regarding the indexes, the the OP already posted a description of the\n> table and apparently there are these indexes:\n>\n> Indexes:\n> \"poi_all_new_pk\" PRIMARY KEY, btree (ogc_fid)\n> \"poi_all_new_flname_idx\" btree (flname)\n> \"poi_all_new_geom_idx\" btree (wkb_geometry)\n> \"poi_all_new_ogc_fid_idx\" btree (ogc_fid)\n> \"poi_all_new_pinyin_idx\" btree (pinyin)\n>\n> So none of them is GIN/GIST although some one of them is on the geometry\n> column.\n\nhm. well, there's a duplicate index in there: ogc_fid is indexed\ntwice. how much bloat is on the table (let's see an ANALYZE VERBOSE)?\n what's the storage for this database?\n\nmerlin\n",
"msg_date": "Tue, 10 Apr 2012 08:21:56 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: about multiprocessingmassdata"
}
] |
[
{
"msg_contents": "Hello,\n\nI have an extremely bad plan for one of my colleague's query. Basically \nPostgreSQL chooses to seq scan instead of index scan. This is on:\n\nantabif=# select version();\n version\n---------------------------------------------------------------------------------------------------------- \n\n PostgreSQL 9.0.7 on amd64-portbld-freebsd8.2, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD], 64-bit\n\nThe machines has 4GB of RAM with the following config:\n- shared_buffers: 512MB\n- effective_cache_size: 2GB\n- work_mem: 32MB\n- maintenance_work_mem: 128MB\n- default_statistics_target: 300\n- temp_buffers: 64MB\n- wal_buffers: 8MB\n- checkpoint_segments = 15\n\nThe tables have been ANALYZE'd. I've put the EXPLAIN ANALYZE on:\n\n- http://www.pastie.org/3731956 : with default config\n- http://www.pastie.org/3731960 : this is with enable_seq_scan = off\n- http://www.pastie.org/3731962 : I tried to play on the various cost \nsettings but it's doesn't change anything, except setting \nrandom_page_cost to 1 (which will lead to bad plans for other queries, \nso not a solution)\n- http://www.pastie.org/3732035 : with enable_hashagg and \nenable_hashjoin to false\n\nI'm currently out of idea why PostgreSQL still chooses a bad plan for \nthis query ... any hint ?\n\nThank you,\nJulien\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Thu, 05 Apr 2012 13:47:33 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": true,
"msg_subject": "bad plan"
},
{
"msg_contents": "Julien Cigar <[email protected]> wrote:\n \n> I tried to play on the various cost settings but it's doesn't\n> change anything, except setting random_page_cost to 1 (which will\n> lead to bad plans for other queries, so not a solution)\n \nYeah, you clearly don't have the active portion of your database\nfully cached, so you don't want random_page_cost to go as low as\nseq_page_cost.\n \nHere's one suggestion to try:\n \nrandom_page_cost = 2\ncpu_tuple_cost = 0.05\n \nI have found that combination to work well for me when the level of\ncaching is about where you're seeing it. I am becoming increasingly\nof the opinion that the default for cpu_tuple_cost should be higher\nthan 0.01.\n \nPlease let us know whether that helps.\n \n-Kevin\n",
"msg_date": "Thu, 05 Apr 2012 09:26:12 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad plan"
},
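Those two settings can be tried out in a single session before touching postgresql.conf; a minimal sketch (after the SETs, re-run EXPLAIN ANALYZE on the problem query and compare the plans):

  SET random_page_cost = 2;
  SET cpu_tuple_cost = 0.05;
  -- ... re-run the problem query here with EXPLAIN ANALYZE ...
  RESET random_page_cost;
  RESET cpu_tuple_cost;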
{
"msg_contents": "On Thu, Apr 5, 2012 at 2:47 PM, Julien Cigar <[email protected]> wrote:\n> - http://www.pastie.org/3731956 : with default config\n> - http://www.pastie.org/3731960 : this is with enable_seq_scan = off\n\nIt looks like the join selectivity of (context_to_context_links,\nancestors) is being overestimated by almost two orders of magnitude.\nThe optimizer thinks that there are 564 rows in the\ncontext_to_context_links table for each taxon_id, while in fact for\nthis query the number is 9. To confirm that this, you can force the\nselectivity estimate to be 200x lower by adding a geo_id = geod_id\nwhere clause to the subquery.\n\nIf it does help, then the next question would be why is the estimate\nso much off. It could be either because the stats for\ncontext_to_context_links.taxon_id are wrong or because\nancestors.taxon_id(subphylum_id = 18830) is a special case. To help\nfiguring this is out, you could run the following to queries and post\nthe results:\n\nSELECT floor(log(num,2)) AS nmatch, COUNT(*) AS freq FROM (SELECT\nCOUNT(*) AS num FROM context_to_context_links GROUP BY taxon_id) AS\ndist GROUP BY 1 ORDER BY 1;\n\nSELECT floor(log(num,2)) AS nmatch, COUNT(*) AS freq FROM (SELECT\nCOUNT(*) AS num FROM context_to_context_links WHERE NOT geo_id IS NULL\nand taxon_id= ANY ( select taxon_id from rab.ancestors where\n ancestors.subphylum_id = 18830) GROUP BY taxon_id) AS dist GROUP BY\n1 ORDER BY 1;\n\nIf the second distribution has a significantly different shape then\ncross column statistics are necessary to get good plans. As it happens\nI'm working on adding this functionality to PostgreSQL and would love\nto hear more details about your use-case to understand if it would be\nsolved by this work.\n\nRegards,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Thu, 5 Apr 2012 22:47:25 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad plan"
},
{
"msg_contents": "On 04/05/2012 21:47, Ants Aasma wrote:\n> On Thu, Apr 5, 2012 at 2:47 PM, Julien Cigar<[email protected]> wrote:\n>> - http://www.pastie.org/3731956 : with default config\n>> - http://www.pastie.org/3731960 : this is with enable_seq_scan = off\n> It looks like the join selectivity of (context_to_context_links,\n> ancestors) is being overestimated by almost two orders of magnitude.\n> The optimizer thinks that there are 564 rows in the\n> context_to_context_links table for each taxon_id, while in fact for\n> this query the number is 9. To confirm that this, you can force the\n> selectivity estimate to be 200x lower by adding a geo_id = geod_id\n> where clause to the subquery.\n\nadding a geo_id = geo_id to the subquery helped a little bit with a \ncpu_tuple_cost of 0.1: http://www.pastie.org/3738224 :\n\nwithout:\n\nIndex Scan using ltlc_taxon_id_idxoncontext_to_context_links (cost=0.00..146.93 rows=341 width=8) (actual time=0.004..0.019 rows=9 loops=736)\n\nwith geo_id = geo_id:\n\nIndex Scan using ltlc_taxon_id_idxoncontext_to_context_links (cost=0.00..148.11 rows=2 width=8) (actual time=0.004..0.020 rows=9 loops=736)\n\n\n> If it does help, then the next question would be why is the estimate\n> so much off. It could be either because the stats for\n> context_to_context_links.taxon_id are wrong or because\n> ancestors.taxon_id(subphylum_id = 18830) is a special case. To help\n> figuring this is out, you could run the following to queries and post\n> the results:\n>\n> SELECT floor(log(num,2)) AS nmatch, COUNT(*) AS freq FROM (SELECT\n> COUNT(*) AS num FROM context_to_context_links GROUP BY taxon_id) AS\n> dist GROUP BY 1 ORDER BY 1;\n>\n> SELECT floor(log(num,2)) AS nmatch, COUNT(*) AS freq FROM (SELECT\n> COUNT(*) AS num FROM context_to_context_links WHERE NOT geo_id IS NULL\n> and taxon_id= ANY ( select taxon_id from rab.ancestors where\n> ancestors.subphylum_id = 18830) GROUP BY taxon_id) AS dist GROUP BY\n> 1 ORDER BY 1;\n\nI'm sorry but I get an \"ERROR: division by zero\" for both of your queries..\n\n> If the second distribution has a significantly different shape then\n> cross column statistics are necessary to get good plans. As it happens\n> I'm working on adding this functionality to PostgreSQL and would love\n> to hear more details about your use-case to understand if it would be\n> solved by this work.\n\nThank you for your help,\nJulien\n\n> Regards,\n> Ants Aasma\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Fri, 06 Apr 2012 13:16:39 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad plan"
}
] |
[
{
"msg_contents": "I know this is a very general question. But if you guys had to specify\nsystem (could be one server or cluster), with sustainable transaction\nrate of 1.5M tps running postgresql, what configuration and hardware\nwould you be looking for ?\nThe transaction distribution there is 90% writes/updates and 10% reads.\nWe're talking 64 linux, Intel/IBM system.\n\nI'm trying to see how that compares with Oracle system.\n\nThanks.\n\n-- \nGJ\n",
"msg_date": "Thu, 5 Apr 2012 16:39:26 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": true,
"msg_subject": "heavly load system spec"
},
{
"msg_contents": "On Thu, Apr 5, 2012 at 11:39 AM, Gregg Jaskiewicz <[email protected]> wrote:\n> I know this is a very general question. But if you guys had to specify\n> system (could be one server or cluster), with sustainable transaction\n> rate of 1.5M tps running postgresql, what configuration and hardware\n> would you be looking for ?\n> The transaction distribution there is 90% writes/updates and 10% reads.\n> We're talking 64 linux, Intel/IBM system.\n>\n> I'm trying to see how that compares with Oracle system.\n\n1.5 million is a lot of tps, especially if some of them are write\ntransactions. On trivial read-only transactions (primary key lookup\non fully cached table), using a 16-core, 64-thread IBM POWER7 box,\npgbench -M prepared -S -n -T 60 -c 64 -j 64:\n\ntps = 455903.743918 (including connections establishing)\ntps = 456012.871764 (excluding connections establishing)\n\nThat box isn't quite the fastest one I've seen, but it's close.\n\nWhat hardware is Oracle running on?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 24 May 2012 16:09:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: heavly load system spec"
},
{
"msg_contents": "Gregg,\n\n* Robert Haas ([email protected]) wrote:\n> On Thu, Apr 5, 2012 at 11:39 AM, Gregg Jaskiewicz <[email protected]> wrote:\n> > I know this is a very general question. But if you guys had to specify\n> > system (could be one server or cluster), with sustainable transaction\n> > rate of 1.5M tps running postgresql, what configuration and hardware\n> > would you be looking for ?\n> > The transaction distribution there is 90% writes/updates and 10% reads.\n> > We're talking 64 linux, Intel/IBM system.\n\nJust to clarify/verify, you're looking for a system which can handle\n1.35M write transactions per second? That's quite a few and regardless\nof RDBMS, I expect you'll need quite an I/O system to handle that.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 24 May 2012 17:04:05 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: heavly load system spec"
},
{
"msg_contents": "On Thu, Apr 5, 2012 at 10:39 AM, Gregg Jaskiewicz <[email protected]> wrote:\n> I know this is a very general question. But if you guys had to specify\n> system (could be one server or cluster), with sustainable transaction\n> rate of 1.5M tps running postgresql, what configuration and hardware\n> would you be looking for ?\n> The transaction distribution there is 90% writes/updates and 10% reads.\n> We're talking 64 linux, Intel/IBM system.\n>\n> I'm trying to see how that compares with Oracle system.\n\nThat's not gonna be possible with stock postgres. You could cluster\nout the reads with a (probably cascaded) HS/SR setup but we'd still\nhave to get 90% of 1.5M tps running in a single instance. You'd hit\nvarious bottlenecks trying to get write transactions up anywhere near\nthat figure -- the walinsert lock being the worst since it is\nsomething of an upper bound on tps rates. I can't speak for Oracle\nbut I'm skeptical it's possible there in a monolithic system; if it is\nin fact possible it would cost megabucks to do it.\n\nAt the end of the day your problem needs some serious engineering and\nserious engineers. Any solution is probably going to involve a\ncluster of machines to stage the data with various ETL type jobs to\nmove it into specific services that will do the actual processing.\nThis is how all solutions that sustain very high transaction rates\nwork; you divide tasks and develop communications protocols between\nvarious systems.\n\nmerlin\n",
"msg_date": "Thu, 24 May 2012 16:24:47 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: heavly load system spec"
}
] |
[
{
"msg_contents": "Hi all,\n\nI noticed a note on the 'Priorities' wiki page[1], which talked about\nthe need for having \"a C-language function 'nice_backend(prio)' that\nrenices the calling backend to \"prio\".', and suggests posting a link\nto this list. Well, here you go:\n http://pgxn.org/dist/prioritize/\n\nThe API is a tiny bit different than what was suggested on the wiki;\nthe wiki suggested \"nice_backend()\" and \"nice_backend_super()\",\nwhereas I just consolidated those into set_backend_priority(), with\npermissions checks similar to pg_cancel_backend(). There is also\nget_backend_priority(), which should play nicely with the former\nfunction, and perhaps enable scripted queries to automatically bump\npriorities based on pg_stat_activity. See the doc[3] for more details.\n\nThe wiki says nice_backend_super() might be able to \"renice any\nbackend pid and set any priority, but is usable only by the [database]\nsuperuser\", hinting that it would be feasible to lower a backend's\npriority value (i.e. increase the scheduling priority). Unfortunately\nthis is not possible on at least OS X and Linux, where one must be\nroot to lower priority values. I haven't checked whether this module\nworks on Windows, would appreciate if someone could give it a shot\nthere.\n\nI can update the 'Priorities' wiki page in a bit.\n\nJosh\n\n[1] http://wiki.postgresql.org/wiki/Priorities\n[3] https://github.com/schmiddy/pg_prioritize/blob/master/doc/prioritize.md\n",
"msg_date": "Sat, 7 Apr 2012 10:06:28 -0700",
"msg_from": "Josh Kupershmidt <[email protected]>",
"msg_from_op": true,
"msg_subject": "get/set priority of PostgreSQL backends"
},
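A rough usage sketch based on the function names described above; the exact signatures are in the linked documentation, and 12345 is just a hypothetical backend PID taken from pg_stat_activity:

  SELECT get_backend_priority(12345);       -- read that backend's current nice value
  SELECT set_backend_priority(12345, 10);   -- raise its nice value, i.e. lower its scheduling priority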
{
"msg_contents": "On Sat, Apr 7, 2012 at 11:06 AM, Josh Kupershmidt <[email protected]> wrote:\n> The wiki says nice_backend_super() might be able to \"renice any\n> backend pid and set any priority, but is usable only by the [database]\n> superuser\", hinting that it would be feasible to lower a backend's\n> priority value (i.e. increase the scheduling priority). Unfortunately\n> this is not possible on at least OS X and Linux, where one must be\n> root to lower priority values. I haven't checked whether this module\n> works on Windows, would appreciate if someone could give it a shot\n> there.\n\nI thought you were limited to only settings above 0 and your own\nprocesses in linux.\n",
"msg_date": "Sat, 7 Apr 2012 12:05:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: get/set priority of PostgreSQL backends"
},
{
"msg_contents": "On Sat, Apr 7, 2012 at 11:05 AM, Scott Marlowe <[email protected]> wrote:\n> On Sat, Apr 7, 2012 at 11:06 AM, Josh Kupershmidt <[email protected]> wrote:\n>> The wiki says nice_backend_super() might be able to \"renice any\n>> backend pid and set any priority, but is usable only by the [database]\n>> superuser\", hinting that it would be feasible to lower a backend's\n>> priority value (i.e. increase the scheduling priority). Unfortunately\n>> this is not possible on at least OS X and Linux, where one must be\n>> root to lower priority values. I haven't checked whether this module\n>> works on Windows, would appreciate if someone could give it a shot\n>> there.\n>\n> I thought you were limited to only settings above 0 and your own\n> processes in linux.\n\nFor non-root users, you may always only *increase* the priority values\nof your processes, and the default priority value is 0. So yes as\nnon-root, you're effectively limited to positive and increasing values\nfor setpriority(), and of course you may only alter process priorities\nrunning under the same user. I think that's what I was saying above,\nthough maybe I wasn't so clear.\n\nFor example, if you try to lower your own backend's priority with this\nfunction, you'll get a warning like this:\n\ntest=# SELECT set_backend_priority(pg_backend_pid(), -1);\nWARNING: Not possible to lower a process's priority (currently 0)\n set_backend_priority\n----------------------\n f\n(1 row)\n\n\nJosh\n",
"msg_date": "Sat, 7 Apr 2012 11:31:16 -0700",
"msg_from": "Josh Kupershmidt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: get/set priority of PostgreSQL backends"
}
] |
[
{
"msg_contents": "Hello,\n\n lets say I have such theoretical situation: big database with a lot of\ntables and fields, and a lot of users with are using different queries.\nAnd the worse - I am that data base admin ;] which has to add or remove\nindexes on table columns. As I dont know what queries are coming (users\nare writing it by them self) I dont know which columns should have\nindexes.\n My question - is here any statistics Postgres can collect to help answer\nmy question. Basically I need most often \"where\" statements of queries\n(also JOINs etc). Is here something what can help in such situation?\n\n--\nLukas\nwww.nsoft.lt\n\n",
"msg_date": "Sun, 8 Apr 2012 21:30:58 +0300",
"msg_from": "\"Lukas\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stats"
}
] |
[
{
"msg_contents": "\"Lukas\" wrote:\n> \n> lets say I have such theoretical situation: big database with a lot\n> of tables and fields, and a lot of users with are using different\n> queries. And the worse - I am that data base admin ;] which has to\n> add or remove indexes on table columns. As I dont know what queries\n> are coming (users are writing it by them self) I dont know which\n> columns should have indexes.\n> My question - is here any statistics Postgres can collect to help\n> answer my question. Basically I need most often \"where\" statements\n> of queries (also JOINs etc). Is here something what can help in\n> such situation?\n \nIf it were me, I would do two things:\n \n(1) I would add indexes which seemed likely to be useful, then see\nwhich were not being used, so I could drop them. See\npg_stat_user_indexes:\n \nhttp://www.postgresql.org/docs/current/interactive/monitoring-stats.html#MONITORING-STATS-VIEWS\n \n(2) I would log long-running queries and see what selection criteria\nthey used. See log_min_duration_statement:\n \nhttp://www.postgresql.org/docs/9.1/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN\n \nYou might also want to consider using pgFouine:\n \nhttp://pgfouine.projects.postgresql.org/\n \n-Kevin\n",
"msg_date": "Sun, 08 Apr 2012 15:36:02 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stats"
}
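For suggestion (1), the check for never-used indexes can be as simple as the query below (a sketch; the counters accumulate since the last statistics reset, so let a representative workload run first):

  SELECT schemaname, relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
   WHERE idx_scan = 0
   ORDER BY schemaname, relname;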
] |
[
{
"msg_contents": "hi,\n\ni had a stored procedure in ms-sql server. this stored procedure gets a\nparameter (account-id), dose about 20 queries, fills some temporary tables,\nand finally, returns a few result-sets. this stored procedure converted to\nstored function in postgresql (9.1). the result-sets are being returned\nusing refcursors. this stored function is logically, almost identical to\nthe ms-sql stored procedure. a LOT of work had been done to make\npostgresql getting close to ms-sql speed (preparing temp-tables in advance,\nusing \"analyze\" in special places inside the stored function in order to\nhint the optimizer that the temp-tables have very few records, thus\neliminating unnecessary and expansive hash-join, and a lot more..). after\nall that, the stored function is running in a reasonable speed (normally\n~60 milliseconds).\n\nnow, i run a test that simulates 20 simultaneous clients, asking for\n\"account-id\" randomly. once a client get a result, it immediately asks for\nanother one. the test last 5 seconds. i use a connection pool (with Tomcat\nweb-server). the pool is automatically increased to ~20 connections (as\nexpected). the result is postgresql dose ~60 \"account-id\"s, whereas ms-sql\ndose ~330 \"account-id\"s. postgresql shows that each \"account-id\" took about\n400-1000 msec ,which is so much slower than the ~60 msec of a single\nexecution.\n\nin a single execution postgresql may be less the twice slower than ms-sql,\nbut in 20 simultaneous clients, it's about 6 times worse. why is that?\n\nthe hardware is one 4-core xeon. 8GB of ram. the database size is just a\nfew GB's. centos-6.2.\n\ndo you think the fact that postgresql use a process per connection (instead\nof multi-threading) is inherently a weakness of postgrsql, regarding\nscale-up?\nwould it be better to limit the number of connections to something like 4,\nso that executions don't interrupt each other?\n\nthanks in advance for any help!\n\nhi,\ni had a stored procedure in ms-sql server. this stored procedure gets a parameter (account-id), dose about 20 queries, fills some temporary tables, and finally, returns a few result-sets. this stored procedure converted to stored function in postgresql (9.1). the result-sets are being returned using refcursors. this stored function is logically, almost identical to the ms-sql stored procedure. a LOT of work had been done to make postgresql getting close to ms-sql speed (preparing temp-tables in advance, using \"analyze\" in special places inside the stored function in order to hint the optimizer that the temp-tables have very few records, thus eliminating unnecessary and expansive hash-join, and a lot more..). after all that, the stored function is running in a reasonable speed (normally ~60 milliseconds).\nnow, i run a test that simulates 20 simultaneous clients, asking for \"account-id\" randomly. once a client get a result, it immediately asks for another one. the test last 5 seconds. i use a connection pool (with Tomcat web-server). the pool is automatically increased to ~20 connections (as expected). the result is postgresql dose ~60 \"account-id\"s, whereas ms-sql dose ~330 \"account-id\"s. postgresql shows that each \"account-id\" took about 400-1000 msec ,which is so much slower than the ~60 msec of a single execution. \nin a single execution postgresql may be less the twice slower than ms-sql, but in 20 simultaneous clients, it's about 6 times worse. why is that?the hardware is one 4-core xeon. 8GB of ram. the database size is just a few GB's. 
centos-6.2.\ndo you think the fact that postgresql use a process per connection (instead of multi-threading) is inherently a weakness of postgrsql, regarding scale-up?would it be better to limit the number of connections to something like 4, so that executions don't interrupt each other?\nthanks in advance for any help!",
"msg_date": "Thu, 12 Apr 2012 01:11:45 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "scale up (postgresql vs mssql)"
},
{
"msg_contents": "\n\nOn 04/11/2012 06:11 PM, Eyal Wilde wrote:\n> hi,\n>\n> i had a stored procedure in ms-sql server. this stored procedure gets \n> a parameter (account-id), dose about 20 queries, fills some temporary \n> tables, and finally, returns a few result-sets. this stored procedure \n> converted to stored function in postgresql (9.1). the result-sets are \n> being returned using refcursors. this stored function is logically, \n> almost identical to the ms-sql stored procedure. a LOT of work had \n> been done to make postgresql getting close to ms-sql speed (preparing \n> temp-tables in advance, using \"analyze\" in special places inside the \n> stored function in order to hint the optimizer that the temp-tables \n> have very few records, thus eliminating unnecessary and expansive \n> hash-join, and a lot more..). after all that, the stored function is \n> running in a reasonable speed (normally ~60 milliseconds).\n>\n> now, i run a test that simulates 20 simultaneous clients, asking for \n> \"account-id\" randomly. once a client get a result, it immediately asks \n> for another one. the test last 5 seconds. i use a connection pool \n> (with Tomcat web-server). the pool is automatically increased to ~20 \n> connections (as expected). the result is postgresql dose ~60 \n> \"account-id\"s, whereas ms-sql dose ~330 \"account-id\"s. postgresql \n> shows that each \"account-id\" took about 400-1000 msec ,which is so \n> much slower than the ~60 msec of a single execution.\n>\n> in a single execution postgresql may be less the twice slower than \n> ms-sql, but in 20 simultaneous clients, it's about 6 times worse. why \n> is that?\n>\n> the hardware is one 4-core xeon. 8GB of ram. the database size is just \n> a few GB's. centos-6.2.\n>\n> do you think the fact that postgresql use a process per connection \n> (instead of multi-threading) is inherently a weakness of postgrsql, \n> regarding scale-up?\n> would it be better to limit the number of connections to something \n> like 4, so that executions don't interrupt each other?\n>\n> thanks in advance for any help!\n\n\nI doubt that the process-per-connection has much effect, especially on \nLinux where process creation is extremely cheap, and you're using a \nconnection pooler anyway. The server is pretty modest, though. If you \ncan add enough RAM that you can fit the whole db into Postgres shared \nbuffers you might find things run a whole lot better. You should show us \nyour memory settings, among other things - especially shared_buffers, \ntemp_buffers and work_mem.\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Fri, 13 Apr 2012 10:32:11 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "Eyal Wilde <[email protected]> wrote:\n \n> now, i run a test that simulates 20 simultaneous clients, asking\n> for \"account-id\" randomly. once a client get a result, it\n> immediately asks for another one. the test last 5 seconds. i use\n> a connection pool (with Tomcat web-server). the pool is\n> automatically increased to ~20 connections (as expected). the\n> result is postgresql dose ~60 \"account-id\"s, whereas ms-sql dose\n> ~330 \"account-id\"s. postgresql shows that each \"account-id\" took\n> about 400-1000 msec ,which is so much slower than the ~60 msec of\n> a single execution.\n \n> the hardware is one 4-core xeon. 8GB of ram. the database size is\n> just a few GB's. centos-6.2.\n> \n> do you think the fact that postgresql use a process per connection\n> (instead of multi-threading) is inherently a weakness of\n> postgrsql, regarding scale-up?\n \nI doubt that has much to do with anything.\n \n> would it be better to limit the number of connections to something\n> like 4, so that executions don't interrupt each other?\n \nThe point where a lot of workloads hit optimal performance is with\nthe number of active connections limited to ((core count * 2) +\neffective spindle count). Determining \"active spindle count can be\ntricky (for example it is zero in a fully-cached read-only\nworkload), so it takes more information than you've given us to know\nexactly where the optimal point might be, but if it's a single\ndrive, then if you have 4 cores (not 2 cores with hyperthreading)\nyou might want to limit your connection pool to somewhere in the 8\nto 10 range. You generally should configure a connection pool to be\ntransaction based, with a request to start a transaction while all\nconnections are busy causing the request queue, with completion of a\ntransaction causing it to pull a request for the queue, if\navailable. I'm pretty sure that Tomcat's pool supports this.\n \nCould you describe your disk system and show us the result of\nrunning the query?:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n-Kevin\n",
"msg_date": "Fri, 13 Apr 2012 09:40:01 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
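For the 4-core box described earlier in the thread the formula works out to (4 * 2) + spindles, which is where the 8 to 10 figure comes from. The wiki page Kevin links asks for the non-default settings; a query along these lines pulls them out (quoted from memory, so treat it as an approximation of the wiki's exact version):

  SELECT name, current_setting(name), source
    FROM pg_settings
   WHERE source NOT IN ('default', 'override');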
{
"msg_contents": "On Wed, Apr 11, 2012 at 7:11 PM, Eyal Wilde <[email protected]> wrote:\n> in a single execution postgresql may be less the twice slower than ms-sql,\n> but in 20 simultaneous clients, it's about 6 times worse. why is that?\n>\n> the hardware is one 4-core xeon. 8GB of ram. the database size is just a few\n> GB's. centos-6.2.\n>\n> do you think the fact that postgresql use a process per connection (instead\n> of multi-threading) is inherently a weakness of postgrsql, regarding\n> scale-up?\n> would it be better to limit the number of connections to something like 4,\n> so that executions don't interrupt each other?\n\nWhat about posting some details on the tables, the 20 queries, the temp table?\n\nI'm thinking creating so many temp tables may be hurting pgsql more\nthan mssql. You might want to try unlogged temp tables, which more\nclosely resemble mssql temp tables.\n",
"msg_date": "Fri, 13 Apr 2012 12:04:39 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On 04/13/2012 08:04 AM, Claudio Freire wrote:\n> ...You might want to try unlogged temp tables, which more\n> closely resemble mssql temp tables.\n>\nIf they are permanent tables used for temporary storage then making them \nunlogged may be beneficial. But actual temporary tables *are* unlogged \nand attempting to create an unlogged temporary table will raise an error.\n\nCheers,\nSteve\n\n",
"msg_date": "Fri, 13 Apr 2012 09:36:20 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
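A short illustration of Steve's point, with made-up table names:

  CREATE TEMPORARY TABLE scratch (id int);            -- temp tables skip WAL for their rows anyway
  CREATE UNLOGGED TEMPORARY TABLE scratch2 (id int);  -- raises an error: TEMPORARY and UNLOGGED are alternatives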
{
"msg_contents": "On Fri, Apr 13, 2012 at 1:36 PM, Steve Crawford\n<[email protected]> wrote:\n>>\n> If they are permanent tables used for temporary storage then making them\n> unlogged may be beneficial. But actual temporary tables *are* unlogged and\n> attempting to create an unlogged temporary table will raise an error.\n\nInteresting, yes, I was wondering why PG didn't make temp tables\nunlogged by default.\n\nThen, I guess, the docs[0] have to mention it. Especially due to the\nerror condition. Right?\n\n[0] http://www.postgresql.org/docs/9.1/static/sql-createtable.html\n",
"msg_date": "Fri, 13 Apr 2012 13:43:50 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On 04/13/2012 09:43 AM, Claudio Freire wrote:\n> On Fri, Apr 13, 2012 at 1:36 PM, Steve Crawford\n> <[email protected]> wrote:\n>> If they are permanent tables used for temporary storage then making them\n>> unlogged may be beneficial. But actual temporary tables *are* unlogged and\n>> attempting to create an unlogged temporary table will raise an error.\n> Interesting, yes, I was wondering why PG didn't make temp tables\n> unlogged by default.\n>\n> Then, I guess, the docs[0] have to mention it. Especially due to the\n> error condition. Right?\n>\n> [0] http://www.postgresql.org/docs/9.1/static/sql-createtable.html\n>\nWell, the fact that temporary and unlogged cannot be simultaneously \nspecified *is* documented:\n\nCREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF \nNOT EXISTS ] table_name\n\nBut it would probably be worth adding a note under the description of \ntemporary tables that they are, in fact, unlogged.\n\nCheers,\nSteve\n\n",
"msg_date": "Fri, 13 Apr 2012 10:49:07 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Fri, Apr 13, 2012 at 2:49 PM, Steve Crawford\n<[email protected]> wrote:\n> Well, the fact that temporary and unlogged cannot be simultaneously\n> specified *is* documented:\n>\n> CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT\n> EXISTS ] table_name\n>\n> But it would probably be worth adding a note under the description of\n> temporary tables that they are, in fact, unlogged.\n\nYes, it was quite subtle, but you're right. I should've read the\nsyntax more closely.\n",
"msg_date": "Fri, 13 Apr 2012 14:52:45 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Wed, Apr 11, 2012 at 5:11 PM, Eyal Wilde <[email protected]> wrote:\n> hi,\n>\n> i had a stored procedure in ms-sql server. this stored procedure gets a\n> parameter (account-id), dose about 20 queries, fills some temporary tables,\n> and finally, returns a few result-sets. this stored procedure converted to\n> stored function in postgresql (9.1). the result-sets are being returned\n> using refcursors. this stored function is logically, almost identical to the\n> ms-sql stored procedure. a LOT of work had been done to make postgresql\n> getting close to ms-sql speed (preparing temp-tables in advance, using\n> \"analyze\" in special places inside the stored function in order to hint the\n> optimizer that the temp-tables have very few records, thus\n> eliminating unnecessary and expansive hash-join, and a lot more..). after\n> all that, the stored function is running in a reasonable speed (normally ~60\n> milliseconds).\n>\n> now, i run a test that simulates 20 simultaneous clients, asking for\n> \"account-id\" randomly. once a client get a result, it immediately asks for\n> another one. the test last 5 seconds. i use a connection pool (with Tomcat\n> web-server). the pool is automatically increased to ~20 connections (as\n> expected). the result is postgresql dose ~60 \"account-id\"s, whereas ms-sql\n> dose ~330 \"account-id\"s. postgresql shows that each \"account-id\" took about\n> 400-1000 msec ,which is so much slower than the ~60 msec of a single\n> execution.\n>\n> in a single execution postgresql may be less the twice slower than ms-sql,\n> but in 20 simultaneous clients, it's about 6 times worse. why is that?\n>\n> the hardware is one 4-core xeon. 8GB of ram. the database size is just a few\n> GB's. centos-6.2.\n>\n> do you think the fact that postgresql use a process per connection (instead\n> of multi-threading) is inherently a weakness of postgrsql, regarding\n> scale-up?\n> would it be better to limit the number of connections to something like 4,\n> so that executions don't interrupt each other?\n>\n> thanks in advance for any help!\n\nlet's see the procedure. I bet that the temp tables are the issue\nhere -- while they are speeding single user the i/o is stacking during\nhigh concurrency (you are also writing to system catalogs which is not\ngood).\n\nI'm sure we can get it fast but it's hard to do that without seeing the code.\n\nmerlin\n",
"msg_date": "Mon, 16 Apr 2012 08:10:18 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
}
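One common way to cut down on that per-call catalog traffic - offered only as a sketch, since the actual function has not been posted yet - is to create each temp table once per pooled session and reuse it on every call, letting ON COMMIT DELETE ROWS empty it between transactions. The table and column names below are invented for the example:

  CREATE TEMPORARY TABLE IF NOT EXISTS work_rows (
      account_id integer,
      val        numeric
  ) ON COMMIT DELETE ROWS;
  -- later calls in the same session skip the CREATE (no new catalog rows)
  -- and simply INSERT into work_rows within their transaction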
] |
[
{
"msg_contents": "Hi all,\n\nOS: Linux 64 bit 2.6.32\nPostgreSQL 9.0.5 installed from Ubuntu packages.\n8 CPU cores\n64 GB system memory\nDatabase cluster is on raid 10 direct attached drive, using a HP p800 \ncontroller card.\n\n\nI have a system that has been having occasional performance hits, where \nthe load on the system skyrockets, all queries take longer to execute \nand a hot standby slave I have set up via streaming replication starts \nto get behind. I'm having trouble pinpointing where the exact issue is.\n\nThis morning, during our nightly backup process (where we grab a copy of \nthe data directory), we started having this same issue. The main thing \nthat I see in all of these is a high disk wait on the system. When we \nare performing 'well', the %wa from top is usually around 30%, and our \nload is around 12 - 15. This morning we saw a load 21 - 23, and an %wa \njumping between 60% and 75%.\n\nThe top process pretty much at all times is the WAL Sender Process, is \nthis normal?\n\n From what I can tell, my access patterns on the database has not \nchanged, same average number of inserts, updates, deletes, and had \nnothing on the system changed in any way. No abnormal autovacuum \nprocesses that aren't normally already running.\n\nSo what things can I do to track down what an issue is? Currently the \nsystem has returned to a 'good' state, and performance looks great. But \nI would like to know how to prevent this, as well as be able to grab \ngood stats if it does happen again in the future.\n\nHas anyone had any issues with the HP p800 controller card in a postgres \nenvironment? Is there anything that can help us maximise the performance \nto disk in this case, as it seems to be one of our major bottlenecks? I \ndo plan on moving the pg_xlog to a separate drive down the road, the \ncluster is extremely active so that will help out a ton.\n\nsome IO stats:\n\n$ iostat -d -x 5 3\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\ndev1 1.99 75.24 651.06 438.04 41668.57 8848.18 \n46.38 0.60 3.68 0.70 76.36\ndev2 0.00 0.00 653.05 513.43 41668.57 8848.18 \n43.31 2.18 4.78 0.65 76.35\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\ndev1 0.00 35.20 676.20 292.00 35105.60 5688.00 \n42.13 67.76 70.73 1.03 100.00\ndev2 0.00 0.00 671.80 295.40 35273.60 4843.20 \n41.48 73.41 76.62 1.03 100.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\ndev1 1.20 40.80 865.40 424.80 51355.20 8231.00 \n46.18 37.87 29.22 0.77 99.80\ndev2 0.00 0.00 867.40 465.60 51041.60 8231.00 \n44.47 38.28 28.58 0.75 99.80\n\nThanks in advance,\nBrian F\n\n\n\n\n\n\n Hi all,\n\n OS: Linux 64 bit 2.6.32\n PostgreSQL 9.0.5 installed from Ubuntu packages.\n 8 CPU cores\n 64 GB system memory\n Database cluster is on raid 10 direct attached drive, using a HP\n p800 controller card. \n\n\n I have a system that has been having occasional performance hits,\n where the load on the system skyrockets, all queries take longer to\n execute and a hot standby slave I have set up via streaming\n replication starts to get behind. I'm having trouble pinpointing\n where the exact issue is. \n\n This morning, during our nightly backup process (where we grab a\n copy of the data directory), we started having this same issue. The\n main thing that I see in all of these is a high disk wait on the\n system. When we are performing 'well', the %wa from top is usually\n around 30%, and our load is around 12 - 15. 
This morning we saw a\n load 21 - 23, and an %wa jumping between 60% and 75%.\n\n The top process pretty much at all times is the WAL Sender Process,\n is this normal?\n\n From what I can tell, my access patterns on the database has not\n changed, same average number of inserts, updates, deletes, and had\n nothing on the system changed in any way. No abnormal autovacuum\n processes that aren't normally already running.\n\n So what things can I do to track down what an issue is? Currently\n the system has returned to a 'good' state, and performance looks\n great. But I would like to know how to prevent this, as well as be\n able to grab good stats if it does happen again in the future. \n\n Has anyone had any issues with the HP p800 controller card in a\n postgres environment? Is there anything that can help us maximise\n the performance to disk in this case, as it seems to be one of our\n major bottlenecks? I do plan on moving the pg_xlog to a separate\n drive down the road, the cluster is extremely active so that will\n help out a ton.\n\n some IO stats:\n\n$ iostat -d -x 5 3\n Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n avgrq-sz avgqu-sz await svctm %util\n dev1 1.99 75.24 651.06 438.04 41668.57 8848.18 \n 46.38 0.60 3.68 0.70 76.36\n dev2 0.00 0.00 653.05 513.43 41668.57 8848.18 \n 43.31 2.18 4.78 0.65 76.35\n\n Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n avgrq-sz avgqu-sz await svctm %util \n dev1 0.00 35.20 676.20 292.00 35105.60 5688.00 \n 42.13 67.76 70.73 1.03 100.00\n dev2 0.00 0.00 671.80 295.40 35273.60 4843.20 \n 41.48 73.41 76.62 1.03 100.00\n\n Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n avgrq-sz avgqu-sz await svctm %util \n dev1 1.20 40.80 865.40 424.80 51355.20 8231.00 \n 46.18 37.87 29.22 0.77 99.80\n dev2 0.00 0.00 867.40 465.60 51041.60 8231.00 \n 44.47 38.28 28.58 0.75 99.80\n\n Thanks in advance,\n Brian F",
"msg_date": "Thu, 12 Apr 2012 12:41:08 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Random performance hit, unknown cause."
},
{
"msg_contents": "On Thu, Apr 12, 2012 at 3:41 PM, Brian Fehrle\n<[email protected]> wrote:\n> This morning, during our nightly backup process (where we grab a copy of the\n> data directory), we started having this same issue. The main thing that I\n> see in all of these is a high disk wait on the system. When we are\n> performing 'well', the %wa from top is usually around 30%, and our load is\n> around 12 - 15. This morning we saw a load 21 - 23, and an %wa jumping\n> between 60% and 75%.\n>\n> The top process pretty much at all times is the WAL Sender Process, is this\n> normal?\n\nSounds like vacuum to me.\n",
"msg_date": "Thu, 12 Apr 2012 15:49:43 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random performance hit, unknown cause."
},
{
"msg_contents": "Claudio Freire <[email protected]> wrote:\n> On Thu, Apr 12, 2012 at 3:41 PM, Brian Fehrle\n> <[email protected]> wrote:\n>> This morning, during our nightly backup process (where we grab a\n>> copy of the data directory), we started having this same issue.\n>> The main thing that I see in all of these is a high disk wait on\n>> the system. When we are performing 'well', the %wa from top is\n>> usually around 30%, and our load is around 12 - 15. This morning\n>> we saw a load 21 - 23, and an %wa jumping between 60% and 75%.\n>>\n>> The top process pretty much at all times is the WAL Sender\n>> Process, is this normal?\n> \n> Sounds like vacuum to me.\n \nMore particularly, it seems consistent with autovacuum finding a\nlarge number of tuples which had reached their freeze threshold. \nRewriting the tuple in place with a frozen xmin is a WAL-logged\noperation.\n \n-Kevin\n",
"msg_date": "Thu, 12 Apr 2012 14:52:11 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random performance hit, unknown cause."
},
{
"msg_contents": "On Thu, Apr 12, 2012 at 3:41 PM, Brian Fehrle\n<[email protected]> wrote:\n> Is there anything that can help us maximise the performance to disk in this\n> case, as it seems to be one of our major bottlenecks?\n\nIf it's indeed autovacuum, like I think it is, you can try limiting it\nwith pg's autovacuum_cost_delay params.\n",
"msg_date": "Thu, 12 Apr 2012 16:59:10 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random performance hit, unknown cause."
},
{
"msg_contents": "Interesting, that is very likely.\n\nIn this system I have a table that is extremely active. On a 'normal' \nday, the autovacuum process takes about 7 hours to complete on this \ntable, and once it's complete, the system performs an autoanalyze on the \ntable, finding that we have millions of new dead rows. Once this \nhappens, it kicks off the autovacuum again, so we basically always have \na vacuum running on this table at any given time.\n\nIf I were to tweak the autovacuum_vacuum_cost_delay parameter, what \nwould that be doing? Would it be limiting what the current autovacuum is \nallowed to do? Or does it simply space out the time between autovacuum \nruns? In my case, with 7 hour long autovacuums (sometimes 14 hours), a \nfew milliseconds between each vacuum wouldn't mean anything to me.\n\nIf that parameter does limit the amount of work autovacuum can do, It \nmay cause the system to perform better at that time, but would prolong \nthe length of the autovacuum right? That's an issue I'm already having \nissue with, and wouldn't want to make the autovacuum any longer if I \ndon't need to.\n\n- Brian F\n\n\nOn 04/12/2012 01:52 PM, Kevin Grittner wrote:\n> Claudio Freire<[email protected]> wrote:\n>> On Thu, Apr 12, 2012 at 3:41 PM, Brian Fehrle\n>> <[email protected]> wrote:\n>>> This morning, during our nightly backup process (where we grab a\n>>> copy of the data directory), we started having this same issue.\n>>> The main thing that I see in all of these is a high disk wait on\n>>> the system. When we are performing 'well', the %wa from top is\n>>> usually around 30%, and our load is around 12 - 15. This morning\n>>> we saw a load 21 - 23, and an %wa jumping between 60% and 75%.\n>>>\n>>> The top process pretty much at all times is the WAL Sender\n>>> Process, is this normal?\n>> Sounds like vacuum to me.\n>\n> More particularly, it seems consistent with autovacuum finding a\n> large number of tuples which had reached their freeze threshold.\n> Rewriting the tuple in place with a frozen xmin is a WAL-logged\n> operation.\n>\n> -Kevin\n\n",
"msg_date": "Thu, 12 Apr 2012 15:31:21 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Random performance hit, unknown cause."
},
{
"msg_contents": "Brian Fehrle <[email protected]> wrote:\n \n> In this system I have a table that is extremely active. On a\n> 'normal' day, the autovacuum process takes about 7 hours to\n> complete on this table, and once it's complete, the system\n> performs an autoanalyze on the table, finding that we have\n> millions of new dead rows. Once this happens, it kicks off the\n> autovacuum again, so we basically always have a vacuum running on\n> this table at any given time.\n> \n> If I were to tweak the autovacuum_vacuum_cost_delay parameter,\n> what would that be doing?\n \nThat controls how long an autovacuum worker naps after it has done\nenough work to hit the autovacuum_cost_limit. As tuning knobs go,\nthis one is pretty coarse.\n \n> Would it be limiting what the current autovacuum is allowed to do?\n \nNo, just how fast it does it.\n \n> Or does it simply space out the time between autovacuum runs?\n \nNot that either; it's part of pacing the work of a run.\n \n> In my case, with 7 hour long autovacuums (sometimes 14 hours), a \n> few milliseconds between each vacuum wouldn't mean anything to me.\n \nGenerally, I find that the best way to tune it is to pick 10ms to\n20ms for autovacuum_cost_delay, and adjust adjust\nautovacuum_cost_limit to tune from there. A small change in the\nformer can cause a huge change in pacing; the latter is better for\nfine-tuning.\n \n> It may cause the system to perform better at that time, but would\n> prolong the length of the autovacuum right?\n \nRight.\n \n-Kevin\n",
"msg_date": "Thu, 12 Apr 2012 17:00:13 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random performance hit, unknown cause."
},
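If only one table needs different pacing, the same knobs can also be set per table as storage parameters instead of globally. A minimal sketch, assuming the hot table is named busy_table (the values are only a starting point in the 10-20ms range suggested above):

  ALTER TABLE busy_table SET (
      autovacuum_vacuum_cost_delay = 10,    -- ms nap after each cost-limit batch
      autovacuum_vacuum_cost_limit = 2000   -- work allowed per batch; fine-tune here
  );
  -- confirm the per-table overrides
  SELECT relname, reloptions FROM pg_class WHERE relname = 'busy_table';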
{
"msg_contents": "Check your pagecache settings, when doing heavy io writes of a large file you can basically force a linux box to completely stall. At some point once the pagecache has reached it's limit it'll force all IO to go sync basically from my understanding. We are still fighting with this but lots of changes in RH6 seem to address of lot of these issues.\r\n\r\ngrep -i dirty /proc/meminfo\r\ncat /proc/sys/vm/\r\ncat /proc/sys/vm/nr_pdflush_threads\r\n\r\nOnce the dirty pages reaches a really large size and the limit of pagecache your system should experience a pretty abrupt drop in performance. You should be able to avoid this by using sync writes, but we haven't had a chance to completely isolate and address this issue.\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Claudio Freire\r\nSent: Thursday, April 12, 2012 1:50 PM\r\nTo: Brian Fehrle\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Random performance hit, unknown cause.\r\n\r\nOn Thu, Apr 12, 2012 at 3:41 PM, Brian Fehrle <[email protected]> wrote:\r\n> This morning, during our nightly backup process (where we grab a copy \r\n> of the data directory), we started having this same issue. The main \r\n> thing that I see in all of these is a high disk wait on the system. \r\n> When we are performing 'well', the %wa from top is usually around 30%, \r\n> and our load is around 12 - 15. This morning we saw a load 21 - 23, \r\n> and an %wa jumping between 60% and 75%.\r\n>\r\n> The top process pretty much at all times is the WAL Sender Process, is \r\n> this normal?\r\n\r\nSounds like vacuum to me.\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis email is confidential and subject to important disclaimers and\r\nconditions including on offers for the purchase or sale of\r\nsecurities, accuracy and completeness of information, viruses,\r\nconfidentiality, legal privilege, and legal entity disclaimers,\r\navailable at http://www.jpmorgan.com/pages/disclosures/email. \n",
"msg_date": "Wed, 18 Apr 2012 16:54:22 +0000",
"msg_from": "\"Strange, John W\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Random performance hit, unknown cause."
}
] |
[
{
"msg_contents": "Hi,\n\nI would like to understand why the following query execution don't use \nany fulltext indexes\nand takes more than 300s (using lot of temporary files):\n\n EXPLAIN ANALYZE SELECT hierarchy.id\n FROM hierarchy\n JOIN fulltext ON fulltext.id = hierarchy.id,\n TO_TSQUERY('whatever') query1,\n TO_TSQUERY('whatever') query2\n WHERE (query1 @@ nx_to_tsvector(fulltext.fulltext)) OR (query2 @@ \nnx_to_tsvector(fulltext.fulltext_title));\n\nThe query plan is here:\n http://explain.depesz.com/s/YgP\n\nWhile if I replace the query2 by query1 in the second clause:\n\n EXPLAIN ANALYZE SELECT hierarchy.id\n FROM hierarchy\n JOIN fulltext ON fulltext.id = hierarchy.id,\n TO_TSQUERY('whatever') query1,\n TO_TSQUERY('whatever') query2\n WHERE (query1 @@ nx_to_tsvector(fulltext.fulltext)) OR (query1 @@ \nnx_to_tsvector(fulltext.fulltext_title));\n\nIt is 5 order of magniude faster (15ms) using the gin indexes:\n http://explain.depesz.com/s/RLa\n\nThe nx_to_tsvector is an immutable function with the following code:\n SELECT TO_TSVECTOR('english', SUBSTR($1, 1, 250000))\n\nHere is the list of indexes:\n hierarchy: \"hierarchy_pk\" PRIMARY KEY, btree (id)\n fulltext: \"fulltext_fulltext_idx\" gin \n(nx_to_tsvector(fulltext::character varying))\n fulltext: \"fulltext_fulltext_title_idx\" gin \n(nx_to_tsvector(fulltext_title::character varying))\n\nfulltext and fulltext_title are text type.\n\nAnd some PostgreSQL configuration:\n PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu\n shared_buffers: 4GB\n effective_cache_size: 10GB\n work_mem: 20MB\n\nThanks for your work and enlightenment\n\nben\n",
"msg_date": "Fri, 13 Apr 2012 00:09:23 +0200",
"msg_from": "Benoit Delbosc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow fulltext query plan"
},
{
"msg_contents": "Benoit Delbosc <[email protected]> writes:\n> EXPLAIN ANALYZE SELECT hierarchy.id\n> FROM hierarchy\n> JOIN fulltext ON fulltext.id = hierarchy.id,\n> TO_TSQUERY('whatever') query1,\n> TO_TSQUERY('whatever') query2\n> WHERE (query1 @@ nx_to_tsvector(fulltext.fulltext)) OR (query2 @@ \n> nx_to_tsvector(fulltext.fulltext_title));\n\nIs there a reason why you're writing the query in such a\nnon-straightforward way, rather than just\n\n EXPLAIN ANALYZE SELECT hierarchy.id\n FROM hierarchy\n JOIN fulltext ON fulltext.id = hierarchy.id\n WHERE (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext))\n OR (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext_title));\n\n?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Apr 2012 18:25:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow fulltext query plan "
},
{
"msg_contents": "On 13/04/2012 00:25, Tom Lane wrote:\n> Benoit Delbosc<[email protected]> writes:\n>> EXPLAIN ANALYZE SELECT hierarchy.id\n>> FROM hierarchy\n>> JOIN fulltext ON fulltext.id = hierarchy.id,\n>> TO_TSQUERY('whatever') query1,\n>> TO_TSQUERY('whatever') query2\n>> WHERE (query1 @@ nx_to_tsvector(fulltext.fulltext)) OR (query2 @@\n>> nx_to_tsvector(fulltext.fulltext_title));\n> Is there a reason why you're writing the query in such a\n> non-straightforward way, rather than just\n>\n> EXPLAIN ANALYZE SELECT hierarchy.id\n> FROM hierarchy\n> JOIN fulltext ON fulltext.id = hierarchy.id\n> WHERE (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext))\n> OR (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext_title));\n>\n> ?\n>\nThis query is written by a framework, also I thought that is a common \npattern that can be found in the documentation:\n\n http://www.postgresql.org/docs/9.1/interactive/textsearch-controls.html\n\nif you think this a wrong way to do it then I will try to fix the framework.\n\nbtw your version takes 15ms :)\n\nThanks\n\nben\n",
"msg_date": "Fri, 13 Apr 2012 00:56:11 +0200",
"msg_from": "Benoit Delbosc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow fulltext query plan"
},
{
"msg_contents": "Benoit Delbosc <[email protected]> writes:\n> On 13/04/2012 00:25, Tom Lane wrote:\n>> Is there a reason why you're writing the query in such a\n>> non-straightforward way, rather than just\n>> \n>> EXPLAIN ANALYZE SELECT hierarchy.id\n>> FROM hierarchy\n>> JOIN fulltext ON fulltext.id = hierarchy.id\n>> WHERE (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext))\n>> OR (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext_title));\n\n> This query is written by a framework, also I thought that is a common \n> pattern that can be found in the documentation:\n> http://www.postgresql.org/docs/9.1/interactive/textsearch-controls.html\n\nWell, \"common pattern\" would be stretching it. Anyway I've concluded\nthat this is in fact a planner bug. There will be a fix in 9.2, but I'm\nnot going to take the risk of back-patching it, so you might want to\nthink about changing that framework.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Apr 2012 21:16:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow fulltext query plan "
}
] |
[
{
"msg_contents": "I have a big table ~15M records called entities. I want to find top 5\nentities matching \"hockey\" in their name.\n\nI have a Full text index built for that, which is used :\ngin_ix_entity_full_text_search_name, which indexes the name.\n\nQuery\n\n SELECT \"entities\".*,\n ts_rank(to_tsvector('english', \"entities\".\"name\"::text),\n to_tsquery('english', 'hockey'::text)) AS \"rank0.48661998202865475\"\n FROM \"entities\" \n WHERE \"entities\".\"place\" = 'f' \n AND (to_tsvector('english', \"entities\".\"name\"::text) @@\nto_tsquery('english', 'hockey'::text)) \n ORDER BY \"rank0.48661998202865475\" DESC LIMIT 5\nDuration 25,623 ms\n\nQUERY PLAN\n\n Limit (cost=4447.28..4447.29 rows=5 width=3116) (actual\ntime=18509.274..18509.282 rows=5 loops=1)\n -> Sort (cost=4447.28..4448.41 rows=2248 width=3116) (actual\ntime=18509.271..18509.273 rows=5 loops=1)\n Sort Key: (ts_rank(to_tsvector('english'::regconfig, (name)::text),\n'''hockey'''::tsquery))\n Sort Method: top-N heapsort Memory: 19kB\n -> Bitmap Heap Scan on entities (cost=43.31..4439.82 rows=2248\nwidth=3116) (actual time=119.003..18491.408 rows=2533 loops=1)\n Recheck Cond: (to_tsvector('english'::regconfig, (name)::text) @@\n'''hockey'''::tsquery)\n Filter: (NOT place)\n -> Bitmap Index Scan on gin_ix_entity_full_text_search_name \n(cost=0.00..43.20 rows=2266 width=0) (actual time=74.093..74.093 rows=2593\nloops=1)\n Index Cond: (to_tsvector('english'::regconfig,\n(name)::text) @@ '''hockey'''::tsquery)\n Total runtime: 18509.381 ms\n(10 rows)\n\nIs it because of my boolean condition (not Place?) If so, I should add it to\nmy index and I should get a very fast query? Or is it the sorting condition\nwhich makes it very long?\n\nThanks helping me understand the Query plan and how to fix my 25 seconds\nquery!\n\n\nHere are my DB parameters . It is an online DB hosted by Heroku, on Amazon\nservices. They describe it as having 1.7GB of ram, 1 processing unit and a\nDB of max 1TB.\n\n name | current_setting \n\nversion | PostgreSQL 9.0.7 on i486-pc-linux-gnu,\ncompiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 32-bit\narchive_command | test -f\n/etc/postgresql/9.0/main/wal-e.d/ARCHIVING_OFF || envdir\n/etc/postgresql/9.0/resource29857_heroku_com/wal-e.d/env wal-e wal-push %p\narchive_mode | on\narchive_timeout | 1min\ncheckpoint_completion_target | 0.7\ncheckpoint_segments | 40\nclient_min_messages | notice\ncpu_index_tuple_cost | 0.001\ncpu_operator_cost | 0.0005\ncpu_tuple_cost | 0.003\neffective_cache_size | 1530000kB\nhot_standby | on\nlc_collate | en_US.UTF-8\nlc_ctype | en_US.UTF-8\nlisten_addresses | *\nlog_checkpoints | on\nlog_destination | syslog\nlog_line_prefix | %u [YELLOW] \nlog_min_duration_statement | 50ms\nlog_min_messages | notice\nlogging_collector | on\nmaintenance_work_mem | 64MB\nmax_connections | 500\nmax_prepared_transactions | 500\nmax_stack_depth | 2MB\nmax_standby_archive_delay | -1\nmax_standby_streaming_delay | -1\nmax_wal_senders | 10\nport | \nrandom_page_cost | 2\nserver_encoding | UTF8\nshared_buffers | 415MB\nssl | on\nsyslog_ident | resource29857_heroku_com\nTimeZone | UTC\nwal_buffers | 8MB\nwal_keep_segments | 127\nwal_level | hot_standby\nwork_mem | 100MB\n(39 rows)\n\n\nI tried playing with the work_mem, setting it as high as 1.5GB, with no\nsuccess. I believe it is heroku reading speed that is abysmal in this case.\nBut I'd like to confirm that. 
Or is it the function that I'm calling in my\nSELECT clause?\n\nThanks for help\nAlso posted on\nhttp://dba.stackexchange.com/questions/16437/postgresql-help-optimizing-sql-performance-full-text-search\n",
"msg_date": "Fri, 13 Apr 2012 09:14:47 -0700 (PDT)",
"msg_from": "xlash <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL - Help Optimizing performance - full text search on Heroku"
},
{
"msg_contents": "On 13.4.2012 18:14, xlash wrote:\n> I have a big table ~15M records called entities. I want to find top 5\n> entities matching \"hockey\" in their name.\n\nNumber of rows in not a very useful metric - if might be 15 MBs or 15\nGBs, depending on the structure. We need to know at least this:\n\n select relpages, reltuples from pg_class where relname = 'entities';\n\nand similarly for the index (just replace the relation name).\n\n> QUERY PLAN\n> \n> Limit (cost=4447.28..4447.29 rows=5 width=3116) (actual\n> time=18509.274..18509.282 rows=5 loops=1)\n> -> Sort (cost=4447.28..4448.41 rows=2248 width=3116) (actual\n> time=18509.271..18509.273 rows=5 loops=1)\n> Sort Key: (ts_rank(to_tsvector('english'::regconfig, (name)::text),\n> '''hockey'''::tsquery))\n> Sort Method: top-N heapsort Memory: 19kB\n> -> Bitmap Heap Scan on entities (cost=43.31..4439.82 rows=2248\n> width=3116) (actual time=119.003..18491.408 rows=2533 loops=1)\n> Recheck Cond: (to_tsvector('english'::regconfig, (name)::text) @@\n> '''hockey'''::tsquery)\n> Filter: (NOT place)\n> -> Bitmap Index Scan on gin_ix_entity_full_text_search_name \n> (cost=0.00..43.20 rows=2266 width=0) (actual time=74.093..74.093 rows=2593\n> loops=1)\n> Index Cond: (to_tsvector('english'::regconfig,\n> (name)::text) @@ '''hockey'''::tsquery)\n> Total runtime: 18509.381 ms\n> (10 rows)\n\nI recommend services like explain.depesz.com instead of posting the\nplans directly (which causes wrapping etc. so the plans are barely\nreadable).\n\nI've posted the plan here: http://explain.depesz.com/s/Jr7\n\n> Is it because of my boolean condition (not Place?) If so, I should add it to\n> my index and I should get a very fast query? Or is it the sorting condition\n> which makes it very long?\n\nNo. The step that consumes most of the time is the \"bitmap heap scan\"\n(see the \"actual time\" difference, it's nicely visible from the plan at\nexplain.depesz.com).\n\nLet me briefly explain how the bitmap index scan works (and how that\napplies to this query). First, the index is scanned and a bitmap of\npages (tuples) that need to be read from the relation is built. This is\nthe \"bitmap index scan\" node in the plan.\n\nThen, the bitmap is used to read pages from the relation - this is\nnecessary to check visibility of the rows (this is not stored in the\nindex) and get the complete rows if needed.\n\nIf you check the plan you'll see the first stage takes 74 ms (so it's\nnegligible) but scanning the relation takes 18491 ms (like 99.9% of the\nruntime).\n\nThe sorting clearly is not the culprit as it takes ~ 17 ms.\n\nAnd the 'NOT place' condition does not make much difference - the bitmap\nindex scan returns 2593 rows and the recheck produces 2533 rows, so ~2%\nrows were removed (not necessarily due to the 'NOT place' condition). So\nit's highly unlikely adding this column to the index will improve the\nperformance.\n\nSo my guess is that you're I/O bound - reading the table causes so much\nrandom I/O the machine can't handle that. You can verify this by\nwatching \"iostat -x\" when running the query. My bet is the device will\nbe ~100% utilized.\n\n> \n> Thanks helping me understand the Query plan and how to fix my 25 seconds\n> query!\n> \n> \n> Here are my DB parameters . It is an online DB hosted by Heroku, on Amazon\n> services. 
They describe it as having 1.7GB of ram, 1 processing unit and a\n> DB of max 1TB.\n\nWell, AWS instances are known to have I/O issues (which is expected when\nrunning a database with virtualized devices).\n\nNot sure what to recommend here - either use an instance with more RAM\n(so that the whole 'entities' relation is cached) or split the table\nsomehow so that less data needs to be read from the disk. E.g. vertical\npartitioning, i.e. splitting the table vertically into two parts or\nsomething like that.\n\nTomas\n",
"msg_date": "Fri, 13 Apr 2012 19:09:10 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL - Help Optimizing performance - full text\n\tsearch on Heroku"
}
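The catalog check Tomas asks for, extended with on-disk sizes so the relation and its index can be compared against available RAM, might look like this sketch (relation names taken from the original post):

  SELECT relname, relpages, reltuples,
         pg_size_pretty(pg_relation_size(oid)) AS on_disk_size
    FROM pg_class
   WHERE relname IN ('entities', 'gin_ix_entity_full_text_search_name');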
] |
[
{
"msg_contents": "hi,\n\nthanks a lot to all of you for your help.\n\n(i'm sorry i did not know how to reply to a certain message)\n\ni found that the best number of active connections is indeed 8-10. with\n8-10 active connections postgresql did ~170 \"account-id\"s. this is still\nonly half of what mssql did, but it now makes sence, considering that mssql\nworks close to twice faster.\n\ni \"played\" with work_mem, shared_buffers, temp_buffers. i ran the tests\nwith both of the following configurations, but no significant difference\nwas found.\n\nthanks again for any more help.\n\n\n\"version\";\"PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc\n(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\"\n\"bytea_output\";\"escape\"\n\"client_encoding\";\"UNICODE\"\n\"lc_collate\";\"en_US.UTF-8\"\n\"lc_ctype\";\"en_US.UTF-8\"\n\"listen_addresses\";\"*\"\n\"log_destination\";\"stderr\"\n\"log_line_prefix\";\"%t \"\n\"logging_collector\";\"on\"\n\"max_connections\";\"100\"\n \"max_stack_depth\";\"2MB\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"32MB\"\n\"TimeZone\";\"Israel\"\n\"wal_buffers\";\"1MB\"\n\n\n\n\"version\";\"PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc\n(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\"\n\"bytea_output\";\"escape\"\n\"client_encoding\";\"UNICODE\"\n\"lc_collate\";\"en_US.UTF-8\"\n\"lc_ctype\";\"en_US.UTF-8\"\n\"listen_addresses\";\"*\"\n\"log_destination\";\"stderr\"\n\"log_line_prefix\";\"%t \"\n\"logging_collector\";\"on\"\n\"max_connections\";\"100\"\n \"max_stack_depth\";\"2MB\"\n\"port\";\"5432\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"3GB\"\n\"temp_buffers\";\"64MB\"\n\"TimeZone\";\"Israel\"\n\"wal_buffers\";\"16MB\"\n\"work_mem\";\"20MB\"\n\nhi,thanks a lot to all of you for your help.(i'm sorry i did not know how to reply to a certain message)i found that the best number of active connections is indeed 8-10. with 8-10 active connections postgresql did ~170 \"account-id\"s. this is still only half of what mssql did, but it now makes sence, considering that mssql works close to twice faster.\ni \"played\" with work_mem, shared_buffers, temp_buffers. i ran the tests with both of the following configurations, but no significant difference was found.thanks again for any more help.\n\"version\";\"PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\"\"bytea_output\";\"escape\"\n\"client_encoding\";\"UNICODE\"\"lc_collate\";\"en_US.UTF-8\"\"lc_ctype\";\"en_US.UTF-8\"\"listen_addresses\";\"*\"\n\"log_destination\";\"stderr\"\"log_line_prefix\";\"%t \"\"logging_collector\";\"on\"\"max_connections\";\"100\"\n\n\"max_stack_depth\";\"2MB\"\"server_encoding\";\"UTF8\"\"shared_buffers\";\"32MB\"\"TimeZone\";\"Israel\"\"wal_buffers\";\"1MB\"\n\"version\";\"PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\"\"bytea_output\";\"escape\"\n\"client_encoding\";\"UNICODE\"\"lc_collate\";\"en_US.UTF-8\"\"lc_ctype\";\"en_US.UTF-8\"\"listen_addresses\";\"*\"\n\"log_destination\";\"stderr\"\"log_line_prefix\";\"%t \"\"logging_collector\";\"on\"\"max_connections\";\"100\"\n\n\"max_stack_depth\";\"2MB\"\"port\";\"5432\"\"server_encoding\";\"UTF8\"\"shared_buffers\";\"3GB\"\"temp_buffers\";\"64MB\"\n\"TimeZone\";\"Israel\"\"wal_buffers\";\"16MB\"\"work_mem\";\"20MB\"",
"msg_date": "Sun, 15 Apr 2012 15:43:27 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On 4/15/2012 7:43 AM, Eyal Wilde wrote:\n> hi,\n>\n> thanks a lot to all of you for your help.\n>\n> (i'm sorry i did not know how to reply to a certain message)\n>\n> i found that the best number of active connections is indeed 8-10. with\n> 8-10 active connections postgresql did ~170 \"account-id\"s. this is still\n> only half of what mssql did, but it now makes sence, considering that\n> mssql works close to twice faster.\n>\n> i \"played\" with work_mem, shared_buffers, temp_buffers. i ran the tests\n> with both of the following configurations, but no significant difference\n> was found.\n>\n> thanks again for any more help.\n>\n\nWe'd need to know if you are CPU bound or IO bound before we can help. \nWatch \"vmstat 2\" while the server is busy (and maybe post a few rows).\n\n\n> i had a stored procedure in ms-sql server. this stored procedure gets a parameter (account-id), dose about 20 queries, fills some temporary tables, and finally, returns a few result-sets. this stored procedure converted to stored function in postgresql (9.1). the result-sets are being returned using refcursors. this stored function is logically, almost identical to the ms-sql stored procedure.\n\nI think that may be a problem. Treating PG like its mssql wont work \nwell I'd bet. things that work well in one database may not work well \nin another.\n\nInstead of temp tables, have you tried derived tables? Instead of:\n\ninsert into temptable select * from stuff;\nselect * from temptable;\n\ntry something like:\n\nselect * from (\n select * from stuff\n) as subq\n\nAnother thing you might try to remove temp tables is to use views.\n\nI dont know if it'll be faster, I'm just guessing. Pulling out \nindividual parts and running \"explain analyze\" on them will help you \nfind the slow ones. Finding which is faster (temp tables, derived \ntables, or views) might help you deiced what needs to be re-written.\n\nAlso, I'm not sure how well PG does \"return multiple refcursors\". there \nmay be a lot of round trips from client to server to fetch next. How \nhard would it be to re-do your single procedure that returns a bunch of \nrefcursors into multiple procedures each returning one resultset?\n\nOr maybe it would be something you can speed test to see if it would \neven make a difference.\n\n-Andy\n",
"msg_date": "Mon, 16 Apr 2012 10:43:20 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
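To make the derived-table idea concrete, here is an illustrative before/after with hypothetical table and column names; it is only a sketch of the pattern, not the poster's actual code:

  -- temp-table pattern: one write plus one read
  INSERT INTO tmp_orders SELECT order_id, amount FROM orders WHERE account_id = 42;
  SELECT sum(amount) FROM tmp_orders;

  -- derived-table pattern: a single statement, no temp-table write
  SELECT sum(sub.amount)
    FROM (SELECT order_id, amount
            FROM orders
           WHERE account_id = 42) AS sub;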
{
"msg_contents": "On 15/04/12 13:43, Eyal Wilde wrote:\n>\n> \"version\";\"PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc\n> (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\"\n\nYou've probably checked this, but if not it's worth making sure your \ndisk I/O is roughly equivalent for the two operating-systems. It might \nbe poor drivers on the CentOs system.\n\nDo you have two equivalent machines, or are you dual-booting?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 16 Apr 2012 16:55:46 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
}
] |
[
{
"msg_contents": "Tom Lane wrote:\n> Benoit Delbosc<[email protected]> writes:\n>> On 13/04/2012 00:25, Tom Lane wrote:\n>>> Is there a reason why you're writing the query in such a\n>>> non-straightforward way, rather than just\n>>>\n>>> EXPLAIN ANALYZE SELECT hierarchy.id\n>>> FROM hierarchy\n>>> JOIN fulltext ON fulltext.id = hierarchy.id\n>>> WHERE (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext))\n>>> OR (TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext_title));\n>>\n>> This query is written by a framework, also I thought that is a common\n>> pattern that can be found in the documentation:\n>> http://www.postgresql.org/docs/9.1/interactive/textsearch-controls.html\n>\n> Well, \"common pattern\" would be stretching it. Anyway I've concluded\n> that this is in fact a planner bug. There will be a fix in 9.2, but I'm\n> not going to take the risk of back-patching it, so you might want to\n> think about changing that framework.\n\nFYI the reason why we have queries that look like what Benoit\ndescribes is that we often use the query alias twice, once for\nTO_TSVECTOR and once for TS_RANK_CD, for instance:\n\n SELECT hierarchy.id, TS_RANK_CD(fulltext, query1, 32) as nxscore\n FROM hierarchy\n JOIN fulltext ON fulltext.id = hierarchy.id,\n TO_TSQUERY('whatever') query1,\n TO_TSQUERY('whatever') query2\n WHERE (query1 @@ nx_to_tsvector(fulltext.fulltext)) OR (query2 @@\nnx_to_tsvector(fulltext.fulltext_title))\n ORDER BY nxscore DESC;\n\n(as is also described in the doc mentioned btw).\n\nFlorent\n\n-- \nFlorent Guillaume, Director of R&D, Nuxeo\nOpen Source, Java EE based, Enterprise Content Management (ECM)\nhttp://www.nuxeo.com http://www.nuxeo.org +33 1 40 33 79 87\n",
"msg_date": "Mon, 16 Apr 2012 15:11:47 +0200",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow fulltext query plan"
}
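Until running on a release with the planner fix, one workaround for the framework is to repeat TO_TSQUERY inline instead of aliasing it, while keeping the ranking. This is a sketch only; it computes the score on the same indexed expression used in the WHERE clause, which differs slightly from the original TS_RANK_CD(fulltext, ...) call:

  SELECT hierarchy.id,
         TS_RANK_CD(nx_to_tsvector(fulltext.fulltext),
                    TO_TSQUERY('whatever'), 32) AS nxscore
    FROM hierarchy
    JOIN fulltext ON fulltext.id = hierarchy.id
   WHERE TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext)
      OR TO_TSQUERY('whatever') @@ nx_to_tsvector(fulltext.fulltext_title)
   ORDER BY nxscore DESC;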
] |
[
{
"msg_contents": "Hello group!\n\nI have query like this:\n\nSELECT\n employments.candidate_id AS candidate_id,\n SUM(TS_RANK(employers.search_vector, TO_TSQUERY('simple', 'One:* |\nTwo:* | Three:* | Four:*'), 2)) AS ts_rank\nFROM\n employments\nINNER JOIN\n employers ON employments.employer_id = employers.id\nAND\n employers.search_vector @@ TO_TSQUERY('simple', 'One:* | Two:* |\nThree:* | Four:*')\nGROUP BY\n candidate_id;\n\nAnd it results with this:\n\nhttp://explain.depesz.com/s/jLM\n\nThe JOIN between employments and employers is the culprit. I'm unable\nto get rid of the seq scan, and setting enable_seqscan to off makes\nthings even worse.\n\nIs there any way to get rid of this JOIN?\n\nWhat info should I post to debug this easier?\n\nThanks!\n",
"msg_date": "Mon, 16 Apr 2012 07:02:13 -0700 (PDT)",
"msg_from": "Tomek Walkuski <[email protected]>",
"msg_from_op": true,
"msg_subject": "SeqScan with full text search"
},
{
"msg_contents": "On 16.4.2012 16:02, Tomek Walkuski wrote:\n> Hello group!\n> \n> I have query like this:\n> \n> SELECT\n> employments.candidate_id AS candidate_id,\n> SUM(TS_RANK(employers.search_vector, TO_TSQUERY('simple', 'One:* |\n> Two:* | Three:* | Four:*'), 2)) AS ts_rank\n> FROM\n> employments\n> INNER JOIN\n> employers ON employments.employer_id = employers.id\n> AND\n> employers.search_vector @@ TO_TSQUERY('simple', 'One:* | Two:* |\n> Three:* | Four:*')\n> GROUP BY\n> candidate_id;\n> \n> And it results with this:\n> \n> http://explain.depesz.com/s/jLM\n> \n> The JOIN between employments and employers is the culprit. I'm unable\n> to get rid of the seq scan, and setting enable_seqscan to off makes\n> things even worse.\n> \n> Is there any way to get rid of this JOIN?\n\nWell, it's clearly the seqscan that takes most time, and it seems that\nyou really need to scan the whole table because you're asking 'for each\nemployment of all the candidates ...'\n\nSo you really need to scan all 1.6 million rows to get the result. And\nseqscan is the best way to do that.\n\nI don't see a way to remove the join and/or seqscan, unless you want to\nkeep a 'materialized view' maintained by a trigger or something ...\n\nAnother option is to make the employment table as small as possible\n(e.g. removing columns that are not needed etc.) so that the seqscan is\nfaster.\n\n> \n> What info should I post to debug this easier?\n\n1) structures of the tables\n2) what amount of data are we talking about\n3) was this the first run (with cold caches) or have you run that\n several times?\n4) basic system info (RAM, CPU, shared_buffers etc.)\n\nTomas\n",
"msg_date": "Mon, 16 Apr 2012 18:19:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SeqScan with full text search"
},
{
"msg_contents": "On Mon, Apr 16, 2012 at 9:02 AM, Tomek Walkuski\n<[email protected]> wrote:\n> Hello group!\n>\n> I have query like this:\n>\n> SELECT\n> employments.candidate_id AS candidate_id,\n> SUM(TS_RANK(employers.search_vector, TO_TSQUERY('simple', 'One:* |\n> Two:* | Three:* | Four:*'), 2)) AS ts_rank\n> FROM\n> employments\n> INNER JOIN\n> employers ON employments.employer_id = employers.id\n> AND\n> employers.search_vector @@ TO_TSQUERY('simple', 'One:* | Two:* |\n> Three:* | Four:*')\n> GROUP BY\n> candidate_id;\n>\n> And it results with this:\n>\n> http://explain.depesz.com/s/jLM\n>\n> The JOIN between employments and employers is the culprit. I'm unable\n> to get rid of the seq scan, and setting enable_seqscan to off makes\n> things even worse.\n>\n> Is there any way to get rid of this JOIN?\n\nget rid of the join? the seq scan is natural since it seems like\nyou're querying the whole table, right? maybe if you explained the\nproblem in more detail (especially why you think the seq scan might\nnot be required)?\n\nmerlin\n",
"msg_date": "Mon, 16 Apr 2012 14:38:10 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SeqScan with full text search"
},
{
"msg_contents": "2012/4/16 Tomek Walkuski <[email protected]>\n\n> Hello group!\n>\n> I have query like this:\n>\n> SELECT\n> employments.candidate_id AS candidate_id,\n> SUM(TS_RANK(employers.search_vector, TO_TSQUERY('simple', 'One:* |\n> Two:* | Three:* | Four:*'), 2)) AS ts_rank\n> FROM\n> employments\n> INNER JOIN\n> employers ON employments.employer_id = employers.id\n> AND\n> employers.search_vector @@ TO_TSQUERY('simple', 'One:* | Two:* |\n> Three:* | Four:*')\n> GROUP BY\n> candidate_id;\n>\n> And it results with this:\n>\n> http://explain.depesz.com/s/jLM\n>\n> The JOIN between employments and employers is the culprit. I'm unable\n> to get rid of the seq scan, and setting enable_seqscan to off makes\n> things even worse.\n>\n> Is there any way to get rid of this JOIN?\n>\n>\nHave you got an index on employments.employer_id? It seems for me that only\nsome employments get out of join, so index would help here. What's the plan\nwith seq_scan off?\n\nP.S. I don't see why all employments are needed. May be I am reading\nsomething wrong? For me it's max 2616 employments out of 1606432.\n\nBest regards, Vitalii Tymchyshyn\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2012/4/16 Tomek Walkuski <[email protected]>\nHello group!\n\nI have query like this:\n\nSELECT\n employments.candidate_id AS candidate_id,\n SUM(TS_RANK(employers.search_vector, TO_TSQUERY('simple', 'One:* |\nTwo:* | Three:* | Four:*'), 2)) AS ts_rank\nFROM\n employments\nINNER JOIN\n employers ON employments.employer_id = employers.id\nAND\n employers.search_vector @@ TO_TSQUERY('simple', 'One:* | Two:* |\nThree:* | Four:*')\nGROUP BY\n candidate_id;\n\nAnd it results with this:\n\nhttp://explain.depesz.com/s/jLM\n\nThe JOIN between employments and employers is the culprit. I'm unable\nto get rid of the seq scan, and setting enable_seqscan to off makes\nthings even worse.\n\nIs there any way to get rid of this JOIN?Have you got an index on employments.employer_id? It seems for me that only some employments get out of join, so index would help here. What's the plan with seq_scan off?\nP.S. I don't see why all employments are needed. May be I am reading something wrong? For me it's max 2616 employments out of 1606432.Best regards, Vitalii Tymchyshyn\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Tue, 17 Apr 2012 20:12:26 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SeqScan with full text search"
}
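If the employments.employer_id index really is missing, it is cheap to try (the index name below is illustrative); re-running EXPLAIN ANALYZE afterwards would show whether the planner abandons the seq scan for the selective case:

  CREATE INDEX employments_employer_id_idx ON employments (employer_id);
  ANALYZE employments;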
] |
[
{
"msg_contents": "Hi,\n\nthanks for the suggestion, but it didn't help. We have tried it earlier.\n\n7500ms\nhttp://explain.depesz.com/s/ctn\n\nALTER TABLE product_parent ALTER COLUMN parent_name SET STATISTICS 1000;\nALTER TABLE product ALTER COLUMN parent_id SET STATISTICS 1000;\nANALYZE product_parent;\nANALYZE product;\n\nquery was:\nselect distinct product_code from product p_\ninner join product_parent par_ on p_.parent_id=par_.id\nwhere par_.parent_name like 'aa%' limit 2\n\n\ni've played with the query, and found an interesting behaviour: its\nspeed depends on value of limit:\nselect ... limit 2; => 1500ms\nselect ... limit 20; => 14ms (http://explain.depesz.com/s/4iL)\nselect ... limit 50; => 17ms\n\nThese were with high effective_cache_size (6GB). Somehow it uses good\nplanning in these cases.\nIf it helps i can send the db to repro (53M).\n\nAny tips to try?\nThanks in advance,\nIstvan\n",
"msg_date": "Tue, 17 Apr 2012 11:49:57 +0200",
"msg_from": "Istvan Endredy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
},
{
"msg_contents": "On 4/17/12 2:49 AM, Istvan Endredy wrote:\n> Hi,\n> \n> thanks for the suggestion, but it didn't help. We have tried it earlier.\n> \n> 7500ms\n> http://explain.depesz.com/s/ctn\n\nThis plan seems very odd -- doing individual index lookups on 2.8m rows\nis not standard planner behavior. Can you confirm that all of your\nother query cost parameters are the defaults?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 18 Apr 2012 17:44:42 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
},
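One direct way to answer that question is to list every planner cost parameter that is not at its default. A sketch against pg_settings:

  SELECT name, setting, boot_val, source
    FROM pg_settings
   WHERE (name LIKE '%cost%' OR name = 'effective_cache_size')
     AND source <> 'default';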
{
"msg_contents": "On Thu, Apr 19, 2012 at 3:44 AM, Josh Berkus <[email protected]>\n>> 7500ms\n>> http://explain.depesz.com/s/\n> This plan seems very odd -- doing individual index lookups on 2.8m rows\n> is not standard planner behavior. Can you confirm that all of your\n> other query cost parameters are the defaults?\n\nThis similat to the issue with limit that Simon was complaining about\na few weeks ago [1]. A lot of the estimation in the planner is biased\nto give overestimations for number of rows returned in the face of\nuncertainty. This works well for joins but interacts really badly with\nlimits. The two issues here are the join cardinality being\noverestimated a factor of 15x and then the unique is off by another\n50x. The result is that the planner thinks that it needs to scan 0.25%\nof the input, while actually it needs to scan the whole of it,\nunderestimating the real cost by a factor of 400.\n\nI'm not sure what to do about unique node overestimation, but I think\nit could be coaxed to be less optimistic about the limit by adding an\noptimization barrier and some selectivity decreasing clauses between\nthe limit and the rest of the query:\n\nselect * from (\n select distinct product_code from product p_\n inner join product_parent par_ on p_.parent_id=par_.id\n where par_.parent_name like 'aa%'\n offset 0 -- optimization barrier\n) as x\nwhere product_code = product_code -- reduce selectivity estimate by 200x\nlimit 2;\n\n[1] http://archives.postgresql.org/message-id/CA+U5nMLbXfUT9cWDHJ3tpxjC3bTWqizBKqTwDgzebCB5bAGCgg@mail.gmail.com\n\nCheers,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Thu, 19 Apr 2012 13:32:50 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
},
{
"msg_contents": "Hi everybody,\n\nthanks for the so many responses. :)\n\n\n> On Thu, Apr 19, 2012 at 3:44 AM, Josh Berkus <[email protected]>\n>>> 7500ms\n>>> http://explain.depesz.com/s/\n>> This plan seems very odd -- doing individual index lookups on 2.8m rows\n>> is not standard planner behavior. Can you confirm that all of your\n>> other query cost parameters are the defaults?\n\nJosh: i confirm the non-default values:\ni've ran this query: http://wiki.postgresql.org/wiki/Server_Configuration\nits result:\n\"version\";\"PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by\ngcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\"\n\"bytea_output\";\"escape\"\n\"client_encoding\";\"UNICODE\"\n\"client_min_messages\";\"log\"\n\"effective_cache_size\";\"6GB\"\n\"lc_collate\";\"en_US.UTF-8\"\n\"lc_ctype\";\"en_US.UTF-8\"\n\"listen_addresses\";\"*\"\n\"log_directory\";\"/var/log/postgres\"\n\"log_duration\";\"on\"\n\"log_error_verbosity\";\"default\"\n\"log_filename\";\"postgresql-%Y-%m-%d.log\"\n\"log_line_prefix\";\"%t %u@%h %d %p %i \"\n\"log_lock_waits\";\"on\"\n\"log_min_duration_statement\";\"0\"\n\"log_min_error_statement\";\"warning\"\n\"log_min_messages\";\"warning\"\n\"log_rotation_age\";\"15d\"\n\"log_statement\";\"all\"\n\"logging_collector\";\"on\"\n\"max_connections\";\"100\"\n\"max_stack_depth\";\"2MB\"\n\"port\";\"5432\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"6024MB\"\n\"TimeZone\";\"Europe/Budapest\"\n\"wal_buffers\";\"16MB\"\n\n\nAnts:\n> I'm not sure what to do about unique node overestimation, but I think\n> it could be coaxed to be less optimistic about the limit by adding an\n> optimization barrier and some selectivity decreasing clauses between\n> the limit and the rest of the query:\n>\n> select * from (\n> select distinct product_code from product p_\n> inner join product_parent par_ on p_.parent_id=par_.id\n> where par_.parent_name like 'aa%'\n> offset 0 -- optimization barrier\n> ) as x\n> where product_code = product_code -- reduce selectivity estimate by 200x\n> limit 2;\n\nIts planning: http://explain.depesz.com/s/eF3h\n1700ms\n\nВіталій:\n> How about\n>\n> with par_ as (select * from product_parent where parent_name like 'aa%' )\n> select distinct product_code from product p_\n> inner join par_ on p_.parent_id=par_.id\n> limit 2\n\nIts planning: http://explain.depesz.com/s/YIS\n\nAll suggestions are welcome,\nIstvan\n",
"msg_date": "Thu, 19 Apr 2012 16:15:09 +0200",
"msg_from": "Istvan Endredy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad planning with 75% effective_cache_size"
}
] |
[
{
"msg_contents": "hi all,\n\ni ran vmstat during the test :\n\n[yb@centos08 ~]$ vmstat 1 15\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 0 0 0 6131400 160556 1115792 0 0 1 12 22 17 0 0\n100 0 0\n 0 0 0 6131124 160556 1115800 0 0 0 532 540 360 1 0\n99 0 0\n 5 1 0 6127852 160556 1116048 0 0 0 3352 1613 1162 18 1\n80 1 0\n 7 0 0 6122984 160556 1117312 0 0 0 14608 5408 3703 86 7\n 6 1 0\n 8 0 0 6121372 160556 1117968 0 0 0 13424 5434 3741 86 7\n 5 2 0\n 7 1 0 6120504 160556 1118952 0 0 0 13616 5296 3546 86 7\n 5 2 0\n 7 0 0 6119528 160572 1119728 0 0 0 13836 5494 3597 86 7\n 4 2 0\n 6 1 0 6118744 160572 1120408 0 0 0 15296 5552 3869 89 8\n 3 1 0\n 2 0 0 6118620 160572 1120288 0 0 0 13792 4548 3054 63 6\n25 6 0\n 0 0 0 6118620 160572 1120392 0 0 0 3552 1090 716 8 1\n88 3 0\n 0 0 0 6118736 160572 1120392 0 0 0 1136 787 498 1 0\n98 1 0\n 0 0 0 6118868 160580 1120400 0 0 0 28 348 324 1 0\n99 0 0\n 0 0 0 6118992 160580 1120440 0 0 0 380 405 347 1 0\n99 1 0\n 0 0 0 6118868 160580 1120440 0 0 0 1544 468 320 1 0\n100 0 0\n 0 0 0 6118720 160580 1120440 0 0 0 0 382 335 0 0\n99 0 0\n\n\nthe temp-tables normally don't populate more then 10 rows. they are being\ncreated in advanced. we don't drop them, we use ON COMMIT DELETE ROWS. i\nbelieve temp-tables are in the RAM, so no disk-i/o, right? and also: no\nwriting to the system catalogs, right?\n\nabout returning multiple refcursors, we checked this issue in the past, and\nwe concluded that returning many small refcursors (all have the same\nstructure), is faster than returning 1 big refcursor. dose it sound wired\n(maybe it worth more tests)? that's why we took that path.\n\nabout having multiple procedures each returning one resultset: it's too\nmuch code rewrite at the web-server's code.\n\nthe disk system is a built-in intel fake-raid, configured as raid0. i do a\ndual-boot, so both windows and centos are on the same hardware.\n\nThanks again for any more help.\n\nhi all,i ran vmstat during the test :[yb@centos08 ~]$ vmstat 1 15procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 6131400 160556 1115792 0 0 1 12 22 17 0 0 100 0 0 \n 0 0 0 6131124 160556 1115800 0 0 0 532 540 360 1 0 99 0 0 5 1 0 6127852 160556 1116048 0 0 0 3352 1613 1162 18 1 80 1 0 \n 7 0 0 6122984 160556 1117312 0 0 0 14608 5408 3703 86 7 6 1 0 8 0 0 6121372 160556 1117968 0 0 0 13424 5434 3741 86 7 5 2 0 \n 7 1 0 6120504 160556 1118952 0 0 0 13616 5296 3546 86 7 5 2 0 7 0 0 6119528 160572 1119728 0 0 0 13836 5494 3597 86 7 4 2 0 \n 6 1 0 6118744 160572 1120408 0 0 0 15296 5552 3869 89 8 3 1 0 2 0 0 6118620 160572 1120288 0 0 0 13792 4548 3054 63 6 25 6 0 \n 0 0 0 6118620 160572 1120392 0 0 0 3552 1090 716 8 1 88 3 0 0 0 0 6118736 160572 1120392 0 0 0 1136 787 498 1 0 98 1 0 \n 0 0 0 6118868 160580 1120400 0 0 0 28 348 324 1 0 99 0 0 0 0 0 6118992 160580 1120440 0 0 0 380 405 347 1 0 99 1 0 \n 0 0 0 6118868 160580 1120440 0 0 0 1544 468 320 1 0 100 0 0 0 0 0 6118720 160580 1120440 0 0 0 0 382 335 0 0 99 0 0 \nthe temp-tables normally don't populate more then 10 rows. they are being created in advanced. we don't drop them, we use ON COMMIT DELETE ROWS. i believe temp-tables are in the RAM, so no disk-i/o, right? 
and also: no writing to the system catalogs, right?\nabout returning multiple refcursors, we checked this issue in the past, and we concluded that returning many small refcursors (all have the same structure), is faster than returning 1 big refcursor. dose it sound wired (maybe it worth more tests)? that's why we took that path. \nabout having multiple procedures each returning one resultset: it's too much code rewrite at the web-server's code.the disk system is a built-in intel fake-raid, configured as raid0. i do a dual-boot, so both windows and centos are on the same hardware.\nThanks again for any more help.",
"msg_date": "Wed, 18 Apr 2012 10:32:07 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
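For reference, the setup described above - temp tables created once per connection and emptied at commit - looks roughly like this (table name and columns are hypothetical). Note that small temp tables normally live in temp_buffers, but they can still spill to disk if they outgrow it:

  CREATE TEMPORARY TABLE tmp_results (
      item_id  integer,
      score    numeric
  ) ON COMMIT DELETE ROWS;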
{
"msg_contents": "On 4/18/2012 2:32 AM, Eyal Wilde wrote:\n> hi all,\n>\n> i ran vmstat during the test :\n>\n> [yb@centos08 ~]$ vmstat 1 15\n> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy id wa st\n> 2 0 0 6118620 160572 1120288 0 0 0 13792 4548 3054 63 \n 6 25 6 0\n> the temp-tables normally don't populate more then 10 rows. they are\n> being created in advanced. we don't drop them, we use ON COMMIT DELETE\n> ROWS. i believe temp-tables are in the RAM, so no disk-i/o, right? and\n> also: no writing to the system catalogs, right?\n\nTemp tables are not 100% ram, they might spill to disk. The vmstat shows \nthere is disk io. The BO column (blocks out) shows you are writing to \ndisk. And you have wait time (which means one or more of the cpus is \nstopped waiting for disk).\n\nI don't know if the disk io is because of the temp tables (I've never \nused them myself), or something else (stats, vacuum, logs, other sql, etc).\n\nI'd bet, though, that a derived table would be faster than \"create temp \ntable...; insert into temp .... ; select .. from temp;\"\n\nOf course it may not be that much faster... and it might require a lot \nof code change. Might be worth a quick benchmark though.\n\n>\n> about returning multiple refcursors, we checked this issue in the past,\n> and we concluded that returning many small refcursors (all have the same\n> structure), is faster than returning 1 big refcursor. dose it sound\n> wired (maybe it worth more tests)? that's why we took that path.\n>\n\nNo, if you tried it out, I'd stick with what you have. I've never used \nthem myself, so I was just wondering aloud.\n\n-Andy\n",
"msg_date": "Wed, 18 Apr 2012 14:47:49 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Wed, Apr 18, 2012 at 2:32 AM, Eyal Wilde <[email protected]> wrote:\n> hi all,\n>\n> i ran vmstat during the test :\n>\n> [yb@centos08 ~]$ vmstat 1 15\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 0 0 0 6131400 160556 1115792 0 0 1 12 22 17 0 0\n> 100 0 0\n> 0 0 0 6131124 160556 1115800 0 0 0 532 540 360 1 0 99\n> 0 0\n> 5 1 0 6127852 160556 1116048 0 0 0 3352 1613 1162 18 1 80\n> 1 0\n> 7 0 0 6122984 160556 1117312 0 0 0 14608 5408 3703 86 7 6\n> 1 0\n> 8 0 0 6121372 160556 1117968 0 0 0 13424 5434 3741 86 7 5\n> 2 0\n> 7 1 0 6120504 160556 1118952 0 0 0 13616 5296 3546 86 7 5\n> 2 0\n> 7 0 0 6119528 160572 1119728 0 0 0 13836 5494 3597 86 7 4\n> 2 0\n> 6 1 0 6118744 160572 1120408 0 0 0 15296 5552 3869 89 8 3\n> 1 0\n> 2 0 0 6118620 160572 1120288 0 0 0 13792 4548 3054 63 6 25\n> 6 0\n> 0 0 0 6118620 160572 1120392 0 0 0 3552 1090 716 8 1 88\n> 3 0\n> 0 0 0 6118736 160572 1120392 0 0 0 1136 787 498 1 0 98\n> 1 0\n> 0 0 0 6118868 160580 1120400 0 0 0 28 348 324 1 0 99\n> 0 0\n> 0 0 0 6118992 160580 1120440 0 0 0 380 405 347 1 0 99\n> 1 0\n> 0 0 0 6118868 160580 1120440 0 0 0 1544 468 320 1 0\n> 100 0 0\n> 0 0 0 6118720 160580 1120440 0 0 0 0 382 335 0 0 99\n> 0 0\n>\n>\n> the temp-tables normally don't populate more then 10 rows. they are being\n> created in advanced. we don't drop them, we use ON COMMIT DELETE ROWS. i\n> believe temp-tables are in the RAM, so no disk-i/o, right? and also: no\n> writing to the system catalogs, right?\n>\n> about returning multiple refcursors, we checked this issue in the past, and\n> we concluded that returning many small refcursors (all have the same\n> structure), is faster than returning 1 big refcursor. dose it sound wired\n> (maybe it worth more tests)? that's why we took that path.\n\nno chance of seeing the code or a reasonable reproduction?\n\nmerlin\n",
"msg_date": "Wed, 18 Apr 2012 15:13:47 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
}
] |
[
{
"msg_contents": "I'm a n00b [1] to tuning DBs so if anyone has a bit of time to provide\nfeedback, I'd sure appreciate any input the community might have on the\nplan, configuration, etc. I could very well have unintentionally left-off\ncrucial parts of my descriptions below and for that I apologize for wasting\ntime - let me know what I missed and I'll do my best to dig it up.\n\nWe are planning to rebuild our production 50GB PG 9.0 database serving our\napplication platform on the new hardware below. The web-applications are\n80/20 read/write and the data gateways are even mix 50/50 read/write; one\nof the gateways nightly exports & imports ~20% of our data. All\napplications use a single DB but the applications themselves run on 6\ndifferent machines.\n\nThe new hardware for the 50GB PG 9.0 machine is:\n * 24 cores across 2 sockets\n * 64 GB RAM\n * 10 x 15k SAS drives on SAN\n * 1 x 15k SAS drive local\n * CentOS 6.2 (2.6.32 kernel)\n\nWe are considering the following drive allocations:\n\n * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data\n * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n * 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog\n * 1 x 15k SAS drive, XFS, on local storage for OS\n\nOS:\n PAGE_SIZE = 4096\n _PHYS_PAGES = 12,352,666 (commas added for clarity)\n kernel.shmall = 4,294,967,296 (commas added for clarity)\n kernel.shmax = 68,719,476,736 (commas added for clarity)\n kernel.sem = 250 32000 32 128\n vm.swappiness = 0\n dirty_ratio = 10\n dirty_background_ratio = 5\n\nTo validate the configuration, I plan to use memtest86+, dd, bonnie++, and\nbonnie++ ZCAV.\n\nIf there are \"obviously correct\" choices in PG configuration, this would be\ntremendously helpful information to me. I'm planning on using pgbench to\ntest the configuration options.\n\nThoughts?\n\n\nCheers,\n\nJan\n\n[1] I'm applying what I learned from PostgreSQL 9.0 High Performance by\nGregory Smith, along with numerous web sites and list postings.\n\nI'm a n00b [1] to tuning DBs so if anyone has a bit of time to provide feedback, I'd sure appreciate any input the community might have on the plan, configuration, etc. I could very well have unintentionally left-off crucial parts of my descriptions below and for that I apologize for wasting time - let me know what I missed and I'll do my best to dig it up.\nWe are planning to rebuild our production 50GB PG 9.0 database serving our application platform on the new hardware below. The web-applications are 80/20 read/write and the data gateways are even mix 50/50 read/write; one of the gateways nightly exports & imports ~20% of our data. 
All applications use a single DB but the applications themselves run on 6 different machines.\nThe new hardware for the 50GB PG 9.0 machine is: * 24 cores across 2 sockets * 64 GB RAM * 10 x 15k SAS drives on SAN * 1 x 15k SAS drive local * CentOS 6.2 (2.6.32 kernel)\nWe are considering the following drive allocations: * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n * 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog * 1 x 15k SAS drive, XFS, on local storage for OSOS: PAGE_SIZE = 4096 _PHYS_PAGES = 12,352,666 (commas added for clarity)\n kernel.shmall = 4,294,967,296 (commas added for clarity) kernel.shmax = 68,719,476,736 (commas added for clarity) kernel.sem = 250 32000 32 128 vm.swappiness = 0 dirty_ratio = 10\n dirty_background_ratio = 5To validate the configuration, I plan to use memtest86+, dd, bonnie++, and bonnie++ ZCAV.If there are \"obviously correct\" choices in PG configuration, this would be tremendously helpful information to me. I'm planning on using pgbench to test the configuration options.\nThoughts?Cheers,Jan[1] I'm applying what I learned from PostgreSQL 9.0 High Performance by Gregory Smith, along with numerous web sites and list postings.",
"msg_date": "Mon, 23 Apr 2012 20:56:47 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configuration Recommendations"
},
{
"msg_contents": "On Tue, Apr 24, 2012 at 4:56 AM, Jan Nielsen\n<[email protected]> wrote:\n> We are considering the following drive allocations:\n>\n> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data\n> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n> * 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog\n> * 1 x 15k SAS drive, XFS, on local storage for OS\n\nIs it established practice in the Postgres world to separate indexes\nfrom tables? I would assume that the reasoning of Richard Foote -\nalbeit for Oracle databases - is also true for Postgres:\n\nhttp://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\nhttp://richardfoote.wordpress.com/2008/04/18/separate-indexes-from-tables-some-thoughts-part-ii-there-there/\nhttp://richardfoote.wordpress.com/2008/04/28/indexes-in-their-own-tablespace-availabilty-advantages-is-there-anybody-out-there/\n\nConversely if you lump both on a single volume you have more\nflexibility with regard to usage - unless of course you can\ndynamically resize volumes.\n\nTo me it also seems like a good idea to mirror local disk with OS and\ndatabase software because if that fails you'll get downtime as well.\nAs of now you have a single point of failure there.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Tue, 24 Apr 2012 07:53:30 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
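For completeness, the mechanism for separating data and indexes in PostgreSQL is tablespaces; the paths and object names below are hypothetical, and whether the split is worth doing at all is exactly the question Richard Foote's articles address:

  -- directories must already exist and be owned by the postgres user
  CREATE TABLESPACE ts_data  LOCATION '/san/pg_data';
  CREATE TABLESPACE ts_index LOCATION '/san/pg_index';

  CREATE TABLE accounts (
      id    bigserial PRIMARY KEY,
      name  text
  ) TABLESPACE ts_data;

  CREATE INDEX accounts_name_idx ON accounts (name) TABLESPACE ts_index;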
{
"msg_contents": "On 04/23/2012 09:56 PM, Jan Nielsen wrote:\n\n> The new hardware for the 50GB PG 9.0 machine is:\n> * 24 cores across 2 sockets\n> * 64 GB RAM\n> * 10 x 15k SAS drives on SAN\n> * 1 x 15k SAS drive local\n> * CentOS 6.2 (2.6.32 kernel)\n\nThis is a pretty good build. Nice and middle-of-the-road for current \nhardware. I think it's probably relevant what your \"24 cores across 2 \nsockets\" are, though. Then again, based on the 24-cores, I have to \nassume you've got hex-core Xeons of some sort, with hyperthreading. That \nsuggests a higher end Sandy Bridge Xeon, like the X5645 or higher. If \nthat's the case, you're in good hands.\n\nAs a note, though... make sure you enable Turbo and other performance \nsettings (disable power-down of unused CPUs, etc) in the BIOS when \nsetting this up. We found that the defaults for the CPUs did not allow \nprocessor scaling, and it was far too aggressive in cycling down cores, \nsuch that cycling them back up had a non-zero cost. We saw roughly a 20% \nimprovement by forcing the CPUs into full online performance mode.\n\n> We are considering the following drive allocations:\n>\n> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data\n> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n> * 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog\n> * 1 x 15k SAS drive, XFS, on local storage for OS\n\nPlease don't do this. If you have the system you just described, give \nyourself an 8x RAID10, and the 2x RAID1. I've found that your indexes \nwill generally be about 1/3 to 1/2 the total sixe of your database. So, \nnot only does your data partition lose read spindles, but you've wasted \n1/2 to 2/3s of your active drive space. This may not be a concern based \non your data growth curves, but it could be.\n\nIn addition, add another OS drive and put it into a RAID-1. If you have \nserver-class hardware, you'll want that extra drive. I'm frankly \nsurprised you were even able to acquire a dual Xeon class server without \na RAID-1 for OS data by default.\n\nI'm not sure if you've done metrics or not, but XFS performance is \nhighly dependent on your init and mount options. I can give you some \nguidelines there, but one of the major changes is that the Linux 3.X \nkernels have some impressive performance improvements you won't see \nusing CentOS 6.2. Metadata in particular has undergone a massive upgrade \nthat drastically enhances its parallel scalability on metadata \nmodifications.\n\nIf possible, you might consider the new Ubuntu 12.04 LTS that's coming \nout soon. It should have the newer XFS performance. If not, consider \ninjecting a newer kernel to the CentOS 6.2 install. And again, testing \nis the only way to know for sure.\n\nAnd test with pgbench, if possible. I used this to get our XFS init and \nmount options, along with other OS/kernel settings. You can have very \ndifferent performance metrics from dd/bonnie than an actual use pattern \nfrom real DB usage. As a hint, before you run any of these tests, both \nwrite a '3' to /proc/sys/vm/drop_caches, and restart your PG instance. \nYou want to test your drives, not your memory. :)\n\n> kernel.shmall = 4,294,967,296 (commas added for clarity)\n> kernel.shmax = 68,719,476,736 (commas added for clarity)\n> kernel.sem = 250 32000 32 128\n> vm.swappiness = 0\n> dirty_ratio = 10\n> dirty_background_ratio = 5\n\nGood. Though you might consider lowering dirty_background_ratio. At that \nsetting, it won't even try to write out data until you have about 3GB of \ndirty pages. 
Even high-end disk controllers only have 1GB of local \ncapacitor-backed cache. If you really do have a good SAN, it probably \nhas more than that, but try to induce a high-turnover database test to \nsee what happens during heavy IO. Like, a heavy long-running PG-bench \nshould invoke several checkpoints and also flood the local write cache. \nWhen that happens, monitor /proc/meminfo. Like this:\n\ngrep -A1 Dirty /proc/meminfo\n\nThat will tell you how much of your memory is dirty, but the 'Writeback' \nentry is what you care about. If you see that as a non-zero value for \nmore than one consecutive check, you've saturated your write bandwidth \nto the point performance will suffer. But the only way you can really \nknow any of this is with testing. Some SANs scale incredibly well to \nlarge pool flushes, and others don't.\n\nAlso, make iostat your friend. Particularly with the -x option. During \nyour testing, keep one of these running in the background for the \ndevices on your SAN. Watch your %util column in particular. Graph it, if \nyou can. You can almost build a complete performance profile for \ndifferent workloads before you put a single byte of real data on this \nhardware.\n\n> If there are \"obviously correct\" choices in PG configuration, this would\n> be tremendously helpful information to me. I'm planning on using pgbench\n> to test the configuration options.\n\nYou sound like you've read up on this quite a bit. Greg's book is a very \ngood thing to have and learn from. It'll cover all the basics about the \npostgresql.conf file. I don't see how I could add much to that, so just \npay attention to what he says. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 24 Apr 2012 14:32:49 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
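As a rough illustration of the monitoring described in the message above, the shell sketch below logs per-device utilization and the kernel's Dirty/Writeback counters while a pgbench run is underway. The device names (sdb, sde) and the 5-second interval are assumptions; substitute whatever your SAN volumes are actually called.

    # Assumed device names: sdb = data volume, sde = pg_xlog volume.
    iostat -x sdb sde 5 > iostat-profile.log &

    # 'Writeback' staying non-zero across consecutive samples means the
    # write path (controller/SAN cache) is saturated.
    while true; do
        date
        grep -A1 Dirty /proc/meminfo
        sleep 5
    done > meminfo-profile.log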
{
"msg_contents": "On Tue, Apr 24, 2012 at 1:32 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 04/23/2012 09:56 PM, Jan Nielsen wrote:\n>\n> The new hardware for the 50GB PG 9.0 machine is:\n>> * 24 cores across 2 sockets\n>> * 64 GB RAM\n>> * 10 x 15k SAS drives on SAN\n>> * 1 x 15k SAS drive local\n>> * CentOS 6.2 (2.6.32 kernel)\n>>\n>\n> This is a pretty good build. Nice and middle-of-the-road for current\n> hardware. I think it's probably relevant what your \"24 cores across 2\n> sockets\" are, though. Then again, based on the 24-cores, I have to assume\n> you've got hex-core Xeons of some sort, with hyperthreading. That suggests\n> a higher end Sandy Bridge Xeon, like the X5645 or higher. If that's the\n> case, you're in good hands.\n>\n\nThe processors are Intel(R) Xeon(R) CPU X5650 @ 2.67GHz.\n\n\n> As a note, though... make sure you enable Turbo and other performance\n> settings (disable power-down of unused CPUs, etc) in the BIOS when setting\n> this up. We found that the defaults for the CPUs did not allow processor\n> scaling, and it was far too aggressive in cycling down cores, such that\n> cycling them back up had a non-zero cost. We saw roughly a 20% improvement\n> by forcing the CPUs into full online performance mode.\n\n\nIs there a way to tell what the BIOS power-down settings are for the cores\nfrom the CLI?\n\n\n> We are considering the following drive allocations:\n>\n>>\n>> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data\n>> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n>> * 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog\n>> * 1 x 15k SAS drive, XFS, on local storage for OS\n>>\n>\n> Please don't do this. If you have the system you just described, give\n> yourself an 8x RAID10, and the 2x RAID1. I've found that your indexes will\n> generally be about 1/3 to 1/2 the total sixe of your database. So, not only\n> does your data partition lose read spindles, but you've wasted 1/2 to 2/3s\n> of your active drive space. This may not be a concern based on your data\n> growth curves, but it could be.\n>\n\nAfter reading Richard Foote's articles that Robert Klemme referenced in the\nprevious post, I'm convinced.\n\n\n> In addition, add another OS drive and put it into a RAID-1. If you have\n> server-class hardware, you'll want that extra drive. I'm frankly surprised\n> you were even able to acquire a dual Xeon class server without a RAID-1 for\n> OS data by default.\n>\n\nAgreed.\n\n\n> I'm not sure if you've done metrics or not, but XFS performance is highly\n> dependent on your init and mount options. I can give you some guidelines\n> there, but one of the major changes is that the Linux 3.X kernels have some\n> impressive performance improvements you won't see using CentOS 6.2.\n> Metadata in particular has undergone a massive upgrade that drastically\n> enhances its parallel scalability on metadata modifications.\n>\n\nAlas, a 3.x Linux kernel would be nice but I'm stuck with CentOS 6.2 on\n2.6.32. I would very much appreciate any guidelines you can provide.\n\n\n> If possible, you might consider the new Ubuntu 12.04 LTS that's coming out\n> soon. It should have the newer XFS performance. If not, consider injecting\n> a newer kernel to the CentOS 6.2 install. And again, testing is the only\n> way to know for sure.\n>\n> And test with pgbench, if possible. I used this to get our XFS init and\n> mount options, along with other OS/kernel settings.\n\n\nYes; that does seem important. 
I found this:\n\n\nhttp://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD\n\nwhich and while I was planning to set 'noatime', I'm a bit stumped on most\nof the rest. Anyone with comparable hardware willing to share their\nsettings as a starting point for my testing?\n\n\n> You can have very different performance metrics from dd/bonnie than an\n> actual use pattern from real DB usage. As a hint, before you run any of\n> these tests, both write a '3' to /proc/sys/vm/drop_caches, and restart your\n> PG instance. You want to test your drives, not your memory. :)\n>\n>\n> kernel.shmall = 4,294,967,296 (commas added for clarity)\n>> kernel.shmax = 68,719,476,736 (commas added for clarity)\n>> kernel.sem = 250 32000 32 128\n>> vm.swappiness = 0\n>> dirty_ratio = 10\n>> dirty_background_ratio = 5\n>>\n>\n> Good. Though you might consider lowering dirty_background_ratio. At that\n> setting, it won't even try to write out data until you have about 3GB of\n> dirty pages. Even high-end disk controllers only have 1GB of local\n> capacitor-backed cache. If you really do have a good SAN, it probably has\n> more than that, but try to induce a high-turnover database test to see what\n> happens during heavy IO. Like, a heavy long-running PG-bench should invoke\n> several checkpoints and also flood the local write cache. When that\n> happens, monitor /proc/meminfo. Like this:\n>\n> grep -A1 Dirty /proc/meminfo\n>\n> That will tell you how much of your memory is dirty, but the 'Writeback'\n> entry is what you care about. If you see that as a non-zero value for more\n> than one consecutive check, you've saturated your write bandwidth to the\n> point performance will suffer. But the only way you can really know any of\n> this is with testing. Some SANs scale incredibly well to large pool\n> flushes, and others don't.\n>\n> Also, make iostat your friend. Particularly with the -x option. During\n> your testing, keep one of these running in the background for the devices\n> on your SAN. Watch your %util column in particular. Graph it, if you can.\n> You can almost build a complete performance profile for different workloads\n> before you put a single byte of real data on this hardware.\n>\n>\n> If there are \"obviously correct\" choices in PG configuration, this would\n>> be tremendously helpful information to me. I'm planning on using pgbench\n>> to test the configuration options.\n>>\n>\n> You sound like you've read up on this quite a bit. Greg's book is a very\n> good thing to have and learn from. It'll cover all the basics about the\n> postgresql.conf file. I don't see how I could add much to that, so just pay\n> attention to what he says. :)\n>\n\nI'm doing my best but the numbers will tell the story. :-)\n\nThanks for your review and feedback, Shaun.\n\n\nCheers,\n\nJan\n\n\n\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n>\n> ______________________________**________________\n>\n> See http://www.peak6.com/email_**disclaimer/<http://www.peak6.com/email_disclaimer/>for terms and conditions related to this email\n>\n\nOn Tue, Apr 24, 2012 at 1:32 PM, Shaun Thomas <[email protected]> wrote:\nOn 04/23/2012 09:56 PM, Jan Nielsen wrote:\n\n\nThe new hardware for the 50GB PG 9.0 machine is:\n* 24 cores across 2 sockets\n* 64 GB RAM\n* 10 x 15k SAS drives on SAN\n* 1 x 15k SAS drive local\n* CentOS 6.2 (2.6.32 kernel)\n\n\nThis is a pretty good build. 
Nice and middle-of-the-road for current hardware. I think it's probably relevant what your \"24 cores across 2 sockets\" are, though. Then again, based on the 24-cores, I have to assume you've got hex-core Xeons of some sort, with hyperthreading. That suggests a higher end Sandy Bridge Xeon, like the X5645 or higher. If that's the case, you're in good hands.\nThe processors are Intel(R) Xeon(R) CPU X5650 @ 2.67GHz. \n\nAs a note, though... make sure you enable Turbo and other performance settings (disable power-down of unused CPUs, etc) in the BIOS when setting this up. We found that the defaults for the CPUs did not allow processor scaling, and it was far too aggressive in cycling down cores, such that cycling them back up had a non-zero cost. We saw roughly a 20% improvement by forcing the CPUs into full online performance mode.\nIs there a way to tell what the BIOS power-down settings are for the cores from the CLI? \nWe are considering the following drive allocations:\n\n* 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data\n* 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n* 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog\n* 1 x 15k SAS drive, XFS, on local storage for OS\n\n\nPlease don't do this. If you have the system you just described, give yourself an 8x RAID10, and the 2x RAID1. I've found that your indexes will generally be about 1/3 to 1/2 the total sixe of your database. So, not only does your data partition lose read spindles, but you've wasted 1/2 to 2/3s of your active drive space. This may not be a concern based on your data growth curves, but it could be.\nAfter reading Richard Foote's articles that Robert Klemme referenced in the previous post, I'm convinced. \n\n\nIn addition, add another OS drive and put it into a RAID-1. If you have server-class hardware, you'll want that extra drive. I'm frankly surprised you were even able to acquire a dual Xeon class server without a RAID-1 for OS data by default.\nAgreed. \n\nI'm not sure if you've done metrics or not, but XFS performance is highly dependent on your init and mount options. I can give you some guidelines there, but one of the major changes is that the Linux 3.X kernels have some impressive performance improvements you won't see using CentOS 6.2. Metadata in particular has undergone a massive upgrade that drastically enhances its parallel scalability on metadata modifications.\nAlas, a 3.x Linux kernel would be nice but I'm stuck with CentOS 6.2 on 2.6.32. I would very much appreciate any guidelines you can provide. \n\n\nIf possible, you might consider the new Ubuntu 12.04 LTS that's coming out soon. It should have the newer XFS performance. If not, consider injecting a newer kernel to the CentOS 6.2 install. And again, testing is the only way to know for sure.\n\nAnd test with pgbench, if possible. I used this to get our XFS init and mount options, along with other OS/kernel settings. Yes; that does seem important. I found this: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD\nwhich and while I was planning to set 'noatime', I'm a bit stumped on most of the rest. Anyone with comparable hardware willing to share their settings as a starting point for my testing? \nYou can have very different performance metrics from dd/bonnie than an actual use pattern from real DB usage. As a hint, before you run any of these tests, both write a '3' to /proc/sys/vm/drop_caches, and restart your PG instance. 
You want to test your drives, not your memory. :)\n\n\n\nkernel.shmall = 4,294,967,296 (commas added for clarity)\nkernel.shmax = 68,719,476,736 (commas added for clarity)\nkernel.sem = 250 32000 32 128\nvm.swappiness = 0\ndirty_ratio = 10\ndirty_background_ratio = 5\n\n\nGood. Though you might consider lowering dirty_background_ratio. At that setting, it won't even try to write out data until you have about 3GB of dirty pages. Even high-end disk controllers only have 1GB of local capacitor-backed cache. If you really do have a good SAN, it probably has more than that, but try to induce a high-turnover database test to see what happens during heavy IO. Like, a heavy long-running PG-bench should invoke several checkpoints and also flood the local write cache. When that happens, monitor /proc/meminfo. Like this:\n\ngrep -A1 Dirty /proc/meminfo\n\nThat will tell you how much of your memory is dirty, but the 'Writeback' entry is what you care about. If you see that as a non-zero value for more than one consecutive check, you've saturated your write bandwidth to the point performance will suffer. But the only way you can really know any of this is with testing. Some SANs scale incredibly well to large pool flushes, and others don't.\n\nAlso, make iostat your friend. Particularly with the -x option. During your testing, keep one of these running in the background for the devices on your SAN. Watch your %util column in particular. Graph it, if you can. You can almost build a complete performance profile for different workloads before you put a single byte of real data on this hardware.\n\n\n\nIf there are \"obviously correct\" choices in PG configuration, this would\nbe tremendously helpful information to me. I'm planning on using pgbench\nto test the configuration options.\n\n\nYou sound like you've read up on this quite a bit. Greg's book is a very good thing to have and learn from. It'll cover all the basics about the postgresql.conf file. I don't see how I could add much to that, so just pay attention to what he says. :)\nI'm doing my best but the numbers will tell the story. :-)Thanks for your review and feedback, Shaun.Cheers,Jan \n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email",
"msg_date": "Tue, 24 Apr 2012 23:07:58 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
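On the question of checking power-management behavior from the CLI: the BIOS settings themselves cannot be read directly from a running system, but their effect usually shows up in sysfs and /proc. A hedged sketch, assuming the cpufreq and cpuidle drivers are loaded (if the paths are missing, scaling is either disabled or handled entirely in firmware):

    # Active frequency governor per core; 'performance' avoids clock-down latency.
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c

    # Current clock per core versus the rated 2.67 GHz; large gaps mean
    # cores are being throttled down.
    grep "cpu MHz" /proc/cpuinfo | sort | uniq -c

    # C-states the kernel may enter; deeper states cost more to wake from.
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name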
{
"msg_contents": "Oopps; looping in the list...\n\nOn Tue, Apr 24, 2012 at 8:57 PM, Jan Nielsen <[email protected]>wrote:\n\n> On Mon, Apr 23, 2012 at 11:53 PM, Robert Klemme <\n> [email protected]> wrote:\n>\n>> On Tue, Apr 24, 2012 at 4:56 AM, Jan Nielsen\n>> <[email protected]> wrote:\n>> > We are considering the following drive allocations:\n>> >\n>> > * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data\n>> > * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n>> > * 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog\n>> > * 1 x 15k SAS drive, XFS, on local storage for OS\n>>\n>> Is it established practice in the Postgres world to separate indexes\n>> from tables? I would assume that the reasoning of Richard Foote -\n>> albeit for Oracle databases - is also true for Postgres:\n>\n>\n>>\n>> http://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\n>>\n>> http://richardfoote.wordpress.com/2008/04/18/separate-indexes-from-tables-some-thoughts-part-ii-there-there/\n>>\n>> http://richardfoote.wordpress.com/2008/04/28/indexes-in-their-own-tablespace-availabilty-advantages-is-there-anybody-out-there/\n>\n>\n> Very nice articles!\n>\n>\n>> Conversely if you lump both on a single volume you have more\n>> flexibility with regard to usage - unless of course you can\n>> dynamically resize volumes.\n>>\n>\n> Agreed.\n>\n>\n>> To me it also seems like a good idea to mirror local disk with OS and\n>> database software because if that fails you'll get downtime as well.\n>> As of now you have a single point of failure there.\n>>\n>\n> Agreed as well.\n>\n> These are good improvements - thanks for the review and references, Robert.\n>\n>\n> Cheers,\n>\n> Jan\n>\n>\n>\n>>\n>> Kind regards\n>>\n>> robert\n>>\n>> --\n>> remember.guy do |as, often| as.you_can - without end\n>> http://blog.rubybestpractices.com/\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\nOopps; looping in the list...On Tue, Apr 24, 2012 at 8:57 PM, Jan Nielsen <[email protected]> wrote:\nOn Mon, Apr 23, 2012 at 11:53 PM, Robert Klemme <[email protected]> wrote:\nOn Tue, Apr 24, 2012 at 4:56 AM, Jan Nielsen\n<[email protected]> wrote:\n> We are considering the following drive allocations:\n>\n> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG data\n> * 4 x 15k SAS drives, XFS, RAID 10 on SAN for PG indexes\n> * 2 x 15k SAS drives, XFS, RAID 1 on SAN for PG xlog\n> * 1 x 15k SAS drive, XFS, on local storage for OS\n\nIs it established practice in the Postgres world to separate indexes\nfrom tables? I would assume that the reasoning of Richard Foote -\nalbeit for Oracle databases - is also true for Postgres:\n\nhttp://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\nhttp://richardfoote.wordpress.com/2008/04/18/separate-indexes-from-tables-some-thoughts-part-ii-there-there/\nhttp://richardfoote.wordpress.com/2008/04/28/indexes-in-their-own-tablespace-availabilty-advantages-is-there-anybody-out-there/\nVery nice articles! Conversely if you lump both on a single volume you have more\n\n\nflexibility with regard to usage - unless of course you can\ndynamically resize volumes.Agreed. 
To me it also seems like a good idea to mirror local disk with OS and\n\n\ndatabase software because if that fails you'll get downtime as well.\nAs of now you have a single point of failure there.Agreed as well. These are good improvements - thanks for the review and references, Robert.\n\nCheers,Jan \n\nKind regards\n\nrobert\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 24 Apr 2012 23:09:19 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On 24/04/2012 20:32, Shaun Thomas wrote:\n>\n> I'm not sure if you've done metrics or not, but XFS performance is \n> highly dependent on your init and mount options. I can give you some \n> guidelines there, but one of the major changes is that the Linux 3.X \n> kernels have some impressive performance improvements you won't see \n> using CentOS 6.2. Metadata in particular has undergone a massive \n> upgrade that drastically enhances its parallel scalability on metadata \n> modifications.\nHi, I'd be grateful if you could share any XFS performance tweaks as I'm \nnot entirely sure I'm getting the most out of my setup and any \nadditional guidance would be very helpful.\n\nThanks\n\nJohn\n\n-- \nwww.pricegoblin.co.uk\n\n",
"msg_date": "Wed, 25 Apr 2012 08:46:03 +0100",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\n> Is it established practice in the Postgres world to separate indexes\n> from tables? I would assume that the reasoning of Richard Foote -\n> albeit for Oracle databases - is also true for Postgres:\n\nYes, it's an established practice. I'd call it something just short of \na best practice though, as it really depends on your situation. I'd \ntake those articles with a grain of salt, as they are very \nOracle-specific (e.g. we do not have fat indexes (yet!), nor segments). \nI also find his examples a bit contrived, and the whole \"multi-user\" \nargument irrelevant for common cases. I lean towards using separate \ntablespaces in Postgres, as the performance outweighs the additional \ncomplexity. It's down on the tuning list however: much more important \nis getting your kernel/volumes configured correctly, allocating \nshared_buffers sanely, separating pg_xlog, etc.\n\n- -- \nGreg Sabino Mullane [email protected]\nEnd Point Corporation http://www.endpoint.com/\nPGP Key: 0x14964AC8 201204251304\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAk+YL08ACgkQvJuQZxSWSsjR0wCfRF0fXpn7C7i5bZ6btDCT3+uX\nDU4AoIN3oSwPR+10F1N3jupCj5Dthjfh\n=EYGQ\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Wed, 25 Apr 2012 17:08:11 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
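For readers unfamiliar with the mechanics of the separate index tablespace discussed above, a minimal sketch follows. The mount point, database, table, and index names are all hypothetical; note that ALTER INDEX ... SET TABLESPACE rewrites the index and takes a lock, so it belongs in a maintenance window.

    # Hypothetical mount point and object names; adjust to the real layout.
    mkdir -p /pg_index/ts
    chown postgres:postgres /pg_index/ts

    psql -U postgres -d appdb -c "CREATE TABLESPACE idx_space LOCATION '/pg_index/ts'"
    # Moving an existing index rewrites it and blocks writes to the table:
    psql -U postgres -d appdb -c "ALTER INDEX some_table_pkey SET TABLESPACE idx_space"
    # New indexes can be placed there directly:
    psql -U postgres -d appdb -c "CREATE INDEX some_table_col_idx ON some_table (col) TABLESPACE idx_space"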
{
"msg_contents": "On Wed, Apr 25, 2012 at 7:08 PM, Greg Sabino Mullane <[email protected]> wrote:\n\n>> Is it established practice in the Postgres world to separate indexes\n>> from tables? I would assume that the reasoning of Richard Foote -\n>> albeit for Oracle databases - is also true for Postgres:\n>\n> Yes, it's an established practice. I'd call it something just short of\n> a best practice though, as it really depends on your situation.\n\nWhat are the benefits?\n\n> I'd\n> take those articles with a grain of salt, as they are very\n> Oracle-specific (e.g. we do not have fat indexes (yet!), nor segments).\n\nTrue. As far as I understand disk layout segments in Oracle serve the\npurpose to cluster data for a database object. With that feature\nmissing the situation would be worse in Postgres - unless you manually\ndo something similar by using tablespaces for that.\n\n> I also find his examples a bit contrived, and the whole \"multi-user\"\n> argument irrelevant for common cases.\n\nWhy is that?\n\n> I lean towards using separate\n> tablespaces in Postgres, as the performance outweighs the additional\n> complexity.\n\nWhat about his argument with regards to access patterns (i.e.\ninterleaving index and table access during an index scan)? Also,\nShaun's advice to have more spindles available sounds convincing to\nme, too.\n\n> It's down on the tuning list however: much more important\n> is getting your kernel/volumes configured correctly, allocating\n> shared_buffers sanely, separating pg_xlog, etc.\n\nThat does make a lot of sense. Separating pg_xlog would probably the\nfirst thing I'd do especially since the IO pattern is so dramatically\ndifferent from tablespace IO access patterns.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 25 Apr 2012 20:55:09 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
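A minimal sketch of the pg_xlog separation mentioned above, done with a symlink to a dedicated volume. The data directory path, init-script name, and mount point are assumptions for a CentOS/PGDG-style install; the cluster must be stopped while the directory is moved.

    # Paths and service name are examples; stop the cluster before moving anything.
    service postgresql-9.0 stop

    mv /var/lib/pgsql/9.0/data/pg_xlog /pg_xlog/pg_xlog
    ln -s /pg_xlog/pg_xlog /var/lib/pgsql/9.0/data/pg_xlog
    chown -h postgres:postgres /var/lib/pgsql/9.0/data/pg_xlog

    service postgresql-9.0 start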
{
"msg_contents": "On 04/25/2012 02:46 AM, John Lister wrote:\n\n> Hi, I'd be grateful if you could share any XFS performance tweaks as I'm\n> not entirely sure I'm getting the most out of my setup and any\n> additional guidance would be very helpful.\n\nOk, I'll give this with a huge caveat: these settings came from lots of \ntesting, both load and pgbench based. I'll explain as much as I can.\n\nFor initializing the XFS filesystem, you can take advantage of a few \nsettings that are pretty handy.\n\n* -d agcount=256 - Higher amount of allocation groups works better with \nmulti-CPU systems. We used 256, but you'll want to do tests to confirm \nthis. The point is that you can have several threads writing to the \nfilesystem simultaneously.\n\n* -l lazy-count=1 - Log data is written more efficiently. Gives a \nmeasurable performance boost. Newer versions set this, but CentOS 5 has \nthe default to 0. I'm not sure about CentOS 6. Just enable it. :)\n\n* -l version=2 - Forces the most recent version of the logging \nalgorithm; allows a larger log buffer on mount. Since you're using \nCentOS, the default value is still probably 1, which you don't want.\n\nAnd then there are the mount options. These actually seemed to make more \nof an impact in our testing:\n\n* allocsize=256m - Database files are up to 1GB in size. To prevent \nfragmentation, always pre-allocate in 256MB chunks. In recent 3.0+ \nkernels, this setting will result in phantom storage allocation as each \nfile is initially allocated with 256MB until all references have exited \nmemory. Due to aggressive Linux inode cache behavior, this may not \nhappen for several hours. On 3.0 kernels, this setting should be \nremoved. I think the 2.6.35 kernel had this backported, so *TEST THIS \nSETTING BEFORE USING IT!*\n\n* logbufs=8 - Forces more of the log buffer to remain in RAM, improving \nfile deletion performance. Good for temporary files. XFS often gets \nknocked for file deletion performance, and this brings it way up. Not \nreally an issue with PG usage, but handy anyway. See logbsize.\n\n* logbsize=256k - Larger log buffers keep track of modified files in \nmemory for better performance. See logbufs.\n\n* noatime - Negates touching the disk for file accesses. Reduces disk IO.\n\n* attr2 - Opportunistic improvement in the way inline extended \nattributes are stored on-disk. Not strictly necessary, but handy.\n\n\nI'm hoping someone else will pipe in, because these settings are pretty \n\"old\" and based on a CentOS 5.5 setup. I haven't done any metrics on the \nnewer kernels, but I have followed enough to know allocsize is dangerous \non new systems.\n\nYour mileage may vary. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 25 Apr 2012 16:29:16 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
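Put together, the init and mount options listed in the message above might look like the sketch below. The device name and mount point are assumptions, and, as noted, allocsize should be dropped (and everything re-tested) on newer kernels.

    # Assumed data LUN and mount point; -f overwrites any existing filesystem.
    mkfs.xfs -f -d agcount=256 -l version=2,lazy-count=1 /dev/sdb1

    mkdir -p /db/pgdata
    mount -o allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime /dev/sdb1 /db/pgdata

    # Confirm what actually took effect at mkfs/mount time:
    xfs_info /db/pgdata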
{
"msg_contents": "On 04/23/2012 10:56 PM, Jan Nielsen wrote:\n> We are planning to rebuild our production 50GB PG 9.0 database serving\n> our application platform on the new hardware below. The web-applications\n> are 80/20 read/write and the data gateways are even mix 50/50\n> read/write; one of the gateways nightly exports & imports ~20% of our\n> data.\n\nWith enough RAM to hold the database, but that much churn in the nightly \nprocessing, you're most likely to run into VACUUM issues here. The \ntrigger point for autovacuum to kick off is at just around 20%, so you \nmight see problems come and go based on the size of the changed set. \nYou might consider making your own benchmark test out of a change like \nthe gateway introduces. Consider doing your own manual VACUUM or maybe \neven VACUUM FREEZE cleanup in sync with the nightly processing if you \nwant that to be predictable.\n\n> If there are \"obviously correct\" choices in PG configuration, this would\n> be tremendously helpful information to me. I'm planning on using pgbench\n> to test the configuration options.\n\nThe info at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server is as \nuseful a checklist for getting started as any. Note that pgbench is a \nvery insensitive tool for testing configuration changes usefully. \nResults there will bounce around if you change shared_buffers and \ncheckpoint_segments, but not much else. And even the changes that test \npositive with it don't necessarily translate into better real-world \nperformance. For example, you might set shared_buffers to 8GB based on \npgbench TPS numbers going up as it increases, only to find that allows \nway too much memory to get dirty between a checkpoint in \nproduction--resulting in slow periods on the server.\n\nAnd many of the more interesting and tricky parameters to try and tweak \nin production, such as work_mem, don't even matter to what pgbench does. \n It's easy to get lost trying pgbench tests without making clear \nforward progress for weeks. Once you've validated the hardware seems to \nbe delivering reasonable performance, consider running your own more \napplication-like benchmarks instead.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Wed, 25 Apr 2012 20:11:23 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
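A hedged example of tying cleanup to the nightly gateway job, as suggested above. The database and table names are hypothetical; vacuumdb is the stock client program that ships with PostgreSQL.

    # Run right after the nightly import/export finishes (from cron or the job itself).
    vacuumdb --analyze --dbname=appdb --table=gateway_staging

    # Or sweep the whole database in one predictable window:
    vacuumdb --analyze appdb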
{
"msg_contents": "Below is the hardware, firmware, OS, and PG configuration pieces that I'm\nsettling in on. As was noted, the local storage used for OS is actually two\ndisks with RAID 10. If anything appears like a mistake or something is\nmissing, I'd appreciate the feedback.\n\nI'm still working on the benchmarks scripts and I don't have good/reliable\nnumbers yet since our SAN is still very busy reconfiguring from the 2x4 to\n1x8. I'm hoping to get them running tomorrow when the SAN should complete\nits 60 hours of reconfiguration.\n\nThanks, again, for all the great feedback.\n\n\nCheers,\n\nJan\n\n\n*System* HP ProLiant BL460c G7\n*BIOS* HP I27 05/05/2011\n*CPU Sockets* 2\n*Chips* Intel(R) Xeon(R) CPU X5650 @ 2.67GHz\n Intel(R) Xeon(R) CPU X5650 @ 2.67GHz\n*CPU Cores* 24\n*Kernel Name* Linux\n*Kernel Version* 2.6.32-220.el6.x86_64\n*Machine Platform* x86_64\n*Processor Type* x86_64\n*Operating System* GNU/Linux\n*Distribution* CentOS release 6.2 (Final)\n*Write barriers* libata version 3.00 loaded.\n*MemTotal* 49410668kB\n*PAGE_SIZE* 4096\n*_PHYS_PAGES* 12352667\n*kernel.shmall* 6176333\n*kernel.shmmax* 25298259968\n*kernel.sem* 250 32000 32 128\n*vm.swappiness* 0\n*vm.overcommit_memory* 2\n*dirty_ratio* 5\n*dirty_background_ratio* 2\n\n300GB RAID10 2x15k drive for OS on local storage\n*/dev/sda1 RA* 4096\n*/dev/sda1 FS* ext4\n*/dev/sda1 MO*\n\n600GB RAID 10 8x15k drive for $PGDATA on SAN\n*IO Scheduler sda* noop anticipatory deadline [cfq]\n*/dev/sdb1 RA* 4096\n*/dev/sdb1 FS* xfs\n*/dev/sdb1 MO*\nallocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n\n\n300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n*IO Scheduler sdb* noop anticipatory deadline [cfq]\n*/dev/sde1 RA* 4096\n*/dev/sde1 FS* xfs\n*/dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n*IO Scheduler sde* noop anticipatory deadline [cfq]\n\n\nPG Configuration\n\n*PG shared_buffers* 16GB\n*PG log_line_prefix* '%t:%u@%r=>%d:[%p]: '\n*PG log_statement* ddl\n*PG log_min_duration_statement* 1s\n*PG listen_addresses* *\n*PG checkpoint_segments* 32\n*PG checkpoint_completion_target* 0.9\n*PG max_connections* 100\n*PG max_fsm_relations*\n*PG max_fsm_pages*\n*PG wal_buffers* 16MB\n*PG wal_sync_method* open_sync\n*PG effective_cache_size* 32GB\n*PG random_page_cost* 4\n*PG constraint_exclusion* partition\n*PG work_mem* 64MB\n*PG maintenance_work_mem* 2GB\n\n\n\n\nOn Wed, Apr 25, 2012 at 3:29 PM, Shaun Thomas <[email protected]> wrote:\n>\n> On 04/25/2012 02:46 AM, John Lister wrote:\n>\n>> Hi, I'd be grateful if you could share any XFS performance tweaks as I'm\n>> not entirely sure I'm getting the most out of my setup and any\n>> additional guidance would be very helpful.\n>\n>\n> Ok, I'll give this with a huge caveat: these settings came from lots of\ntesting, both load and pgbench based. I'll explain as much as I can.\n>\n> For initializing the XFS filesystem, you can take advantage of a few\nsettings that are pretty handy.\n>\n> * -d agcount=256 - Higher amount of allocation groups works better with\nmulti-CPU systems. We used 256, but you'll want to do tests to confirm\nthis. The point is that you can have several threads writing to the\nfilesystem simultaneously.\n>\n> * -l lazy-count=1 - Log data is written more efficiently. Gives a\nmeasurable performance boost. Newer versions set this, but CentOS 5 has the\ndefault to 0. I'm not sure about CentOS 6. Just enable it. :)\n>\n> * -l version=2 - Forces the most recent version of the logging algorithm;\nallows a larger log buffer on mount. 
Since you're using CentOS, the default\nvalue is still probably 1, which you don't want.\n>\n> And then there are the mount options. These actually seemed to make more\nof an impact in our testing:\n>\n> * allocsize=256m - Database files are up to 1GB in size. To prevent\nfragmentation, always pre-allocate in 256MB chunks. In recent 3.0+ kernels,\nthis setting will result in phantom storage allocation as each file is\ninitially allocated with 256MB until all references have exited memory. Due\nto aggressive Linux inode cache behavior, this may not happen for several\nhours. On 3.0 kernels, this setting should be removed. I think the 2.6.35\nkernel had this backported, so *TEST THIS SETTING BEFORE USING IT!*\n>\n> * logbufs=8 - Forces more of the log buffer to remain in RAM, improving\nfile deletion performance. Good for temporary files. XFS often gets knocked\nfor file deletion performance, and this brings it way up. Not really an\nissue with PG usage, but handy anyway. See logbsize.\n>\n> * logbsize=256k - Larger log buffers keep track of modified files in\nmemory for better performance. See logbufs.\n>\n> * noatime - Negates touching the disk for file accesses. Reduces disk IO.\n>\n> * attr2 - Opportunistic improvement in the way inline extended attributes\nare stored on-disk. Not strictly necessary, but handy.\n>\n>\n> I'm hoping someone else will pipe in, because these settings are pretty\n\"old\" and based on a CentOS 5.5 setup. I haven't done any metrics on the\nnewer kernels, but I have followed enough to know allocsize is dangerous on\nnew systems.\n>\n> Your mileage may vary. :)\n>\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer/ for terms and conditions\nrelated to this email\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nBelow is the hardware, firmware, OS, and PG configuration pieces that I'm settling in on. As was noted, the local storage used for OS is actually two disks with RAID 10. If anything appears like a mistake or something is missing, I'd appreciate the feedback. \nI'm still working on the benchmarks scripts and I don't have good/reliable numbers yet since our SAN is still very busy reconfiguring from the 2x4 to 1x8. 
I'm hoping to get them running tomorrow when the SAN should complete its 60 hours of reconfiguration.\nThanks, again, for all the great feedback.Cheers,Jan*System* HP ProLiant BL460c G7*BIOS* HP I27 05/05/2011\n*CPU Sockets* 2*Chips* Intel(R) Xeon(R) CPU X5650 @ 2.67GHz\n Intel(R) Xeon(R) CPU X5650 @ 2.67GHz*CPU Cores* 24\n*Kernel Name* Linux*Kernel Version* 2.6.32-220.el6.x86_64\n*Machine Platform* x86_64*Processor Type* x86_64\n*Operating System* GNU/Linux*Distribution* CentOS release 6.2 (Final)\n*Write barriers* libata version 3.00 loaded.*MemTotal* 49410668kB\n*PAGE_SIZE* 4096*_PHYS_PAGES* 12352667\n*kernel.shmall* 6176333*kernel.shmmax* 25298259968\n*kernel.sem* 250 32000 32 128*vm.swappiness* 0\n*vm.overcommit_memory* 2*dirty_ratio* 5\n*dirty_background_ratio* 2300GB RAID10 2x15k drive for OS on local storage\n*/dev/sda1 RA* 4096*/dev/sda1 FS* ext4 \n*/dev/sda1 MO*600GB RAID 10 8x15k drive for $PGDATA on SAN*IO Scheduler sda* noop anticipatory deadline [cfq]\n*/dev/sdb1 RA* 4096*/dev/sdb1 FS* xfs\n*/dev/sdb1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime \n300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN*IO Scheduler sdb* noop anticipatory deadline [cfq]\n*/dev/sde1 RA* 4096*/dev/sde1 FS* xfs\n*/dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime*IO Scheduler sde* noop anticipatory deadline [cfq]\nPG Configuration*PG shared_buffers* 16GB*PG log_line_prefix* '%t:%u@%r=>%d:[%p]: '\n*PG log_statement* ddl*PG log_min_duration_statement* 1s\n*PG listen_addresses* **PG checkpoint_segments* 32\n*PG checkpoint_completion_target* 0.9*PG max_connections* 100\n*PG max_fsm_relations* *PG max_fsm_pages* \n*PG wal_buffers* 16MB*PG wal_sync_method* open_sync\n*PG effective_cache_size* 32GB*PG random_page_cost* 4\n*PG constraint_exclusion* partition*PG work_mem* 64MB\n*PG maintenance_work_mem* 2GBOn Wed, Apr 25, 2012 at 3:29 PM, Shaun Thomas <[email protected]> wrote:\n>> On 04/25/2012 02:46 AM, John Lister wrote:>>> Hi, I'd be grateful if you could share any XFS performance tweaks as I'm>> not entirely sure I'm getting the most out of my setup and any\n>> additional guidance would be very helpful.>>> Ok, I'll give this with a huge caveat: these settings came from lots of testing, both load and pgbench based. I'll explain as much as I can.\n>> For initializing the XFS filesystem, you can take advantage of a few settings that are pretty handy.>> * -d agcount=256 - Higher amount of allocation groups works better with multi-CPU systems. We used 256, but you'll want to do tests to confirm this. The point is that you can have several threads writing to the filesystem simultaneously.\n>> * -l lazy-count=1 - Log data is written more efficiently. Gives a measurable performance boost. Newer versions set this, but CentOS 5 has the default to 0. I'm not sure about CentOS 6. Just enable it. :)\n>> * -l version=2 - Forces the most recent version of the logging algorithm; allows a larger log buffer on mount. Since you're using CentOS, the default value is still probably 1, which you don't want.\n>> And then there are the mount options. These actually seemed to make more of an impact in our testing:>> * allocsize=256m - Database files are up to 1GB in size. To prevent fragmentation, always pre-allocate in 256MB chunks. In recent 3.0+ kernels, this setting will result in phantom storage allocation as each file is initially allocated with 256MB until all references have exited memory. Due to aggressive Linux inode cache behavior, this may not happen for several hours. 
On 3.0 kernels, this setting should be removed. I think the 2.6.35 kernel had this backported, so *TEST THIS SETTING BEFORE USING IT!*\n>> * logbufs=8 - Forces more of the log buffer to remain in RAM, improving file deletion performance. Good for temporary files. XFS often gets knocked for file deletion performance, and this brings it way up. Not really an issue with PG usage, but handy anyway. See logbsize.\n>> * logbsize=256k - Larger log buffers keep track of modified files in memory for better performance. See logbufs.>> * noatime - Negates touching the disk for file accesses. Reduces disk IO.>\n> * attr2 - Opportunistic improvement in the way inline extended attributes are stored on-disk. Not strictly necessary, but handy.>>> I'm hoping someone else will pipe in, because these settings are pretty \"old\" and based on a CentOS 5.5 setup. I haven't done any metrics on the newer kernels, but I have followed enough to know allocsize is dangerous on new systems.\n>> Your mileage may vary. :)>>> --> Shaun Thomas> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604> 312-444-8534> [email protected]\n>> ______________________________________________>> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n>> --> Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 2 May 2012 20:10:03 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
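To apply the kernel values from the listing above and verify the readahead figures, something like the following could be used. If the RA numbers came from blockdev --getra, they are in 512-byte sectors and need to be reapplied at boot (for example from /etc/rc.local); the same sysctl lines belong in /etc/sysctl.conf so they survive a reboot.

    # Apply the kernel settings from the listing at runtime:
    sysctl -w vm.swappiness=0 vm.overcommit_memory=2
    sysctl -w vm.dirty_ratio=5 vm.dirty_background_ratio=2
    sysctl -w kernel.shmmax=25298259968 kernel.shmall=6176333
    sysctl -w kernel.sem="250 32000 32 128"

    # Readahead per block device (512-byte sectors), assumed device names:
    blockdev --setra 4096 /dev/sdb1
    blockdev --setra 4096 /dev/sde1
    blockdev --getra /dev/sdb1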
{
"msg_contents": "On 03/05/2012 03:10, Jan Nielsen wrote:\n>\n> 300GB RAID10 2x15k drive for OS on local storage\n> */dev/sda1 RA* 4096\n> */dev/sda1 FS* ext4\n> */dev/sda1 MO*\n>\n> 600GB RAID 10 8x15k drive for $PGDATA on SAN\n> *IO Scheduler sda* noop anticipatory deadline [cfq]\n> */dev/sdb1 RA* 4096\n> */dev/sdb1 FS* xfs\n> */dev/sdb1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>\n> 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n> *IO Scheduler sdb* noop anticipatory deadline [cfq]\n> */dev/sde1 RA* 4096\n> */dev/sde1 FS* xfs\n> */dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n> *\n\nI was wondering if it would be better to put the xlog on the same disk \nas the OS? Apart from the occasional log writes I'd have thought most OS \ndata is loaded into cache at the beginning, so you effectively have an \nunused disk. This gives you another spindle (mirrored) for your data.\n\nOr have I missed something fundamental?\n\n-- \nwww.pricegoblin.co.uk\n\n\n\n\n\n\n\n On 03/05/2012 03:10, Jan Nielsen wrote:\n \n300GB\n RAID10 2x15k drive for OS on local storage\n*/dev/sda1\n RA* \n 4096\n*/dev/sda1\n FS* ext4 \n */dev/sda1 MO*\n\n600GB RAID\n 10 8x15k drive for $PGDATA on SAN\n*IO Scheduler\n sda* noop anticipatory deadline [cfq]\n*/dev/sdb1\n RA* 4096\n*/dev/sdb1\n FS* xfs\n */dev/sdb1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime \n\n 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n*IO\n Scheduler sdb* noop anticipatory deadline [cfq]\n*/dev/sde1\n RA* 4096\n*/dev/sde1 FS* \n xfs\n*/dev/sde1\n MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n*\n\n\n I was wondering if it would be better to put the xlog on the same\n disk as the OS? Apart from the occasional log writes I'd have\n thought most OS data is loaded into cache at the beginning, so you\n effectively have an unused disk. This gives you another spindle\n (mirrored) for your data.\n\n Or have I missed something fundamental?\n\n-- \nwww.pricegoblin.co.uk",
"msg_date": "Thu, 03 May 2012 07:54:10 +0100",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "Hi Jan,\n\nOn Thu, May 3, 2012 at 4:10 AM, Jan Nielsen <[email protected]> wrote:\n> Below is the hardware, firmware, OS, and PG configuration pieces that I'm\n> settling in on. As was noted, the local storage used for OS is actually two\n> disks with RAID 10. If anything appears like a mistake or something is\n> missing, I'd appreciate the feedback.\n\nYou should quickly patent this solution. As far as I know you need at\nleast four disks for RAID 10. :-)\nhttp://en.wikipedia.org/wiki/RAID#Nested_.28hybrid.29_RAID\n\nOr did you mean RAID 1?\n\n> I'm still working on the benchmarks scripts and I don't have good/reliable\n> numbers yet since our SAN is still very busy reconfiguring from the 2x4 to\n> 1x8. I'm hoping to get them running tomorrow when the SAN should complete\n> its 60 hours of reconfiguration.\n\nYeah, does not seem to make a lot of sense to test during this phase.\n\n> Thanks, again, for all the great feedback.\n\nYou're welcome!\n\n> 300GB RAID10 2x15k drive for OS on local storage\n> */dev/sda1 RA* 4096\n> */dev/sda1 FS* ext4\n> */dev/sda1 MO*\n\nSee above.\n\n> 600GB RAID 10 8x15k drive for $PGDATA on SAN\n> *IO Scheduler sda* noop anticipatory deadline [cfq]\n> */dev/sdb1 RA* 4096\n> */dev/sdb1 FS* xfs\n> */dev/sdb1 MO*\n> allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>\n> 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n> *IO Scheduler sdb* noop anticipatory deadline [cfq]\n> */dev/sde1 RA* 4096\n> */dev/sde1 FS* xfs\n> */dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n> *IO Scheduler sde* noop anticipatory deadline [cfq]\n\nSee above.\n\nWith regard to the scheduler, I have frequently read that [deadline]\nand [noop] perform better for PG loads. Fortunately this can be\neasily changed.\n\nMaybe this also has some additional input:\nhttp://www.fccps.cz/download/adv/frr/hdd/hdd.html\n\nOn Thu, May 3, 2012 at 8:54 AM, John Lister <[email protected]> wrote:\n> I was wondering if it would be better to put the xlog on the same disk as\n> the OS? Apart from the occasional log writes I'd have thought most OS data\n> is loaded into cache at the beginning, so you effectively have an unused\n> disk. This gives you another spindle (mirrored) for your data.\n>\n> Or have I missed something fundamental?\n\nSeparating avoids interference between OS and WAL logging (i.e. a\nscript running berserk and filling OS filesystem). Also it's easier\nto manage (e.g. in case of relocation to another volume etc.). And\nyou can have different mount options (i.e. might want to have atime\nfor OS volume).\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 3 May 2012 09:28:14 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
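Checking and changing the elevator mentioned above is a one-liner per device; a sketch, assuming sdb and sde are the SAN-backed volumes:

    # The bracketed entry is the active scheduler:
    cat /sys/block/sdb/queue/scheduler /sys/block/sde/queue/scheduler

    # Switch at runtime (not persistent across reboots):
    echo deadline > /sys/block/sdb/queue/scheduler
    echo deadline > /sys/block/sde/queue/scheduler

    # To persist on CentOS 6, elevator=deadline can be added to the kernel
    # line in /boot/grub/grub.conf, which then applies to every device.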
{
"msg_contents": "On 05/03/2012 02:28 AM, Robert Klemme wrote:\n\n> Maybe this also has some additional input:\n> http://www.fccps.cz/download/adv/frr/hdd/hdd.html\n\nBe careful with that link. His recommendations for dirty_ratio and \ndirty_background_ratio would be *very bad* in a database setting. Note \nthis from the actual article:\n\n\"I am aware that my tuning values are probably quite insane in some \nrespects, may cause occasional longer periods of high read latency, may \ncause other problems. Still I guess the exercise was worth it - the \ntests did show some interesting results.\"\n\nThat's putting it lightly. With some of those settings in a very large \nmemory server, you could see *minutes* of synchronous IO waits if \ndirty_ratio gets saturated. I like to follow this:\n\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nAs a note, there are actually new tunables for some of this: \ndirty_bytes, and dirty_background_bytes. With them, you can match them \nbetter to the actual size of your controller write cache so you can \navoid page flush storms causing IO stalls. It's unfortunate, but \ndatabase servers are not the target platform for most of the kernel \ndevs, and really have a much different profile from everyday systems. We \nneed to address latency more than throughput, though both are important.\n\nI think Greg mentioned something that setting these too low can cause \nVACUUM to lag, but I'm willing to take that tradeoff. We've had IO \nstalls in the past when our background ratio was too high, and it wasn't \npretty. Ironically, we never had a problem until we tripled our system \nmemory, and suddenly our drive controllers were frequently getting \nchoked to death.\n\nMr. Nielsen's setup actually looks pretty darn good. It's my personal \nopinion he might run into some IO waits if he plans to use this for \nheavy OLTP, thanks to having only 8 spindles in his RAID1+0, but he may \neventually grow into a SAN. That's fine. It's a good starting point.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 3 May 2012 08:05:33 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
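A sketch of the byte-based tunables mentioned above, sized here against a hypothetical 512 MB controller/SAN write cache. Setting the *_bytes values to anything non-zero makes the kernel ignore the corresponding *_ratio settings.

    # Hypothetical sizing against a 512 MB write cache:
    sysctl -w vm.dirty_background_bytes=268435456    # 256 MB
    sysctl -w vm.dirty_bytes=536870912               # 512 MB

    # The corresponding *_ratio values read back as 0 once *_bytes are set:
    sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_background_bytes vm.dirty_bytes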
{
"msg_contents": "Hi Robert,\n\nOn Thu, May 3, 2012 at 1:28 AM, Robert Klemme <[email protected]>wrote:\n\n> Hi Jan,\n>\n> On Thu, May 3, 2012 at 4:10 AM, Jan Nielsen <[email protected]>\n> wrote:\n> > Below is the hardware, firmware, OS, and PG configuration pieces that I'm\n> > settling in on. As was noted, the local storage used for OS is actually\n> two\n> > disks with RAID 10. If anything appears like a mistake or something is\n> > missing, I'd appreciate the feedback.\n>\n> You should quickly patent this solution. As far as I know you need at\n> least four disks for RAID 10. :-)\n> http://en.wikipedia.org/wiki/RAID#Nested_.28hybrid.29_RAID\n>\n> Or did you mean RAID 1?\n>\n\nUgh - yeah - sorry. RAID-1 for the 2-disk OS and WAL.\n\n\n> > I'm still working on the benchmarks scripts and I don't have\n> good/reliable\n> > numbers yet since our SAN is still very busy reconfiguring from the 2x4\n> to\n> > 1x8. I'm hoping to get them running tomorrow when the SAN should complete\n> > its 60 hours of reconfiguration.\n>\n> Yeah, does not seem to make a lot of sense to test during this phase.\n>\n> > Thanks, again, for all the great feedback.\n>\n> You're welcome!\n>\n> > 300GB RAID10 2x15k drive for OS on local storage\n>\n\nCorrection: RAID-1 on the 2x15k local storage device for OS\n\n\n> > */dev/sda1 RA* 4096\n> > */dev/sda1 FS* ext4\n> > */dev/sda1 MO*\n>\n> See above.\n>\n> > 600GB RAID 10 8x15k drive for $PGDATA on SAN\n>\n\nClarification: RAID-10 on the 8x15k SAN device for $PGDATA\n\n\n> > *IO Scheduler sda* noop anticipatory deadline [cfq]\n> > */dev/sdb1 RA* 4096\n> > */dev/sdb1 FS* xfs\n> > */dev/sdb1 MO*\n> > allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n> >\n> > 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n>\n\nCorrection: RAID-1 on the 2x15k SAN device for $PGDATA/pg_log\n\n\n> > *IO Scheduler sdb* noop anticipatory deadline [cfq]\n> > */dev/sde1 RA* 4096\n> > */dev/sde1 FS* xfs\n> > */dev/sde1 MO*\n> allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n> > *IO Scheduler sde* noop anticipatory deadline [cfq]\n>\n> See above.\n>\n> With regard to the scheduler, I have frequently read that [deadline]\n> and [noop] perform better for PG loads. Fortunately this can be\n> easily changed.\n>\n> Maybe this also has some additional input:\n> http://www.fccps.cz/download/adv/frr/hdd/hdd.html\n>\n\nThanks for the reference, Robert.\n\n\n> On Thu, May 3, 2012 at 8:54 AM, John Lister <[email protected]>\n> wrote:\n> > I was wondering if it would be better to put the xlog on the same disk as\n> > the OS? Apart from the occasional log writes I'd have thought most OS\n> data\n> > is loaded into cache at the beginning, so you effectively have an unused\n> > disk. This gives you another spindle (mirrored) for your data.\n> >\n> > Or have I missed something fundamental?\n>\n> Separating avoids interference between OS and WAL logging (i.e. a\n> script running berserk and filling OS filesystem). Also it's easier\n> to manage (e.g. in case of relocation to another volume etc.). And\n> you can have different mount options (i.e. might want to have atime\n> for OS volume).\n>\n> Kind regards\n>\n> robert\n>\n>\n> --\n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n>\n\nHi Robert,On Thu, May 3, 2012 at 1:28 AM, Robert Klemme <[email protected]> wrote:\nHi Jan,\n\nOn Thu, May 3, 2012 at 4:10 AM, Jan Nielsen <[email protected]> wrote:\n> Below is the hardware, firmware, OS, and PG configuration pieces that I'm\n> settling in on. 
As was noted, the local storage used for OS is actually two\n> disks with RAID 10. If anything appears like a mistake or something is\n> missing, I'd appreciate the feedback.\n\nYou should quickly patent this solution. As far as I know you need at\nleast four disks for RAID 10. :-)\nhttp://en.wikipedia.org/wiki/RAID#Nested_.28hybrid.29_RAID\n\nOr did you mean RAID 1?Ugh - yeah - sorry. RAID-1 for the 2-disk OS and WAL. \n\n> I'm still working on the benchmarks scripts and I don't have good/reliable\n> numbers yet since our SAN is still very busy reconfiguring from the 2x4 to\n> 1x8. I'm hoping to get them running tomorrow when the SAN should complete\n> its 60 hours of reconfiguration.\n\nYeah, does not seem to make a lot of sense to test during this phase.\n\n> Thanks, again, for all the great feedback.\n\nYou're welcome!\n\n> 300GB RAID10 2x15k drive for OS on local storageCorrection: RAID-1 on the 2x15k local storage device for OS \n\n> */dev/sda1 RA* 4096\n> */dev/sda1 FS* ext4\n> */dev/sda1 MO*\n\nSee above.\n\n> 600GB RAID 10 8x15k drive for $PGDATA on SANClarification: RAID-10 on the 8x15k SAN device for $PGDATA \n\n> *IO Scheduler sda* noop anticipatory deadline [cfq]\n> */dev/sdb1 RA* 4096\n> */dev/sdb1 FS* xfs\n> */dev/sdb1 MO*\n> allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>\n> 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SANCorrection: RAID-1 on the 2x15k SAN device for $PGDATA/pg_log \n\n> *IO Scheduler sdb* noop anticipatory deadline [cfq]\n> */dev/sde1 RA* 4096\n> */dev/sde1 FS* xfs\n> */dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n> *IO Scheduler sde* noop anticipatory deadline [cfq]\n\nSee above.\n\nWith regard to the scheduler, I have frequently read that [deadline]\nand [noop] perform better for PG loads. Fortunately this can be\neasily changed.\n\nMaybe this also has some additional input:\nhttp://www.fccps.cz/download/adv/frr/hdd/hdd.htmlThanks for the reference, Robert. \n\nOn Thu, May 3, 2012 at 8:54 AM, John Lister <[email protected]> wrote:\n> I was wondering if it would be better to put the xlog on the same disk as\n> the OS? Apart from the occasional log writes I'd have thought most OS data\n> is loaded into cache at the beginning, so you effectively have an unused\n> disk. This gives you another spindle (mirrored) for your data.\n>\n> Or have I missed something fundamental?\n\nSeparating avoids interference between OS and WAL logging (i.e. a\nscript running berserk and filling OS filesystem). Also it's easier\nto manage (e.g. in case of relocation to another volume etc.). And\nyou can have different mount options (i.e. might want to have atime\nfor OS volume).\n\nKind regards\n\nrobert\n\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/",
"msg_date": "Thu, 3 May 2012 07:14:25 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On Thu, May 3, 2012 at 7:05 AM, Shaun Thomas <[email protected]>wrote:\n\n> I like to follow this:\n>\n> http://www.westnet.com/~**gsmith/content/linux-pdflush.**htm<http://www.westnet.com/%7Egsmith/content/linux-pdflush.htm>\n>\n\nThanks for the reference, Shaun.\n\n\n> As a note, there are actually new tunables for some of this: dirty_bytes,\n> and dirty_background_bytes. With them, you can match them better to the\n> actual size of your controller write cache so you can avoid page flush\n> storms causing IO stalls.\n\n\nThat sounds interesting. How do you identify a page flush storm?\n\n\n> Mr. Nielsen's setup actually looks pretty darn good. It's my personal\n> opinion he might run into some IO waits if he plans to use this for heavy\n> OLTP, thanks to having only 8 spindles in his RAID1+0, but he may\n> eventually grow into a SAN. That's fine. It's a good starting point.\n\n\nCool - thanks, again, for the review, Shaun.\n\n\nCheers,\n\nJan\n\n\n\n>\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n>\n>\n> ______________________________**________________\n>\n> See http://www.peak6.com/email_**disclaimer/<http://www.peak6.com/email_disclaimer/>for terms and conditions related to this email\n>\n\nOn Thu, May 3, 2012 at 7:05 AM, Shaun Thomas <[email protected]> wrote: \n I like to follow this:\n\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htmThanks for the reference, Shaun.\n \n\nAs a note, there are actually new tunables for some of this: dirty_bytes, and dirty_background_bytes. With them, you can match them better to the actual size of your controller write cache so you can avoid page flush storms causing IO stalls. \nThat sounds interesting. How do you identify a page flush storm? Mr. Nielsen's setup actually looks pretty darn good. It's my personal opinion he might run into some IO waits if he plans to use this for heavy OLTP, thanks to having only 8 spindles in his RAID1+0, but he may eventually grow into a SAN. That's fine. It's a good starting point.\nCool - thanks, again, for the review, Shaun.Cheers,Jan \n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email",
"msg_date": "Thu, 3 May 2012 07:30:56 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "Hi John,\n\nOn Thu, May 3, 2012 at 12:54 AM, John Lister <[email protected]>wrote:\n\n> On 03/05/2012 03:10, Jan Nielsen wrote:\n>\n>\n> 300GB RAID10 2x15k drive for OS on local storage\n> */dev/sda1 RA* 4096\n> */dev/sda1 FS* ext4\n> */dev/sda1 MO*\n>\n> 600GB RAID 10 8x15k drive for $PGDATA on SAN\n> *IO Scheduler sda* noop anticipatory deadline [cfq]\n> */dev/sdb1 RA* 4096\n> */dev/sdb1 FS* xfs\n> */dev/sdb1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>\n>\n> 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n> *IO Scheduler sdb* noop anticipatory deadline [cfq]\n> */dev/sde1 RA* 4096\n> */dev/sde1 FS* xfs\n> */dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n> *\n>\n>\n> I was wondering if it would be better to put the xlog on the same disk as\n> the OS? Apart from the occasional log writes I'd have thought most OS data\n> is loaded into cache at the beginning, so you effectively have an unused\n> disk. This gives you another spindle (mirrored) for your data.\n>\n> Or have I missed something fundamental?\n>\n\nI followed Gregory Smith's arguments from PostgreSQL 9.0 High Performance,\nwherein he notes that WAL is sequential with constant cache flushes whereas\nOS is a mix of sequential and random with rare cache flushes. This (might)\nlead one to conclude that separating these would be good for at least the\nWAL and likely both. Regardless, separating these very different\nuse-patterns seems like a \"Good Thing\" if tuning is ever needed for either.\n\n\nCheers,\n\nJan\n\n\n\n>\n> -- www.pricegoblin.co.uk\n>\n>\n\nHi John,On Thu, May 3, 2012 at 12:54 AM, John Lister <[email protected]> wrote:\n\n On 03/05/2012 03:10, Jan Nielsen wrote:\n \n300GB\n RAID10 2x15k drive for OS on local storage\n*/dev/sda1\n RA* \n 4096\n*/dev/sda1\n FS* ext4 \n */dev/sda1 MO*\n\n600GB RAID\n 10 8x15k drive for $PGDATA on SAN\n*IO Scheduler\n sda* noop anticipatory deadline [cfq]\n*/dev/sdb1\n RA* 4096\n*/dev/sdb1\n FS* xfs\n */dev/sdb1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime \n\n 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n*IO\n Scheduler sdb* noop anticipatory deadline [cfq]\n*/dev/sde1\n RA* 4096\n*/dev/sde1 FS* \n xfs\n*/dev/sde1\n MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n*\n\n\n I was wondering if it would be better to put the xlog on the same\n disk as the OS? Apart from the occasional log writes I'd have\n thought most OS data is loaded into cache at the beginning, so you\n effectively have an unused disk. This gives you another spindle\n (mirrored) for your data.\n\n Or have I missed something fundamental?I followed Gregory Smith's arguments from PostgreSQL 9.0 High Performance, wherein he notes that WAL is sequential with constant cache flushes whereas OS is a mix of sequential and random with rare cache flushes. This (might) lead one to conclude that separating these would be good for at least the WAL and likely both. Regardless, separating these very different use-patterns seems like a \"Good Thing\" if tuning is ever needed for either.\nCheers,Jan \n\n-- \nwww.pricegoblin.co.uk",
"msg_date": "Thu, 3 May 2012 07:42:14 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "> That sounds interesting. How do you identify a page flush storm?\n\nMaybe I used the wrong terminology. What effectively happens if you reach the amount of memory specified in dirty_ratio, is that the system goes from asynchronous disk access, to synchronous disk access, and starts flushing that memory to disk. Until that operation completes, all other actions requiring disk access are paused.\n\nYou really, really don't want that to happen during a busy day on an OLTP system unless you have an absolutely gargantuan cash. We first noticed it after we upgraded from 32GB to 96GB. We have enough connections and other effects, that the inode cache pool was only about 16GB. Take 40% of that (default CentOS 5.x) and you get 6GB. Not great, but enough you might be able to get by without actually noticing the pauses. After tripling our memory, the database still used 16GB, but suddenly our inode cache jumped from 16GB to 80GB. 40% of that is 32GB, and there's no way our 512MB controller cache could try to swallow that without us noticing.\n\nThings got much better when we set dirty_background_ratio to 1, and dirty_ratio to 10. That might be a tad too aggressive, but it worked for us.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 3 May 2012 13:52:02 +0000",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On Thu, May 3, 2012 at 6:42 AM, Jan Nielsen <[email protected]> wrote:\n> Hi John,\n>\n> On Thu, May 3, 2012 at 12:54 AM, John Lister <[email protected]>\n> wrote:\n>>\n>> On 03/05/2012 03:10, Jan Nielsen wrote:\n>>\n>>\n>> 300GB RAID10 2x15k drive for OS on local storage\n>> */dev/sda1 RA* 4096\n>> */dev/sda1 FS* ext4\n>> */dev/sda1 MO*\n>>\n>> 600GB RAID 10 8x15k drive for $PGDATA on SAN\n>> *IO Scheduler sda* noop anticipatory deadline [cfq]\n>> */dev/sdb1 RA* 4096\n>> */dev/sdb1 FS* xfs\n>> */dev/sdb1 MO*\n>> allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>>\n>> 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n>> *IO Scheduler sdb* noop anticipatory deadline [cfq]\n>> */dev/sde1 RA* 4096\n>> */dev/sde1 FS* xfs\n>> */dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>> *\n>>\n>>\n>> I was wondering if it would be better to put the xlog on the same disk as\n>> the OS? Apart from the occasional log writes I'd have thought most OS data\n>> is loaded into cache at the beginning, so you effectively have an unused\n>> disk. This gives you another spindle (mirrored) for your data.\n>>\n>> Or have I missed something fundamental?\n>\n>\n> I followed Gregory Smith's arguments from PostgreSQL 9.0 High Performance,\n> wherein he notes that WAL is sequential with constant cache flushes whereas\n> OS is a mix of sequential and random with rare cache flushes. This (might)\n> lead one to conclude that separating these would be good for at least the\n> WAL and likely both. Regardless, separating these very different\n> use-patterns seems like a \"Good Thing\" if tuning is ever needed for either.\n\nAnother consideration is journaling vs. non-journaling file systems.\nIf the WAL is on its own file system (not necessarily its own\nspindle), you can use a non-journaling file system like ext2. The WAL\nis actually quite small and is itself a journal, so there's no reason\nto use a journaling file system. On the other hand, you don't want\nthe operating system on ext2 because it takes a long time to recover\nfrom a crash.\n\nI think you're right about the OS: once it starts, there is very\nlittle disk activity. I'd say put both on the same disk but on\ndifferent partitions. The OS can use ext4 or some other modern\njournaling file system, and the WAL can use ext2. This also means you\ncan put the WAL on the outer (fastest) part of the disk and leave the\nslow inner tracks for the OS.\n\nCraig\n",
"msg_date": "Thu, 3 May 2012 08:46:25 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On 03/05/2012 16:46, Craig James wrote:\n> On Thu, May 3, 2012 at 6:42 AM, Jan Nielsen<[email protected]> wrote:\n>> Hi John,\n>>\n>> On Thu, May 3, 2012 at 12:54 AM, John Lister<[email protected]>\n>> wrote:\n>>> I was wondering if it would be better to put the xlog on the same disk as\n>>> the OS? Apart from the occasional log writes I'd have thought most OS data\n>>> is loaded into cache at the beginning, so you effectively have an unused\n>>> disk. This gives you another spindle (mirrored) for your data.\n>>>\n>>> Or have I missed something fundamental?\n>> I followed Gregory Smith's arguments from PostgreSQL 9.0 High Performance,\n>> wherein he notes that WAL is sequential with constant cache flushes whereas\n>> OS is a mix of sequential and random with rare cache flushes. This (might)\n>> lead one to conclude that separating these would be good for at least the\n>> WAL and likely both. Regardless, separating these very different\n>> use-patterns seems like a \"Good Thing\" if tuning is ever needed for either.\n> Another consideration is journaling vs. non-journaling file systems.\n> If the WAL is on its own file system (not necessarily its own\n> spindle), you can use a non-journaling file system like ext2. The WAL\n> is actually quite small and is itself a journal, so there's no reason\n> to use a journaling file system. On the other hand, you don't want\n> the operating system on ext2 because it takes a long time to recover\n> from a crash.\n>\n> I think you're right about the OS: once it starts, there is very\n> little disk activity. I'd say put both on the same disk but on\n> different partitions. The OS can use ext4 or some other modern\n> journaling file system, and the WAL can use ext2. This also means you\n> can put the WAL on the outer (fastest) part of the disk and leave the\n> slow inner tracks for the OS.\n>\nSorry I wasn't clear, I was thinking that the WAL and OS would go on \ndifferent partitions (for the reasons stated previously that the OS \ncould fill its partition) but that they share a disk/spindle - some \nnomenclature issues here I think. This would free up another (pair of) \nspindle(s) for the data which would seem much more beneficial in terms \nof performance than the WAL being separate...\n\nIn terms of the caching issues, I'm guessing that you would be sharing \nthe same cache regardless of whether the OS and WAL are on the same \ndisk(s) or not - unless you stick the WAL on a separate raid/disk \ncontroller to the OS...\n\nJohn\n\n-- \nwww.pricegoblin.co.uk\n\n",
"msg_date": "Thu, 03 May 2012 21:27:34 +0100",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "\n\nOn 5/3/12 8:46 AM, \"Craig James\" <[email protected]> wrote:\n\n>On Thu, May 3, 2012 at 6:42 AM, Jan Nielsen <[email protected]>\n>wrote:\n>> Hi John,\n>>\n>> On Thu, May 3, 2012 at 12:54 AM, John Lister\n>><[email protected]>\n>> wrote:\n>>>\n>>> On 03/05/2012 03:10, Jan Nielsen wrote:\n>>>\n>>>\n>>> 300GB RAID10 2x15k drive for OS on local storage\n>>> */dev/sda1 RA* 4096\n>>> */dev/sda1 FS* ext4\n>>> */dev/sda1 MO*\n>>>\n>>> 600GB RAID 10 8x15k drive for $PGDATA on SAN\n>>> *IO Scheduler sda* noop anticipatory deadline [cfq]\n>>> */dev/sdb1 RA* 4096\n>>> */dev/sdb1 FS* xfs\n>>> */dev/sdb1 MO*\n>>> allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>>>\n>>> 300GB RAID 10 2x15k drive for $PGDATA/pg_xlog on SAN\n>>> *IO Scheduler sdb* noop anticipatory deadline [cfq]\n>>> */dev/sde1 RA* 4096\n>>> */dev/sde1 FS* xfs\n>>> */dev/sde1 MO* \n>>>allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n>>> *\n>>>\n>>>\n>>> I was wondering if it would be better to put the xlog on the same disk\n>>>as\n>>> the OS? Apart from the occasional log writes I'd have thought most OS\n>>>data\n>>> is loaded into cache at the beginning, so you effectively have an\n>>>unused\n>>> disk. This gives you another spindle (mirrored) for your data.\n>>>\n>>> Or have I missed something fundamental?\n>>\n>>\n>> I followed Gregory Smith's arguments from PostgreSQL 9.0 High\n>>Performance,\n>> wherein he notes that WAL is sequential with constant cache flushes\n>>whereas\n>> OS is a mix of sequential and random with rare cache flushes. This\n>>(might)\n>> lead one to conclude that separating these would be good for at least\n>>the\n>> WAL and likely both. Regardless, separating these very different\n>> use-patterns seems like a \"Good Thing\" if tuning is ever needed for\n>>either.\n>\n>Another consideration is journaling vs. non-journaling file systems.\n\nNot really. ext4 with journaling on is faster than ext2 with it off.\next2 should never be used if ext4 is available.\n\nIf you absolutely refuse to have a journal, turn the journal in ext4 off\nand have a faster and safer file system than ext2.\next2 should never be used if ext4 is available.\n\n>If the WAL is on its own file system (not necessarily its own\n>spindle), you can use a non-journaling file system like ext2. The WAL\n>is actually quite small and is itself a journal, so there's no reason\n>to use a journaling file system. On the other hand, you don't want\n>the operating system on ext2 because it takes a long time to recover\n>from a crash.\n>\n>I think you're right about the OS: once it starts, there is very\n>little disk activity. I'd say put both on the same disk but on\n>different partitions. The OS can use ext4 or some other modern\n>journaling file system, and the WAL can use ext2. This also means you\n>can put the WAL on the outer (fastest) part of the disk and leave the\n>slow inner tracks for the OS.\n>\n>Craig\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 3 May 2012 22:09:11 +0000",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "\nOn 4/25/12 2:29 PM, \"Shaun Thomas\" <[email protected]> wrote:\n\n>On 04/25/2012 02:46 AM, John Lister wrote:\n>\n>> Hi, I'd be grateful if you could share any XFS performance tweaks as I'm\n>> not entirely sure I'm getting the most out of my setup and any\n>> additional guidance would be very helpful.\n>\n>Ok, I'll give this with a huge caveat: these settings came from lots of\n>testing, both load and pgbench based. I'll explain as much as I can.\n\nThe configured file system read-ahead is also an important factor -- how\nimportant is sequential scan performance? More read-ahead (up to a point)\nbiases your I/O for sequential throughput. The deadline scheduler is also\nbiased slightly for throughput, meaning it will sacrifice some random iops\nin order to get a sequential scan out of the way.\n\nWe have a couple systems that have aged a long time on XFS and ext3. Over\ntime, XFS slaughters ext3. This is due primarily to one feature: online\ndefragmentation. our ext3 systems are so horribly fragmented that\nsequential scans almost no longer exist. ext4 is supposed to be better at\npreventing fragmentation, but there is no online defragmenter. After a\nparallel restore, postgres is rather fragmented. XFS can correct that,\nand disk throughput for sequential scans increases significantly after\ndefragmentation. We schedule defragmentation passes nightly, which do\nnot take long after the initial pass.\n\n>\n>For initializing the XFS filesystem, you can take advantage of a few\n>settings that are pretty handy.\n>\n>* -d agcount=256 - Higher amount of allocation groups works better with\n>multi-CPU systems. We used 256, but you'll want to do tests to confirm\n>this. The point is that you can have several threads writing to the\n>filesystem simultaneously.\n>\n>* -l lazy-count=1 - Log data is written more efficiently. Gives a\n>measurable performance boost. Newer versions set this, but CentOS 5 has\n>the default to 0. I'm not sure about CentOS 6. Just enable it. :)\n>\n>* -l version=2 - Forces the most recent version of the logging\n>algorithm; allows a larger log buffer on mount. Since you're using\n>CentOS, the default value is still probably 1, which you don't want.\n>\n>And then there are the mount options. These actually seemed to make more\n>of an impact in our testing:\n>\n>* allocsize=256m - Database files are up to 1GB in size. To prevent\n>fragmentation, always pre-allocate in 256MB chunks. In recent 3.0+\n>kernels, this setting will result in phantom storage allocation as each\n>file is initially allocated with 256MB until all references have exited\n>memory. Due to aggressive Linux inode cache behavior, this may not\n>happen for several hours. On 3.0 kernels, this setting should be\n>removed. I think the 2.6.35 kernel had this backported, so *TEST THIS\n>SETTING BEFORE USING IT!*\n>\n>* logbufs=8 - Forces more of the log buffer to remain in RAM, improving\n>file deletion performance. Good for temporary files. XFS often gets\n>knocked for file deletion performance, and this brings it way up. Not\n>really an issue with PG usage, but handy anyway. See logbsize.\n>\n>* logbsize=256k - Larger log buffers keep track of modified files in\n>memory for better performance. See logbufs.\n>\n>* noatime - Negates touching the disk for file accesses. Reduces disk IO.\n>\n>* attr2 - Opportunistic improvement in the way inline extended\n>attributes are stored on-disk. 
Not strictly necessary, but handy.\n>\n>\n>I'm hoping someone else will pipe in, because these settings are pretty\n>\"old\" and based on a CentOS 5.5 setup. I haven't done any metrics on the\n>newer kernels, but I have followed enough to know allocsize is dangerous\n>on new systems.\n>\n>Your mileage may vary. :)\n>\n>-- \n>Shaun Thomas\n>OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n>312-444-8534\n>[email protected]\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer/ for terms and conditions\n>related to this email\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n",
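Collecting the mkfs and mount settings from the quoted post, plus the online defragmentation Scott describes, a sketch might look like the following (device name, mount point and the nightly window are assumptions; note Shaun's warning that allocsize misbehaves on some newer kernels, so test it first):

# Filesystem creation (destroys existing data on /dev/sdb1):
mkfs.xfs -f -d agcount=256 -l version=2,lazy-count=1 /dev/sdb1

# Mount with the options discussed above:
mount -o allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime /dev/sdb1 /pgdata

# Measure fragmentation, then reorganize online:
xfs_db -r -c frag /dev/sdb1
xfs_fsr -v /pgdata

# Example nightly pass limited to two hours, via /etc/crontab:
echo '30 2 * * * root /usr/sbin/xfs_fsr -t 7200 /pgdata' >> /etc/crontab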
"msg_date": "Thu, 3 May 2012 22:16:54 +0000",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "Starting to get some quantitative data now. Here is the results from the\npgbench scaling:\n\n pgbench -t 2000 -c 32 -S pgbench\n\nfor scales of 2^n where n=0..14 for scale, DB size in MB, and transactions\nper second:\n\nScale DB Size TPS\n-------------------\n 1 21 65618\n 2 36 66060\n 4 66 65939\n 8 125 66469\n 16 245 67065\n 32 484 60764\n 64 963 64676\n 128 1920 68151\n 256 3834 65933\n 512 7662 50777\n 1024 15360 66717\n 2048 30720 62811\n 4096 61440 5558\n 8192 122880 1854\n\nThe range 2048-8192 is an area to study in more detail, obviously. Feedback\nwelcome.\n\n\nCheers,\n\nJan\n\n\n*System* HP ProLiant BL460c G7\n*BIOS* HP I27 05/05/2011\n*CPU Sockets* 2\n*Chips* Intel(R) Xeon(R) CPU X5650 @ 2.67GHz\n Intel(R) Xeon(R) CPU X5650 @ 2.67GHz\n*CPU Cores* 24\n*Kernel Name* Linux\n*Kernel Version* 2.6.32-220.el6.x86_64\n*Machine Platform* x86_64\n*Processor Type* x86_64\n*Operating System* GNU/Linux\n*Distribution* CentOS release 6.2 (Final)\n*Write barriers* libata version 3.00 loaded.\n*MemTotal* 49410668kB\n*PAGE_SIZE* 4096\n*_PHYS_PAGES* 12352667\n*kernel.shmall* 6176333\n*kernel.shmmax* 25298259968\n*kernel.sem* 250 32000 32 128\n*vm.swappiness* 0\n*vm.overcommit_memory* 2\n*dirty_ratio* 5\n*dirty_background_ratio* 2\n\n300GB RAID1 2x15k drive for OS on local storage\n*/dev/sda1 RA* 4096\n*/dev/sda1 FS* ext4\n*/dev/sda1 MO*\n*IO Scheduler sda* noop anticipatory deadline [cfq]\n\n600GB RAID1+0 8x15k drive for $PGDATA on SAN\n*/dev/sdb1 RA* 4096\n*/dev/sdb1 FS* xfs\n*/dev/sdb1 MO*\nallocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n\n*IO Scheduler sdb* noop anticipatory deadline [cfq]\n\n300GB RAID1 2x15k drive for $PGDATA/pg_xlog on SAN\n*/dev/sde1 RA* 4096\n*/dev/sde1 FS* xfs\n*/dev/sde1 MO* allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime\n*IO Scheduler sde* noop anticipatory deadline [cfq]\n\n\nPG Configuration\n\n*PG shared_buffers* 16GB\n*PG log_line_prefix* '%t:%u@%r=>%d:[%p]: '\n*PG log_statement* ddl\n*PG log_min_duration_statement* 1s\n*PG listen_addresses* *\n*PG checkpoint_segments* 32\n*PG checkpoint_completion_target* 0.9\n*PG max_connections* 100\n*PG max_fsm_relations*\n*PG max_fsm_pages*\n*PG wal_buffers* 16MB\n*PG wal_sync_method* open_sync\n*PG effective_cache_size* 32GB\n*PG random_page_cost* 4\n*PG constraint_exclusion* partition\n*PG work_mem* 32MB\n*PG maintenance_work_mem* 2GB",
"msg_date": "Fri, 4 May 2012 09:07:23 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "Jan Nielsen <[email protected]> wrote:\n \n> The range 2048-8192 is an area to study in more detail, obviously.\n> Feedback welcome.\n \nI don't see what's to study there, really. Performance drops off\nwhen database size grows from 30GB to 60GB on a system with 48GB\nRAM. And even more when you double database size again. Access to\ndisk is slower than access to system RAM. Is there something else I\nshould notice that I'm missing?\n \nThe local dips in the list suggest that you're not controlling for\ncheckpoints or autovacuum runs as well as you might, or that you're\nnot using a large enough number of runs at each scale.\n \n-Kevin\n",
"msg_date": "Fri, 04 May 2012 10:33:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On Fri, May 4, 2012 at 8:07 AM, Jan Nielsen <[email protected]> wrote:\n> Starting to get some quantitative data now. Here is the results from the\n> pgbench scaling:\n>\n> pgbench -t 2000 -c 32 -S pgbench\n\nA single thread of pgbench is probably not enough to saturate 32\nsessions. What if you try -j 16 or -j 32?\n\nAlso, -t 2000 is mighty low.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 4 May 2012 09:30:57 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\n>>> Is it established practice in the Postgres world to separate indexes\n>>> from tables? I would assume that the reasoning of Richard Foote -\n>>> albeit for Oracle databases - is also true for Postgres:\n>\n>> Yes, it's an established practice. I'd call it something just short of\n>> a best practice though, as it really depends on your situation.\n>\n> What are the benefits?\n\nDisk seeks, basically. Yes, there are a lot of complications regarding \nall the various hardware/OS/PG level cachings, but at the end of the \nday, it's less work to have each drive concentrate on a single area \n(especially as we always require a heap scan at the moment).\n\n>> I also find his examples a bit contrived, and the whole \"multi-user\"\n>> argument irrelevant for common cases.\n>\n> Why is that?\n\nBecause most Postgres servers are dedicated to serving the same data \nor sets of data, and the number of \"other users\" calling ad-hoc queries \nagainst lots of different tables (per his example) is small. So this \nsentence just doesn't ring true to me:\n\n \" ... by the time weâve read the index leaf block, processed and \n read all the associated table blocks referenced by the index leaf \n block, the chances of there being no subsequent physical activity \n in the index tablespace due to another user session is virtually \n nil. We would still need to re-scan the disk to physically access \n the next index leaf block (or table block) anyways.\"\n\nThat's certainly not true for Postgres servers, and I doubt if it \nis quite that bad on Oracle either.\n\n>> I lean towards using separate tablespaces in Postgres, as the \n>> performance outweighs the additional>> complexity.\n\n> What about his argument with regards to access patterns (i.e.\n> interleaving index and table access during an index scan)? Also,\n> Shaun's advice to have more spindles available sounds convincing to\n> me, too.\n\nI don't buy his arguments. To do so, you'd have to buy a key point:\n\n \"when most physical I/Os in both index and table segments are \n effectively random, single block reads\"\n\nThey are not; hence, the rest of his argument falls apart. Certainly, \nif things were as truly random and uncached as he states, there would \nbe no benefit to separation.\n\nAs far as spindles, yes: like RAM, it's seldom the case to have \ntoo litte :) But as with all things, one should get some benchmarks \non your specific workload before making hardware changes. (Well, RAM \nmay be an exception to that, up to a point).\n\n>> It's down on the tuning list however: much more important\n>> is getting your kernel/volumes configured correctly, allocating\n>> shared_buffers sanely, separating pg_xlog, etc.\n\n> That does make a lot of sense. Separating pg_xlog would probably the\n> first thing I'd do especially since the IO pattern is so dramatically\n> different from tablespace IO access patterns.\n\nYep - moving pg_xlog to something optimized for small, constantly \nwritten files is one of the biggest and easiest wins. Other than \nfsync = off ;)\n\n- -- \nGreg Sabino Mullane [email protected]\nEnd Point Corporation http://www.endpoint.com/\nPGP Key: 0x14964AC8 201205151351\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAk+yl8YACgkQvJuQZxSWSshB+QCghfweMspFIqmP4rLv6/tcGPot\njscAn1SZAP1/KBcu/FEpWXilSnWjlA6Z\n=FX7j\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Tue, 15 May 2012 17:53:57 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On Tue, May 15, 2012 at 7:53 PM, Greg Sabino Mullane <[email protected]> wrote:\n>\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: RIPEMD160\n>\n>\n>>>> Is it established practice in the Postgres world to separate indexes\n>>>> from tables? I would assume that the reasoning of Richard Foote -\n>>>> albeit for Oracle databases - is also true for Postgres:\n>>\n>>> Yes, it's an established practice. I'd call it something just short of\n>>> a best practice though, as it really depends on your situation.\n>>\n>> What are the benefits?\n>\n> Disk seeks, basically. Yes, there are a lot of complications regarding\n> all the various hardware/OS/PG level cachings, but at the end of the\n> day, it's less work to have each drive concentrate on a single area\n\nHmm... I see your point. OTOH, the whole purpose of using NAS or SAN\nwith cache, logical volumes and multiple spindles per volume is to\nreduce the impact of slow disk operations like seeks. If in such a\nsituation your database operations are impacted by those seek\noperations then the setup does not seem optimal anyway. Bottom line\nis: with a setup properly tailored to the workload there should be no\nseeks \"visible\" to the database.\n\n> (especially as we always require a heap scan at the moment).\n\nAre you referring to the scan along tuple versions?\nhttp://facility9.com/2011/03/postgresql-row-storage-fundamentals/\n\n>>> I also find his examples a bit contrived, and the whole \"multi-user\"\n>>> argument irrelevant for common cases.\n>>\n>> Why is that?\n>\n> Because most Postgres servers are dedicated to serving the same data\n> or sets of data, and the number of \"other users\" calling ad-hoc queries\n> against lots of different tables (per his example) is small.\n\nI don't see how it should be relevant for this discussion whether\nselects are \"ad hoc\" or other. The mere fact that concurrent accesses\nto the same set of tables and indexes albeit to different data (keys)\nis sufficient to have a potential for seeks - even if disks for index\nand table are separated. And this will typically happen in a\nmultiuser application - even if all users use the same set of queries.\n\n> So this sentence just doesn't ring true to me:\n>\n> \" ... by the time weâve read the index leaf block, processed and\n> read all the associated table blocks referenced by the index leaf\n> block, the chances of there being no subsequent physical activity\n> in the index tablespace due to another user session is virtually\n> nil. We would still need to re-scan the disk to physically access\n> the next index leaf block (or table block) anyways.\"\n>\n> That's certainly not true for Postgres servers, and I doubt if it\n> is quite that bad on Oracle either.\n\nI don't think this has much to do with the brand. Richard just\ndescribes logical consequences of concurrent access (see my attempt at\nexplanation above). Fact remains that concurrent accesses rarely\ntarget for the same data and because of that you would see quite\nerratic access patterns to blocks. How they translate to actual disk\naccesses depends on various caching mechanisms in place and the\nphysical distribution of data across disks (RAID). 
But I think we\ncannot ignore the fact that the data requested by concurrent queries\nmost likely resides on different blocks.\n\n>>> I lean towards using separate tablespaces in Postgres, as the\n>>> performance outweighs the additional>> complexity.\n>\n>> What about his argument with regards to access patterns (i.e.\n>> interleaving index and table access during an index scan)? Also,\n>> Shaun's advice to have more spindles available sounds convincing to\n>> me, too.\n>\n> I don't buy his arguments. To do so, you'd have to buy a key point:\n>\n> \"when most physical I/Os in both index and table segments are\n> effectively random, single block reads\"\n>\n> They are not; hence, the rest of his argument falls apart. Certainly,\n> if things were as truly random and uncached as he states, there would\n> be no benefit to separation.\n\nYour argument with seeks also only works in absence of caching (see\nabove). I think Richard was mainly pointing out that /in absence of\ncaching/ different blocks need to be accessed here.\n\n> As far as spindles, yes: like RAM, it's seldom the case to have\n> too litte :) But as with all things, one should get some benchmarks\n> on your specific workload before making hardware changes. (Well, RAM\n> may be an exception to that, up to a point).\n\nCan you share some measurement data which backs the thesis that the\ndistribution of index and table to different disks is advantageous?\nThat would be interesting to see. Then one could also balance\nperformance benefits against other effects (manageability etc.) and\nsee on which side the advantage comes out.\n\nEven though I'm not convinced: Thank you for the interesting discussion!\n\nCheers\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 16 May 2012 11:36:04 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On Tue, May 15, 2012 at 11:53 AM, Greg Sabino Mullane <[email protected]>wrote:\n\n> >>> Is it established practice in the Postgres world to separate indexes\n> >>> from tables? I would assume that the reasoning of Richard Foote -\n> >>> albeit for Oracle databases - is also true for Postgres:\n> >\n> >> Yes, it's an established practice. I'd call it something just short of\n> >> a best practice though, as it really depends on your situation.\n> >\n> > What are the benefits?\n>\n> Disk seeks, basically. Yes, there are a lot of complications regarding\n> all the various hardware/OS/PG level cachings, but at the end of the\n> day, it's less work to have each drive concentrate on a single area\n> (especially as we always require a heap scan at the moment).\n>\n\nThanks for sharing your experience, Greg. What would a PG test-case for\nthis look like?\n\n\nCheers,\n\nJan\n\nOn Tue, May 15, 2012 at 11:53 AM, Greg Sabino Mullane <[email protected]> wrote:\n>>> Is it established practice in the Postgres world to separate indexes\n>>> from tables? I would assume that the reasoning of Richard Foote -\n>>> albeit for Oracle databases - is also true for Postgres:\n>\n>> Yes, it's an established practice. I'd call it something just short of\n>> a best practice though, as it really depends on your situation.\n>\n> What are the benefits?\n\nDisk seeks, basically. Yes, there are a lot of complications regarding\nall the various hardware/OS/PG level cachings, but at the end of the\nday, it's less work to have each drive concentrate on a single area\n(especially as we always require a heap scan at the moment).Thanks for sharing your experience, Greg. What would a PG test-case for this look like?\nCheers,Jan",
"msg_date": "Thu, 17 May 2012 11:54:47 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "After seeing less much performance during pg_dump and pg_restore operations\nfrom a 10x15k SAN RAID1+1 XFS mount (\nallocsize=256m,attr2,logbufs=8,logbsize=256k,noatime,nobarrier) than the\nlocal-storage 2x15k RAID1 EXT4 mount, I ran the following test of the\neffect of read-ahead (RA):\n\nfor t in `seq 1 1 10`\ndo\n for drive in `ls /dev/sd[b-z]`\n do\n for ra in 256 512 `seq 1024 1024 70000`\n do\n echo benchmark-test: $drive $ra\n blockdev --setra $ra $drive\n hdparm -t $drive\n hdparm -T $drive\n echo benchmark-test-complete: $drive $ra\n done\n done\ndone\n\nIn this test, the local mount's buffered reads perform best around RA~10k @\n150MB/sec then starts a steady decline. The SAN mount has a similar but\nmore subtle decline with a maximum around RA~5k @ 80MB/sec but with much\ngreater variance. I was surprised at the 80MB/sec for the SAN - I was\nexpecting 150MB/sec - and I'm also surprised at the variance. I understand\nthat there are many more elements involved for the SAN: more drives,\nnetwork overhead & latency, iscsi, etc. but I'm still surprised.\n\nIs this expected behavior for a SAN mount or is this a hint at some\nmisconfiguration? Thoughts?\n\n\nCheers,\n\nJan",
"msg_date": "Sat, 19 May 2012 09:47:50 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "Oops - couple of corrections and clarifications below...\n\nOn Sat, May 19, 2012 at 9:47 AM, Jan Nielsen <[email protected]>wrote:\n\n> After seeing less much performance during pg_dump and pg_restore\n> operations from a 10x15k SAN RAID1+1 XFS mount\n>\n\n10x15k RAID1+0 on a SAN with XFS on /dev/sdc\n\n\n> (allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime,nobarrier) than the\n> local-storage 2x15k RAID1 EXT4 mount,\n>\n\n2x15k RAID1 on local-storage with EXT4 on /dev/sda\n\n\n> I ran the following test of the effect of read-ahead (RA):\n>\n> for t in `seq 1 1 10`\n> do\n> for drive in `ls /dev/sd[b-z]`\n> do\n> for ra in 256 512 `seq 1024 1024 70000`\n> do\n> echo benchmark-test: $drive $ra\n> blockdev --setra $ra $drive\n> hdparm -t $drive\n> hdparm -T $drive\n> echo benchmark-test-complete: $drive $ra\n> done\n> done\n> done\n>\n> In this test, the local mount's buffered reads perform best around RA~10k\n> @ 150MB/sec then starts a steady decline. The SAN mount has a similar but\n> more subtle decline with a maximum around RA~5k @ 80MB/sec but with much\n> greater variance. I was surprised at the 80MB/sec for the SAN - I was\n> expecting 150MB/sec - and I'm also surprised at the variance. I understand\n> that there are many more elements involved for the SAN: more drives,\n> network overhead & latency, iscsi, etc. but I'm still surprised.\n>\n> Is this expected behavior for a SAN mount or is this a hint at some\n> misconfiguration? Thoughts?\n>\n\nIs this variance, as contrasted to the local-storage drive, and drop in\nperformance in relation to the local-storage typical of SAN?\n\n\n>\n>\n> Cheers,\n>\n> Jan\n>\n>\n\nOops - couple of corrections and clarifications below...On Sat, May 19, 2012 at 9:47 AM, Jan Nielsen <[email protected]> wrote:\nAfter seeing less much performance during pg_dump and pg_restore operations from a 10x15k SAN RAID1+1 XFS mount \n10x15k RAID1+0 on a SAN with XFS on /dev/sdc (allocsize=256m,attr2,logbufs=8,logbsize=256k,noatime,nobarrier) than the local-storage 2x15k RAID1 EXT4 mount, \n2x15k RAID1 on local-storage with EXT4 on /dev/sda I ran the following test of the effect of read-ahead (RA):\nfor t in `seq 1 1 10`do for drive in `ls /dev/sd[b-z]` do for ra in 256 512 `seq 1024 1024 70000` do echo benchmark-test: $drive $ra\n\n blockdev --setra $ra $drive hdparm -t $drive hdparm -T $drive echo benchmark-test-complete: $drive $ra done donedoneIn this test, the local mount's buffered reads perform best around RA~10k @ 150MB/sec then starts a steady decline. The SAN mount has a similar but more subtle decline with a maximum around RA~5k @ 80MB/sec but with much greater variance. I was surprised at the 80MB/sec for the SAN - I was expecting 150MB/sec - and I'm also surprised at the variance. I understand that there are many more elements involved for the SAN: more drives, network overhead & latency, iscsi, etc. but I'm still surprised.\nIs this expected behavior for a SAN mount or is this a hint at some misconfiguration? Thoughts?\nIs this variance, as contrasted to the local-storage drive, and drop in performance in relation to the local-storage typical of SAN? \n\nCheers,Jan",
"msg_date": "Sat, 19 May 2012 13:11:49 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Recommendations"
},
{
"msg_contents": "On 20/05/12 03:47, Jan Nielsen wrote:\n> In this test, the local mount's buffered reads perform best around RA~10k @\n> 150MB/sec then starts a steady decline. The SAN mount has a similar but\n> more subtle decline with a maximum around RA~5k @ 80MB/sec but with much\n> greater variance. I was surprised at the 80MB/sec for the SAN - I was\n> expecting 150MB/sec - and I'm also surprised at the variance. I understand\n> that there are many more elements involved for the SAN: more drives,\n> network overhead& latency, iscsi, etc. but I'm still surprised.\n>\n> Is this expected behavior for a SAN mount or is this a hint at some\n> misconfiguration? Thoughts?\n>\n>\n\nIs the SAN mount via iSCSI? If so and also if the connection is a \nsingle 1Gbit interface then 80MB/s is reasonable. You might get closer \nto 100MB/s by tweaking things like MTU for the interface concerned, but \nto get more performance either bonding several 1Gbit links or using \n10Gbit is required.\n\nRegards\n\nMark\n",
"msg_date": "Thu, 24 May 2012 16:59:44 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Recommendations"
}
] |
[
{
"msg_contents": "Dear list,\n\nWe have a database schema, which looks the same as the attached script.\n\nWhen filling the tables with data, and skipping analyze on the table (so \npg_stats contains no records for table 'a'), the first select in the \nscript runs fast, but after an analyze the planner decides to sequence \nscan tables b and c, thus making the query much slower. Can somebody help \nme solving this issue, or tuning our installation to not to use sequence \nscans in this case?\n\nThanks in advance,\n\n\nKojedzinszky Richard\nEuronet Magyarorszag Informatikai Zrt.",
"msg_date": "Tue, 24 Apr 2012 10:11:43 +0200 (CEST)",
"msg_from": "Richard Kojedzinszky <[email protected]>",
"msg_from_op": true,
"msg_subject": "query optimization"
},
{
"msg_contents": "Richard Kojedzinszky <[email protected]> wrote:\n \n> tuning our installation to not to use sequence scans in this case?\n \nMake sure effective_cache_size is set to the sum of shared_buffers\nand whatever your OS shows as usable for caching. Try adjusting\ncost factors: maybe random_page_cost between 1 and 2, and\ncpu_tuple_cost between 0.03 and 0.05.\n \n-Kevin\n",
"msg_date": "Thu, 26 Apr 2012 13:23:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization"
},
{
"msg_contents": "Richard Kojedzinszky <[email protected]> writes:\n> Dear list,\n> We have a database schema, which looks the same as the attached script.\n\n> When filling the tables with data, and skipping analyze on the table (so \n> pg_stats contains no records for table 'a'), the first select in the \n> script runs fast, but after an analyze the planner decides to sequence \n> scan tables b and c, thus making the query much slower. Can somebody help \n> me solving this issue, or tuning our installation to not to use sequence \n> scans in this case?\n\nUm ... did you analyze all the tables, or just some of them? I get\nsub-millisecond runtimes if all four tables have been analyzed, but it\ndoes seem to pick lousy plans if, say, only a and b have been analyzed.\n\nWhat you really need for this query structure is the parameterized-path\nwork I've been doing for 9.2; but at least on the exact example given,\nI'm not seeing that 9.1 is that much worse.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2012 15:17:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization "
},
{
"msg_contents": "Tom Lane wrote on 26.04.2012 21:17:\n> Richard Kojedzinszky<[email protected]> writes:\n>> Dear list,\n>> We have a database schema, which looks the same as the attached script.\n>\n>> When filling the tables with data, and skipping analyze on the table (so\n>> pg_stats contains no records for table 'a'), the first select in the\n>> script runs fast, but after an analyze the planner decides to sequence\n>> scan tables b and c, thus making the query much slower. Can somebody help\n>> me solving this issue, or tuning our installation to not to use sequence\n>> scans in this case?\n>\n> Um ... did you analyze all the tables, or just some of them? I get\n> sub-millisecond runtimes if all four tables have been analyzed, but it\n> does seem to pick lousy plans if, say, only a and b have been analyzed.\n>\n\nHere it's similar to Richard's experience:\n\nBefore analyzing the four tables, the first statement yields this plan:\n\nMerge Left Join (cost=504.89..2634509.91 rows=125000000 width=16) (actual time=0.103..0.108 rows=1 loops=1)\n Merge Cond: (a.b = b.id)\n -> Sort (cost=504.89..506.14 rows=500 width=8) (actual time=0.043..0.043 rows=1 loops=1)\n Sort Key: a.b\n Sort Method: quicksort Memory: 17kB\n -> Bitmap Heap Scan on a (cost=12.14..482.47 rows=500 width=8) (actual time=0.028..0.029 rows=1 loops=1)\n Recheck Cond: (id = 4)\n -> Bitmap Index Scan on a_idx1 (cost=0.00..12.01 rows=500 width=0) (actual time=0.021..0.021 rows=1 loops=1)\n Index Cond: (id = 4)\n -> Materialize (cost=0.00..884002.52 rows=50000000 width=8) (actual time=0.041..0.057 rows=5 loops=1)\n -> Merge Join (cost=0.00..759002.52 rows=50000000 width=8) (actual time=0.037..0.051 rows=5 loops=1)\n Merge Cond: (b.id = c.id)\n -> Index Scan using b_idx1 on b (cost=0.00..4376.26 rows=100000 width=4) (actual time=0.016..0.018 rows=5 loops=1)\n -> Materialize (cost=0.00..4626.26 rows=100000 width=4) (actual time=0.017..0.022 rows=5 loops=1)\n -> Index Scan using c_idx1 on c (cost=0.00..4376.26 rows=100000 width=4) (actual time=0.014..0.017 rows=5 loops=1)\nTotal runtime: 0.209 ms\n\nThis continues to stay the plan for about 10-15 repetitions, then it turns to this plan\n\nHash Right Join (cost=2701.29..6519.30 rows=1 width=16) (actual time=79.604..299.227 rows=1 loops=1)\n Hash Cond: (b.id = a.b)\n -> Hash Join (cost=2693.00..6136.00 rows=100000 width=8) (actual time=79.550..265.251 rows=100000 loops=1)\n Hash Cond: (b.id = c.id)\n -> Seq Scan on b (cost=0.00..1443.00 rows=100000 width=4) (actual time=0.011..36.158 rows=100000 loops=1)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=4) (actual time=79.461..79.461 rows=100000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 2735kB\n -> Seq Scan on c (cost=0.00..1443.00 rows=100000 width=4) (actual time=0.010..32.930 rows=100000 loops=1)\n -> Hash (cost=8.28..8.28 rows=1 width=8) (actual time=0.015..0.015 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using a_idx1 on a (cost=0.00..8.28 rows=1 width=8) (actual time=0.010..0.012 rows=1 loops=1)\n Index Cond: (id = 4)\nTotal runtime: 299.564 ms\n\n(I guess autovacuum kicked in, because that the same plan I get when running analyze on all four tables right after populating them)\n\nAnd the second one yields this one here (Regardless of analyze or not):\n\nQUERY PLAN\nNested Loop Left Join (cost=0.00..16.89 rows=1 width=16) (actual time=0.027..0.031 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..16.57 rows=1 width=12) (actual time=0.020..0.022 rows=1 loops=1)\n -> Index Scan 
using a_idx1 on a (cost=0.00..8.28 rows=1 width=8) (actual time=0.011..0.012 rows=1 loops=1)\n Index Cond: (id = 4)\n -> Index Scan using b_idx1 on b (cost=0.00..8.28 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=1)\n Index Cond: (a.b = id)\n -> Index Scan using c_idx1 on c (cost=0.00..0.31 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=1)\n Index Cond: (b.id = id)\nTotal runtime: 0.104 ms\n\n\nMy version:\nPostgreSQL 9.1.3, compiled by Visual C++ build 1500, 32-bit\nRunning on Windows XP SP3\n\nshared_buffers = 768MB\nwork_mem = 24MB\t\neffective_cache_size = 1024MB\n\nAll other (relevant) settings are on defaults\n\nRegards\nThomas\n\n\n\n\n",
"msg_date": "Thu, 26 Apr 2012 21:38:11 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization"
},
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> Tom Lane wrote on 26.04.2012 21:17:\n>> Um ... did you analyze all the tables, or just some of them? I get\n>> sub-millisecond runtimes if all four tables have been analyzed, but it\n>> does seem to pick lousy plans if, say, only a and b have been analyzed.\n\n> Here it's similar to Richard's experience:\n> Before analyzing the four tables, the first statement yields this plan:\n> [ merge joins ]\n> This continues to stay the plan for about 10-15 repetitions, then it turns to this plan\n> [ hash joins ]\n\nHmm. I see it liking the merge-join plan (with minor variations) with\nor without analyze data, but if just some of the tables have been\nanalyzed, it goes for the hash plan which is a good deal slower. The\ncost estimates aren't that far apart though. In any case, the only\nreason the merge join is so fast is that the data is perfectly ordered\nin each table; on a less contrived example, it could well be a lot\nslower.\n\n> And the second one yields this one here (Regardless of analyze or not):\n\nYeah, the trick there is that it's valid to re-order the joins, since\nthey're both left joins.\n\nIn git HEAD I get something like this:\n\nregression=# explain analyze select * from a left join (b inner join c on b.id = c.id) on a.b = b.id where a.id = 4;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..17.18 rows=1 width=16) (actual time=0.024..0.026 rows=1 loops=1)\n -> Index Scan using a_idx1 on a (cost=0.00..8.38 rows=1 width=8) (actual time=0.010..0.010 rows=1 loops=1)\n Index Cond: (id = 4)\n -> Nested Loop (cost=0.00..8.80 rows=1 width=8) (actual time=0.011..0.012 rows=1 loops=1)\n -> Index Only Scan using b_idx1 on b (cost=0.00..8.38 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=1)\n Index Cond: (id = a.b)\n Heap Fetches: 1\n -> Index Only Scan using c_idx1 on c (cost=0.00..0.41 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=1)\n Index Cond: (id = b.id)\n Heap Fetches: 1\n Total runtime: 0.080 ms\n(11 rows)\n\nbut 9.1 and older are not smart enough to do it like that when they\ncan't re-order the joins.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2012 16:08:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization "
},
{
"msg_contents": "\n\nOn 04/26/2012 04:08 PM, Tom Lane wrote:\n> Thomas Kellerer<[email protected]> writes:\n>> Tom Lane wrote on 26.04.2012 21:17:\n>>> Um ... did you analyze all the tables, or just some of them? I get\n>>> sub-millisecond runtimes if all four tables have been analyzed, but it\n>>> does seem to pick lousy plans if, say, only a and b have been analyzed.\n>> Here it's similar to Richard's experience:\n>> Before analyzing the four tables, the first statement yields this plan:\n>> [ merge joins ]\n>> This continues to stay the plan for about 10-15 repetitions, then it turns to this plan\n>> [ hash joins ]\n> Hmm. I see it liking the merge-join plan (with minor variations) with\n> or without analyze data, but if just some of the tables have been\n> analyzed, it goes for the hash plan which is a good deal slower. The\n> cost estimates aren't that far apart though. In any case, the only\n> reason the merge join is so fast is that the data is perfectly ordered\n> in each table; on a less contrived example, it could well be a lot\n> slower.\n>\n\nIt's not so terribly contrived, is it? It's common enough to have tables \nwhich are append-only and to join them by something that corresponds to \nthe append order (serial field, timestamp etc.)\n\ncheers\n\nandrew\n",
"msg_date": "Thu, 26 Apr 2012 16:27:38 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization"
},
{
"msg_contents": "Dear Tom,\n\nThanks for your response. I my test cases, until an analyze have been run, \nthe queries run fast. After only a have been analyzed, the query plan \nchanges so that a sequence scan for b and c tables is done, and joining \nthem with 'a' is done within memory.\n\nSo, my tests are here, with autovacuum turned off:\n\nkrichy=> \\i test.sql\nDROP TABLE\nDROP TABLE\nDROP TABLE\nDROP TABLE\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\nINSERT 0 100000\nINSERT 0 100000\nINSERT 0 100000\nINSERT 0 100000\nCREATE INDEX\nCREATE INDEX\nCREATE INDEX\nCREATE INDEX\nkrichy=> explain analyze select * from a left join (b inner join c on b.id \n= c.id) on a.b = b.id where a.id = 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=504.88..2633669.90 rows=125000000 width=16) \n(actual time=0.183..0.189 rows=1 loops=1)\n Merge Cond: (a.b = b.id)\n -> Sort (cost=504.88..506.13 rows=500 width=8) (actual \ntime=0.073..0.074 rows=1 loops=1)\n Sort Key: a.b\n Sort Method: quicksort Memory: 17kB\n -> Bitmap Heap Scan on a (cost=12.13..482.47 rows=500 width=8) \n(actual time=0.048..0.048 rows=1 loops=1)\n Recheck Cond: (id = 1)\n -> Bitmap Index Scan on a_idx1 (cost=0.00..12.01 rows=500 \nwidth=0) (actual time=0.044..0.044 rows=1 loops=1)\n Index Cond: (id = 1)\n -> Materialize (cost=0.00..883162.52 rows=50000000 width=8) (actual \ntime=0.103..0.108 rows=2 loops=1)\n -> Merge Join (cost=0.00..758162.52 rows=50000000 width=8) \n(actual time=0.100..0.104 rows=2 loops=1)\n Merge Cond: (b.id = c.id)\n -> Index Scan using b_idx1 on b (cost=0.00..3956.26 \nrows=100000 width=4) (actual time=0.050..0.050 rows=2 loops=1)\n -> Materialize (cost=0.00..4206.26 rows=100000 width=4) \n(actual time=0.048..0.049 rows=2 loops=1)\n -> Index Scan using c_idx1 on c (cost=0.00..3956.26 \nrows=100000 width=4) (actual time=0.046..0.047 rows=2 loops=1)\n Total runtime: 0.276 ms\n(16 rows)\n\nkrichy=> ANALYZE a;\nANALYZE\nkrichy=> explain analyze select * from a left join (b inner join c on b.id \n= c.id) on a.b = b.id where a.id = 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=8.29..883178.31 rows=500 width=16) (actual \ntime=0.050..0.056 rows=1 loops=1)\n Merge Cond: (b.id = a.b)\n -> Merge Join (cost=0.00..758162.52 rows=50000000 width=8) (actual \ntime=0.034..0.038 rows=2 loops=1)\n Merge Cond: (b.id = c.id)\n -> Index Scan using b_idx1 on b (cost=0.00..3956.26 rows=100000 \nwidth=4) (actual time=0.015..0.016 rows=2 loops=1)\n -> Materialize (cost=0.00..4206.26 rows=100000 width=4) (actual \ntime=0.015..0.017 rows=2 loops=1)\n -> Index Scan using c_idx1 on c (cost=0.00..3956.26 \nrows=100000 width=4) (actual time=0.012..0.013 rows=2 loops=1)\n -> Materialize (cost=8.29..8.29 rows=1 width=8) (actual \ntime=0.015..0.016 rows=1 loops=1)\n -> Sort (cost=8.29..8.29 rows=1 width=8) (actual \ntime=0.013..0.013 rows=1 loops=1)\n Sort Key: a.b\n Sort Method: quicksort Memory: 17kB\n -> Index Scan using a_idx1 on a (cost=0.00..8.28 rows=1 \nwidth=8) (actual time=0.007..0.008 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 0.101 ms\n(14 rows)\n\nkrichy=> ANALYZE b;\nANALYZE\nkrichy=> explain analyze select * from a left join (b inner join c on b.id \n= c.id) on a.b = b.id where a.id = 1;\n QUERY 
PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=2651.29..6419.30 rows=1 width=16) (actual \ntime=83.823..257.890 rows=1 loops=1)\n Hash Cond: (b.id = a.b)\n -> Hash Join (cost=2643.00..6036.00 rows=100000 width=8) (actual \ntime=83.790..224.552 rows=100000 loops=1)\n Hash Cond: (c.id = b.id)\n -> Seq Scan on c (cost=0.00..1393.00 rows=100000 width=4) \n(actual time=0.010..35.925 rows=100000 loops=1)\n -> Hash (cost=1393.00..1393.00 rows=100000 width=4) (actual \ntime=83.752..83.752 rows=100000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 2344kB\n -> Seq Scan on b (cost=0.00..1393.00 rows=100000 width=4) \n(actual time=0.009..36.302 rows=100000 loops=1)\n -> Hash (cost=8.28..8.28 rows=1 width=8) (actual time=0.012..0.012 \nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using a_idx1 on a (cost=0.00..8.28 rows=1 \nwidth=8) (actual time=0.007..0.008 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 258.245 ms\n(13 rows)\n\nkrichy=> ANALYZE c;\nANALYZE\nkrichy=> explain analyze select * from a left join (b inner join c on b.id \n= c.id) on a.b = b.id where a.id = 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=2651.29..6419.30 rows=1 width=16) (actual \ntime=83.295..255.653 rows=1 loops=1)\n Hash Cond: (b.id = a.b)\n -> Hash Join (cost=2643.00..6036.00 rows=100000 width=8) (actual \ntime=83.275..222.373 rows=100000 loops=1)\n Hash Cond: (b.id = c.id)\n -> Seq Scan on b (cost=0.00..1393.00 rows=100000 width=4) \n(actual time=0.010..35.726 rows=100000 loops=1)\n -> Hash (cost=1393.00..1393.00 rows=100000 width=4) (actual \ntime=83.238..83.238 rows=100000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 2344kB\n -> Seq Scan on c (cost=0.00..1393.00 rows=100000 width=4) \n(actual time=0.009..36.243 rows=100000 loops=1)\n -> Hash (cost=8.28..8.28 rows=1 width=8) (actual time=0.011..0.011 \nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using a_idx1 on a (cost=0.00..8.28 rows=1 \nwidth=8) (actual time=0.008..0.009 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 256.008 ms\n(13 rows)\n\nSo after analyzing all the tables, the result changed much, psql uses \nother plans to do the query, and in my case it is effectively much slower.\n\nMy configuration file has:\nwork_mem=16MB\n\nwith this removed, the query goes fast again, but I dont know why the more \nmemory makes psql chose a worse plan.\n\nThanks in advance,\n\nKojedzinszky Richard\nEuronet Magyarorszag Informatikai Zrt.\n\nOn Thu, 26 Apr 2012, Tom Lane wrote:\n\n> Date: Thu, 26 Apr 2012 15:17:18 -0400\n> From: Tom Lane <[email protected]>\n> To: Richard Kojedzinszky <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [PERFORM] query optimization \n> \n> Richard Kojedzinszky <[email protected]> writes:\n>> Dear list,\n>> We have a database schema, which looks the same as the attached script.\n>\n>> When filling the tables with data, and skipping analyze on the table (so\n>> pg_stats contains no records for table 'a'), the first select in the\n>> script runs fast, but after an analyze the planner decides to sequence\n>> scan tables b and c, thus making the query much slower. Can somebody help\n>> me solving this issue, or tuning our installation to not to use sequence\n>> scans in this case?\n>\n> Um ... 
did you analyze all the tables, or just some of them? I get\n> sub-millisecond runtimes if all four tables have been analyzed, but it\n> does seem to pick lousy plans if, say, only a and b have been analyzed.\n>\n> What you really need for this query structure is the parameterized-path\n> work I've been doing for 9.2; but at least on the exact example given,\n> I'm not seeing that 9.1 is that much worse.\n>\n> \t\t\tregards, tom lane\n>\n",
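Since the plan flip appears tied to work_mem, it can be confirmed per-session without editing the configuration file; a sketch against the same database (1MB is the server default, i.e. the "setting removed" case):

psql -d krichy <<'SQL'
\timing
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT * FROM a LEFT JOIN (b JOIN c ON b.id = c.id) ON a.b = b.id WHERE a.id = 1;
SET work_mem = '16MB';
EXPLAIN ANALYZE SELECT * FROM a LEFT JOIN (b JOIN c ON b.id = c.id) ON a.b = b.id WHERE a.id = 1;
SQL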
"msg_date": "Sat, 28 Apr 2012 08:11:25 +0200 (CEST)",
"msg_from": "Richard Kojedzinszky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query optimization "
}
] |
[
{
"msg_contents": "Hi all:\nCan someone please guide me as to how to solve this problem? If this is the wrong forum, please let me know which one to post this one in. I am new to Postgres (about 3 months into it)\n\nI have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program that does some computation. It takes in a date range and for one pair of personnel (two employees in a company) it calculates some values over the time period. It takes about 40ms (milli seconds) to complete and give me the answer. All good so far.\n\nNow I have to run the same pgplsql on all possible combinations of employees and with 542 employees that is about say 300,000 unique pairs.\n\nSo (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and show it on a screen. No user wants to wait for 3 hours, they can probably wait for 10 minutes (even that is too much for a UI application). How do I solve this scaling problem? Can I have multiple parellel sessions and each session have multiple/processes that do a pair each at 40 ms and then collate the results. Does PostGres or pgplsql have any parallel computing capability.\n\nThanks, Venki\nHi all:Can someone please guide me as to how to solve this problem? If this is the wrong forum, please let me know which one to post this one in. I am new to Postgres (about 3 months into it)I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program that does some computation. It takes in a date range and for one pair of personnel (two employees in a company) it calculates some values over the time period. It takes about 40ms (milli seconds) to complete and give me the answer. All good so far.Now I have to run the same pgplsql on all possible combinations of employees and with 542 employees that is about say 300,000 unique pairs.So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and show it on a screen. No\n user wants to wait for 3 hours, they can probably wait for 10 minutes (even that is too much for a UI application). How do I solve this scaling problem? Can I have multiple parellel sessions and each session have multiple/processes that do a pair each at 40 ms and then collate the results. Does PostGres or pgplsql have any parallel computing capability.Thanks, Venki",
"msg_date": "Wed, 25 Apr 2012 11:52:03 -0700 (PDT)",
"msg_from": "Venki Ramachandran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel Scaling of a pgplsql problem"
},
{
"msg_contents": "On Wed, Apr 25, 2012 at 11:52 AM, Venki Ramachandran <\[email protected]> wrote:\n\n> Hi all:\n> Can someone please guide me as to how to solve this problem? If this is\n> the wrong forum, please let me know which one to post this one in. I am new\n> to Postgres (about 3 months into it)\n>\n> I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql\n> program that does some computation. It takes in a date range and for one\n> pair of personnel (two employees in a company) it calculates some values\n> over the time period. It takes about 40ms (milli seconds) to complete and\n> give me the answer. All good so far.\n>\n> Now I have to run the same pgplsql on all possible combinations of\n> employees and with 542 employees that is about say 300,000 unique pairs.\n>\n> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and\n> show it on a screen. No user wants to wait for 3 hours, they can probably\n> wait for 10 minutes (even that is too much for a UI application). How do I\n> solve this scaling problem? Can I have multiple parellel sessions and each\n> session have multiple/processes that do a pair each at 40 ms and then\n> collate the results. Does PostGres or pgplsql have any parallel computing\n> capability.\n>\n\nThe question is, how much of that 40ms is spent performing the calculation,\nhow much is spent querying, and how much is function call overhead, and how\nmuch is round trip between the client and server with the query and\nresults? Depending upon the breakdown, it is entirely possible that the\nactual per-record multiplier can be kept down to a couple of milliseconds\nif you restructure things to query data in bulk and only call a single\nfunction to do the work. If you get it down to 4ms, that's a 20 minute\nquery. Get it down to 1ms and you're looking at only 5 minutes for what\nwould appear to be a fairly compute-intensive report over a relatively\nlarge dataset.\n\nOn Wed, Apr 25, 2012 at 11:52 AM, Venki Ramachandran <[email protected]> wrote:\nHi all:Can someone please guide me as to how to solve this problem? If this is the wrong forum, please let me know which one to post this one in. I am new to Postgres (about 3 months into it)\nI have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program that does some computation. It takes in a date range and for one pair of personnel (two employees in a company) it calculates some values over the time period. It takes about 40ms (milli seconds) to complete and give me the answer. All good so far.\nNow I have to run the same pgplsql on all possible combinations of employees and with 542 employees that is about say 300,000 unique pairs.So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and show it on a screen. No\n user wants to wait for 3 hours, they can probably wait for 10 minutes (even that is too much for a UI application). How do I solve this scaling problem? Can I have multiple parellel sessions and each session have multiple/processes that do a pair each at 40 ms and then collate the results. Does PostGres or pgplsql have any parallel computing capability.\nThe question is, how much of that 40ms is spent performing the calculation, how much is spent querying, and how much is function call overhead, and how much is round trip between the client and server with the query and results? 
Depending upon the breakdown, it is entirely possible that the actual per-record multiplier can be kept down to a couple of milliseconds if you restructure things to query data in bulk and only call a single function to do the work. If you get it down to 4ms, that's a 20 minute query. Get it down to 1ms and you're looking at only 5 minutes for what would appear to be a fairly compute-intensive report over a relatively large dataset.",
"msg_date": "Wed, 25 Apr 2012 12:36:25 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
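Whether the bulk approach pays off depends on how much of the 40 ms really is call and round-trip overhead, but as a rough sketch of what Samuel is describing, a single set-based query can compute every pair in one pass. All names here (emp_activity, emp_id, ts, value) are hypothetical, since the real schema was not posted:

-- One self-join plus GROUP BY replaces ~300,000 separate function calls,
-- so the per-pair cost is dominated by the aggregation itself rather than
-- by function-call and client round-trip overhead.
SELECT a.emp_id AS emp_1,
       b.emp_id AS emp_2,
       sum(a.value - b.value) AS diff_total,
       avg(a.value - b.value) AS diff_avg
FROM   emp_activity a
JOIN   emp_activity b
       ON  b.ts = a.ts            -- align the two employees hour by hour
       AND b.emp_id > a.emp_id    -- each unordered pair exactly once
WHERE  a.ts >= '2012-01-01' AND a.ts < '2012-02-01'
GROUP  BY a.emp_id, b.emp_id;

An index on (ts, emp_id) or (emp_id, ts) would normally be wanted for a join like this, and the arithmetic would of course have to mirror whatever the existing function actually computes.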
{
"msg_contents": "Venki Ramachandran <[email protected]> wrote:\n \n> I have PostGres 9.0 database in a AWS server (x-large) and a\n> pgplsql program that does some computation. It takes in a date\n> range and for one pair of personnel (two employees in a company)\n> it calculates some values over the time period. It takes about\n> 40ms (milli seconds) to complete and give me the answer. All good\n> so far.\n \nMaybe; maybe not. If you wrote out *how to do it* in your code, it\nprobably won't scale well. The trick to scaling is to write\n*declaratively*: say *what you want* rather than *how to get it*.\n \nAggregates, window functions, CTEs, and/or the generate_series()\nfunction may be useful. It's hard to give more specific advice\nwithout more detail about the problem.\n \n-Kevin\n",
"msg_date": "Wed, 25 Apr 2012 15:04:45 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
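As a small illustration of the declarative style, and of generate_series() in particular, the hourly loop that a PL/pgSQL function would otherwise run can be produced as a plain hour grid and joined against the data; emp_activity is again a stand-in name for the real table:

-- Hours with no rows still appear in the output, with a zero value.
SELECT h.hr,
       e.emp_id,
       coalesce(sum(e.value), 0) AS hourly_value
FROM   generate_series(timestamp '2012-01-01 01:00',
                       timestamp '2012-02-02 00:00',
                       interval '1 hour') AS h(hr)
LEFT   JOIN emp_activity e ON e.ts = h.hr
GROUP  BY h.hr, e.emp_id
ORDER  BY h.hr;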
{
"msg_contents": "Hello\n\n2012/4/25 Venki Ramachandran <[email protected]>:\n> Hi all:\n> Can someone please guide me as to how to solve this problem? If this is the\n> wrong forum, please let me know which one to post this one in. I am new to\n> Postgres (about 3 months into it)\n>\n> I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program\n> that does some computation. It takes in a date range and for one pair of\n> personnel (two employees in a company) it calculates some values over the\n> time period. It takes about 40ms (milli seconds) to complete and give me the\n> answer. All good so far.\n>\n> Now I have to run the same pgplsql on all possible combinations of employees\n> and with 542 employees that is about say 300,000 unique pairs.\n>\n> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and\n> show it on a screen. No user wants to wait for 3 hours, they can probably\n> wait for 10 minutes (even that is too much for a UI application). How do I\n> solve this scaling problem? Can I have multiple parellel sessions and each\n> session have multiple/processes that do a pair each at 40 ms and then\n> collate the results. Does PostGres or pgplsql have any parallel computing\n> capability.\n\nno, PostgreSQL doesn't support parallel processing of one query. You\ncan use some hardcore tricks and implement cooperative functions in C\n- but this is hard work for beginner. The most simple solution is\nparallelism on application level.\n\nRegards\n\nPavel Stehule\n\n>\n> Thanks, Venki\n",
"msg_date": "Wed, 25 Apr 2012 22:09:26 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
{
"msg_contents": "Another question (probably a silly mistake) while debugging this problem:\nI put in insert statements into the pgplsql code to collect the current_timestamp and see where the code was spending most of it time....The output is as follows:\n\n log_text | insertdatetime\n-------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------\n INPUT VARIABLES,Src,emp_id_1,emp_id_100,StartDate,2012-01-01,EndDate,2012-02-01,Tou,DOnPk,CRR Type,ptp-obligations,MinPositivePirce,1.00 | 2012-04-25 20:56:42.691965+00\n Starting get_DAM_Value function | 2012-04-25 20:56:42.691965+00\n StartDate,2012-01-01 01:00:00,EndDate,2012-02-02 00:00:00 | 2012-04-25 20:56:42.691965+00\n StartHr,vSrcprice,vSinkprice,vDiff,vAvgValue,vTotalDAMValue,vPkTotal,vOffPkTotal | 2012-04-25 20:56:42.691965+00\n 2012-01-01 01:00:00,18.52,16.15,-2.370000,-2.370000,0.000000,0.000000,-2.370000 | 2012-04-25 20:56:42.691965+00\n 2012-01-01 02:00:00,17.22,15.60,-1.620000,-1.620000,0.000000,0.000000,-3.990000 | 2012-04-25 20:56:42.691965+00\n 2012-01-01 03:00:00,18.22,17.55,-0.670000,-0.670000,0.000000,0.000000,-4.660000 | 2012-04-25 20:56:42.691965+00\n 2012-01-01 04:00:00,18.53,18.13,-0.400000,-0.400000,0.000000,0.000000,-5.060000 | 2012-04-25 20:56:42.691965+00\n ...\n ...\n ...\n ...\n 2012-02-02 00:00:00,2,17.13,17.13,0.000000,0.000000,0.000000,-7940.250000,-5216.290000 | 2012-04-25 20:56:42.691965+00\n 2012-02-02 00:00:00,3,16.54,16.54,0.000000,0.000000,0.000000,-7940.250000,-5216.290000 | 2012-04-25 20:56:42.691965+00\n 2012-02-02 00:00:00,4,16.27,16.27,0.000000,0.000000,0.000000,-7940.250000,-5216.290000 | 2012-04-25 20:56:42.691965+00\n TotalPEAKMVal,-7940.250000,NumIntervals,2034,AvgRTMVAlue,-3.903761 | 2012-04-25 20:56:42.691965+00\n OUTPUT VARIABLES,AvgPurValue,-2.84,AvgSaleValue,-3.90,Profit,-1.06,ProfitPercentage,-106.00 | 2012-04-25 20:56:42.691965+00\n(3832 rows)\n\nWhy doesn't the current_timestamp value change within the pgpsql code? For every select from a table to compute, I am inserting into my debug_log dummy table. For all 3832 rows I have the same current_timestamp value which I was hoping to get by the following insert statement:\ninsert into debug_log ('some log text', current_timestamp);\n\nBut when I run this procedure from the psql command line, it takes 11 seconds....I gave a bigger date range than what I stated as (40 ms) in my first post hence the change to 11 seconds.\n\ndev=> select calc_comp('emp_id_1','emp_id_100','DAM-RTM',to_date('2012-01-01','yyyy-mm-dd'),to_date('2012-02-01','yyyy-mm-dd'),'DOnPk','ptp-obligations',1.00, 0);\n crr_valuation \n-------------------------------\n 0||-2.84|-3.90|-1.06|-106.00|\n(1 row)\n\nTime: 11685.828 ms.\n\nThe last input value 1/0 while calling the above function controls, if any log line messages should be inserted or not, 0=insert, 1 = do_not_insert. When I toggle the flag the overall timing did not change. Does it not some time in ms to write 3832 rows to a table?\nWhy is my current_timestamp value not changing. I was expecting the difference between the last row's timestamp value MINUS the first row's tiemstamp value to equal my 11.685 seconds. 
What is wrong here?\n\n\n-Venki\n\n\n________________________________\n From: Samuel Gendler <[email protected]>\nTo: Venki Ramachandran <[email protected]> \nCc: \"[email protected]\" <[email protected]> \nSent: Wednesday, April 25, 2012 12:36 PM\nSubject: Re: [PERFORM] Parallel Scaling of a pgplsql problem\n \n\n\n\n\nOn Wed, Apr 25, 2012 at 11:52 AM, Venki Ramachandran <[email protected]> wrote:\n\nHi all:\n>Can someone please guide me as to how to solve this problem? If this is the wrong forum, please let me know which one to post this one in. I am new to Postgres (about 3 months into it)\n>\n>\n>I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program that does some computation. It takes in a date range and for one pair of personnel (two employees in a company) it calculates some values over the time period. It takes about 40ms (milli seconds) to complete and give me the answer. All good so far.\n>\n>\n>Now I have to run the same pgplsql on all possible combinations of employees and with 542 employees that is about say 300,000 unique pairs.\n>\n>\n>So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and show it on a screen. No user wants to wait for 3 hours, they can probably wait for 10 minutes (even that is too much for a UI application). How do I solve this scaling problem? Can I have multiple parellel sessions and each session have multiple/processes that do a pair each at 40 ms and then collate the results. Does PostGres or pgplsql have any parallel computing capability.\n\nThe question is, how much of that 40ms is spent performing the calculation, how much is spent querying, and how much is function call overhead, and how much is round trip between the client and server with the query and results? Depending upon the breakdown, it is entirely possible that the actual per-record multiplier can be kept down to a couple of milliseconds if you restructure things to query data in bulk and only call a single function to do the work. If you get it down to 4ms, that's a 20 minute query. Get it down to 1ms and you're looking at only 5 minutes for what would appear to be a fairly compute-intensive report over a relatively large dataset.\nAnother question (probably a silly mistake) while debugging this problem:I put in insert statements into the pgplsql code to collect the current_timestamp and see where the code was spending most of it time....The output is as follows: log_text \n | insertdatetime-------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------- INPUT VARIABLES,Src,emp_id_1,emp_id_100,StartDate,2012-01-01,EndDate,2012-02-01,Tou,DOnPk,CRR Type,ptp-obligations,MinPositivePirce,1.00 | 2012-04-25 20:56:42.691965+00 Starting get_DAM_Value function \n | 2012-04-25 20:56:42.691965+00 StartDate,2012-01-01 01:00:00,EndDate,2012-02-02 00:00:00 | 2012-04-25 20:56:42.691965+00 StartHr,vSrcprice,vSinkprice,vDiff,vAvgValue,vTotalDAMValue,vPkTotal,vOffPkTotal \n | 2012-04-25 20:56:42.691965+00 2012-01-01 01:00:00,18.52,16.15,-2.370000,-2.370000,0.000000,0.000000,-2.370000 | 2012-04-25 20:56:42.691965+00 2012-01-01 02:00:00,17.22,15.60,-1.620000,-1.620000,0.000000,0.000000,-3.990000 | 2012-04-25 20:56:42.691965+00 2012-01-01 03:00:00,18.22,17.55,-0.670000,-0.670000,0.000000,0.000000,-4.660000 \n | 2012-04-25 20:56:42.691965+00 2012-01-01 04:00:00,18.53,18.13,-0.400000,-0.400000,0.000000,0.000000,-5.060000 | 2012-04-25 20:56:42.691965+00 ... 
... ... ... 2012-02-02 00:00:00,2,17.13,17.13,0.000000,0.000000,0.000000,-7940.250000,-5216.290000 | 2012-04-25\n 20:56:42.691965+00 2012-02-02 00:00:00,3,16.54,16.54,0.000000,0.000000,0.000000,-7940.250000,-5216.290000 | 2012-04-25 20:56:42.691965+00 2012-02-02 00:00:00,4,16.27,16.27,0.000000,0.000000,0.000000,-7940.250000,-5216.290000 | 2012-04-25 20:56:42.691965+00 TotalPEAKMVal,-7940.250000,NumIntervals,2034,AvgRTMVAlue,-3.903761 \n | 2012-04-25 20:56:42.691965+00 OUTPUT VARIABLES,AvgPurValue,-2.84,AvgSaleValue,-3.90,Profit,-1.06,ProfitPercentage,-106.00 | 2012-04-25 20:56:42.691965+00(3832 rows)Why doesn't the current_timestamp value change within the pgpsql code? For every select from a table to compute, I am inserting into my debug_log dummy table. For all 3832 rows I have the same current_timestamp value which I was hoping to get by the following\n insert statement:insert into debug_log ('some log text', current_timestamp);But when I run this procedure from the psql command line, it takes 11 seconds....I gave a bigger date range than what I stated as (40 ms) in my first post hence the change to 11 seconds.dev=> select calc_comp('emp_id_1','emp_id_100','DAM-RTM',to_date('2012-01-01','yyyy-mm-dd'),to_date('2012-02-01','yyyy-mm-dd'),'DOnPk','ptp-obligations',1.00, 0); crr_valuation ------------------------------- 0||-2.84|-3.90|-1.06|-106.00|(1 row)Time: 11685.828 ms.The last input value 1/0 while calling the above function controls, if any log line messages\n should be inserted or not, 0=insert, 1 = do_not_insert. When I toggle the flag the overall timing did not change. Does it not some time in ms to write 3832 rows to a table?Why is my current_timestamp value not changing. I was expecting the difference between the last row's timestamp value MINUS the first row's tiemstamp value to equal my 11.685 seconds. What is wrong here?-Venki From: Samuel Gendler <[email protected]> To: Venki Ramachandran\n <[email protected]> Cc: \"[email protected]\" <[email protected]> Sent: Wednesday, April 25, 2012 12:36 PM Subject: Re: [PERFORM] Parallel Scaling of a pgplsql problem \nOn Wed, Apr 25, 2012 at 11:52 AM, Venki Ramachandran <[email protected]> wrote:\nHi all:Can someone please guide me as to how to solve this problem? If this is the wrong forum, please let me know which one to post this one in. I am new to Postgres (about 3 months into it)\nI have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program that does some computation. It takes in a date range and for one pair of personnel (two employees in a company) it calculates some values over the time period. It takes about 40ms (milli seconds) to complete and give me the answer. All good so far.\nNow I have to run the same pgplsql on all possible combinations of employees and with 542 employees that is about say 300,000 unique pairs.So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and show it on a screen. No\n user wants to wait for 3 hours, they can probably wait for 10 minutes (even that is too much for a UI application). How do I solve this scaling problem? Can I have multiple parellel sessions and each session have multiple/processes that do a pair each at 40 ms and then collate the results. Does PostGres or pgplsql have any parallel computing capability.\nThe question is, how much of that 40ms is spent performing the calculation, how much is spent querying, and how much is function call overhead, and how much is round trip between the client and server with the query and results? 
Depending upon the breakdown, it is entirely possible that the actual per-record multiplier can be kept down to a couple of milliseconds if you restructure things to query data in bulk and only call a single function to do the work. If you get it down to 4ms, that's a 20 minute query. Get it down to 1ms and you're looking at only 5 minutes for what would appear to be a fairly compute-intensive report over a relatively large dataset.",
"msg_date": "Wed, 25 Apr 2012 14:12:03 -0700 (PDT)",
"msg_from": "Venki Ramachandran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
{
"msg_contents": "Hello\n\nhttp://www.postgresql.org/docs/8.1/static/functions-datetime.html\n\nCURRENT_TIME and CURRENT_TIMESTAMP deliver values with time zone;\nLOCALTIME and LOCALTIMESTAMP deliver values without time zone.\n\nCURRENT_TIME, CURRENT_TIMESTAMP, LOCALTIME, and LOCALTIMESTAMP can\noptionally take a precision parameter, which causes the result to be\nrounded to that many fractional digits in the seconds field. Without a\nprecision parameter, the result is given to the full available\nprecision.\n\nSome examples:\n\nSELECT CURRENT_TIME;\nResult: 14:39:53.662522-05\n\nSELECT CURRENT_DATE;\nResult: 2001-12-23\n\nSELECT CURRENT_TIMESTAMP;\nResult: 2001-12-23 14:39:53.662522-05\n\nSELECT CURRENT_TIMESTAMP(2);\nResult: 2001-12-23 14:39:53.66-05\n\nSELECT LOCALTIMESTAMP;\nResult: 2001-12-23 14:39:53.662522\n\nSince these functions return the start time of the current\ntransaction, their values do not change during the transaction. This\nis considered a feature: the intent is to allow a single transaction\nto have a consistent notion of the \"current\" time, so that multiple\nmodifications within the same transaction bear the same time stamp.\n\n Note: Other database systems might advance these values more frequently.\n\nPostgreSQL also provides functions that return the start time of the\ncurrent statement, as well as the actual current time at the instant\nthe function is called. The complete list of non-SQL-standard time\nfunctions is:\n\ntransaction_timestamp()\nstatement_timestamp()\n\nRegards\n\nPavel Stehule\n\n2012/4/25 Venki Ramachandran <[email protected]>:\n> Another question (probably a silly mistake) while debugging this problem:\n> I put in insert statements into the pgplsql code to collect the\n> current_timestamp and see where the code was spending most of it time....The\n> output is as follows:\n>\n> log_text\n> |\n> insertdatetime\n> -------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------\n> INPUT\n> VARIABLES,Src,emp_id_1,emp_id_100,StartDate,2012-01-01,EndDate,2012-02-01,Tou,DOnPk,CRR\n> Type,ptp-obligations,MinPositivePirce,1.00 | 2012-04-25 20:56:42.691965+00\n> Starting get_DAM_Value function\n>\n> | 2012-04-25 20:56:42.691965+00\n> StartDate,2012-01-01 01:00:00,EndDate,2012-02-02 00:00:00\n> |\n> 2012-04-25 20:56:42.691965+00\n> StartHr,vSrcprice,vSinkprice,vDiff,vAvgValue,vTotalDAMValue,vPkTotal,vOffPkTotal\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 01:00:00,18.52,16.15,-2.370000,-2.370000,0.000000,0.000000,-2.370000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 02:00:00,17.22,15.60,-1.620000,-1.620000,0.000000,0.000000,-3.990000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 03:00:00,18.22,17.55,-0.670000,-0.670000,0.000000,0.000000,-4.660000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 04:00:00,18.53,18.13,-0.400000,-0.400000,0.000000,0.000000,-5.060000\n> | 2012-04-25\n> 20:56:42.691965+00\n> ...\n> ...\n> ...\n> ...\n> 2012-02-02\n> 00:00:00,2,17.13,17.13,0.000000,0.000000,0.000000,-7940.250000,-5216.290000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-02-02\n> 00:00:00,3,16.54,16.54,0.000000,0.000000,0.000000,-7940.250000,-5216.290000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-02-02\n> 00:00:00,4,16.27,16.27,0.000000,0.000000,0.000000,-7940.250000,-5216.290000\n> | 2012-04-25\n> 20:56:42.691965+00\n> TotalPEAKMVal,-7940.250000,NumIntervals,2034,AvgRTMVAlue,-3.903761\n> |\n> 2012-04-25 
20:56:42.691965+00\n> OUTPUT\n> VARIABLES,AvgPurValue,-2.84,AvgSaleValue,-3.90,Profit,-1.06,ProfitPercentage,-106.00\n> | 2012-04-25\n> 20:56:42.691965+00\n> (3832 rows)\n>\n> Why doesn't the current_timestamp value change within the pgpsql code? For\n> every select from a table to compute, I am inserting into my debug_log dummy\n> table. For all 3832 rows I have the same current_timestamp value which I was\n> hoping to get by the following insert statement:\n> insert into debug_log ('some log text', current_timestamp);\n>\n> But when I run this procedure from the psql command line, it takes 11\n> seconds....I gave a bigger date range than what I stated as (40 ms) in my\n> first post hence the change to 11 seconds.\n>\n> dev=> select\n> calc_comp('emp_id_1','emp_id_100','DAM-RTM',to_date('2012-01-01','yyyy-mm-dd'),to_date('2012-02-01','yyyy-mm-dd'),'DOnPk','ptp-obligations',1.00,\n> 0);\n> crr_valuation\n> -------------------------------\n> 0||-2.84|-3.90|-1.06|-106.00|\n> (1 row)\n>\n> Time: 11685.828 ms.\n>\n> The last input value 1/0 while calling the above function controls, if any\n> log line messages should be inserted or not, 0=insert, 1 = do_not_insert.\n> When I toggle the flag the overall timing did not change. Does it not some\n> time in ms to write 3832 rows to a table?\n> Why is my current_timestamp value not changing. I was expecting the\n> difference between the last row's timestamp value MINUS the first row's\n> tiemstamp value to equal my 11.685 seconds. What is wrong here?\n>\n>\n> -Venki\n>\n> ________________________________\n> From: Samuel Gendler <[email protected]>\n>\n> To: Venki Ramachandran <[email protected]>\n> Cc: \"[email protected]\" <[email protected]>\n> Sent: Wednesday, April 25, 2012 12:36 PM\n>\n> Subject: Re: [PERFORM] Parallel Scaling of a pgplsql problem\n>\n>\n>\n> On Wed, Apr 25, 2012 at 11:52 AM, Venki Ramachandran\n> <[email protected]> wrote:\n>\n> Hi all:\n> Can someone please guide me as to how to solve this problem? If this is the\n> wrong forum, please let me know which one to post this one in. I am new to\n> Postgres (about 3 months into it)\n>\n> I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program\n> that does some computation. It takes in a date range and for one pair of\n> personnel (two employees in a company) it calculates some values over the\n> time period. It takes about 40ms (milli seconds) to complete and give me the\n> answer. All good so far.\n>\n> Now I have to run the same pgplsql on all possible combinations of employees\n> and with 542 employees that is about say 300,000 unique pairs.\n>\n> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and\n> show it on a screen. No user wants to wait for 3 hours, they can probably\n> wait for 10 minutes (even that is too much for a UI application). How do I\n> solve this scaling problem? Can I have multiple parellel sessions and each\n> session have multiple/processes that do a pair each at 40 ms and then\n> collate the results. 
Does PostGres or pgplsql have any parallel computing\n> capability.\n>\n>\n> The question is, how much of that 40ms is spent performing the calculation,\n> how much is spent querying, and how much is function call overhead, and how\n> much is round trip between the client and server with the query and results?\n> Depending upon the breakdown, it is entirely possible that the actual\n> per-record multiplier can be kept down to a couple of milliseconds if you\n> restructure things to query data in bulk and only call a single function to\n> do the work. If you get it down to 4ms, that's a 20 minute query. Get it\n> down to 1ms and you're looking at only 5 minutes for what would appear to be\n> a fairly compute-intensive report over a relatively large dataset.\n>\n>\n>\n>\n",
"msg_date": "Wed, 25 Apr 2012 23:26:22 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
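The behaviour described above is easy to demonstrate: within one transaction CURRENT_TIMESTAMP stays frozen at the transaction start, while clock_timestamp(), one of the non-SQL-standard functions in the same family, keeps advancing:

BEGIN;
SELECT current_timestamp AS txn_time, clock_timestamp() AS wall_time;
SELECT pg_sleep(2);
SELECT current_timestamp AS txn_time, clock_timestamp() AS wall_time;
-- txn_time is identical in both result rows; wall_time is about 2 seconds later.
COMMIT;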
{
"msg_contents": "Replacing current_timestamp() with transaction_timestamp() and statement_timestamp() did not help!!!. \n\nMy timestamp values are still the same. I can't believe this is not possible in PG. In oracle you can use 'select sysdate from dual' and insert that value and you can see which sql statement or function is taking a long time during development mode. All I want to do is to find where my pgplsql code is spending its 11 seconds. The overall pseudo code is as follows:\n\nwhile end_date >= start_date\nselect some_columns INTO vars1 from tbl-1 where emp_id = 'first emp';\nselect some_columns INTO vars2 from tbl-1 where emp_id = 'second emp';\nDo some computation with vars1 and vars2;\nselect calc_comp(emp_1, emp_2) into v_profit;\nend while\n\nI want to see for each select and each call to the function what the system date is (or something else) which will tell me where the 11 seconds is being spent. How do I do that?\n\nThanks, Venki\n \n\n\n________________________________\n From: Pavel Stehule <[email protected]>\nTo: Venki Ramachandran <[email protected]> \nCc: Samuel Gendler <[email protected]>; \"[email protected]\" <[email protected]> \nSent: Wednesday, April 25, 2012 2:26 PM\nSubject: Re: [PERFORM] Parallel Scaling of a pgplsql problem\n \nHello\n\nhttp://www.postgresql.org/docs/8.1/static/functions-datetime.html\n\nCURRENT_TIME and CURRENT_TIMESTAMP deliver values with time zone;\nLOCALTIME and LOCALTIMESTAMP deliver values without time zone.\n\nCURRENT_TIME, CURRENT_TIMESTAMP, LOCALTIME, and LOCALTIMESTAMP can\noptionally take a precision parameter, which causes the result to be\nrounded to that many fractional digits in the seconds field. Without a\nprecision parameter, the result is given to the full available\nprecision.\n\nSome examples:\n\nSELECT CURRENT_TIME;\nResult: 14:39:53.662522-05\n\nSELECT CURRENT_DATE;\nResult: 2001-12-23\n\nSELECT CURRENT_TIMESTAMP;\nResult: 2001-12-23 14:39:53.662522-05\n\nSELECT CURRENT_TIMESTAMP(2);\nResult: 2001-12-23 14:39:53.66-05\n\nSELECT LOCALTIMESTAMP;\nResult: 2001-12-23 14:39:53.662522\n\nSince these functions return the start time of the current\ntransaction, their values do not change during the transaction. This\nis considered a feature: the intent is to allow a single transaction\nto have a consistent notion of the \"current\" time, so that multiple\nmodifications within the same transaction bear the same time stamp.\n\n Note: Other database systems might advance these values more frequently.\n\nPostgreSQL also provides functions that return the start time of the\ncurrent statement, as well as the actual current time at the instant\nthe function is called. 
The complete list of non-SQL-standard time\nfunctions is:\n\ntransaction_timestamp()\nstatement_timestamp()\n\nRegards\n\nPavel Stehule\n\n2012/4/25 Venki Ramachandran <[email protected]>:\n> Another question (probably a silly mistake) while debugging this problem:\n> I put in insert statements into the pgplsql code to collect the\n> current_timestamp and see where the code was spending most of it time....The\n> output is as follows:\n>\n> log_text\n> |\n> insertdatetime\n> -------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------\n> INPUT\n> VARIABLES,Src,emp_id_1,emp_id_100,StartDate,2012-01-01,EndDate,2012-02-01,Tou,DOnPk,CRR\n> Type,ptp-obligations,MinPositivePirce,1.00 | 2012-04-25 20:56:42.691965+00\n> Starting get_DAM_Value function\n>\n> | 2012-04-25 20:56:42.691965+00\n> StartDate,2012-01-01 01:00:00,EndDate,2012-02-02 00:00:00\n> |\n> 2012-04-25 20:56:42.691965+00\n> StartHr,vSrcprice,vSinkprice,vDiff,vAvgValue,vTotalDAMValue,vPkTotal,vOffPkTotal\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 01:00:00,18.52,16.15,-2.370000,-2.370000,0.000000,0.000000,-2.370000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 02:00:00,17.22,15.60,-1.620000,-1.620000,0.000000,0.000000,-3.990000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 03:00:00,18.22,17.55,-0.670000,-0.670000,0.000000,0.000000,-4.660000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-01-01\n> 04:00:00,18.53,18.13,-0.400000,-0.400000,0.000000,0.000000,-5.060000\n> | 2012-04-25\n> 20:56:42.691965+00\n> ...\n> ...\n> ...\n> ...\n> 2012-02-02\n> 00:00:00,2,17.13,17.13,0.000000,0.000000,0.000000,-7940.250000,-5216.290000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-02-02\n> 00:00:00,3,16.54,16.54,0.000000,0.000000,0.000000,-7940.250000,-5216.290000\n> | 2012-04-25\n> 20:56:42.691965+00\n> 2012-02-02\n> 00:00:00,4,16.27,16.27,0.000000,0.000000,0.000000,-7940.250000,-5216.290000\n> | 2012-04-25\n> 20:56:42.691965+00\n> TotalPEAKMVal,-7940.250000,NumIntervals,2034,AvgRTMVAlue,-3.903761\n> |\n> 2012-04-25 20:56:42.691965+00\n> OUTPUT\n> VARIABLES,AvgPurValue,-2.84,AvgSaleValue,-3.90,Profit,-1.06,ProfitPercentage,-106.00\n> | 2012-04-25\n> 20:56:42.691965+00\n> (3832 rows)\n>\n> Why doesn't the current_timestamp value change within the pgpsql code? For\n> every select from a table to compute, I am inserting into my debug_log dummy\n> table. For all 3832 rows I have the same current_timestamp value which I was\n> hoping to get by the following insert statement:\n> insert into debug_log ('some log text', current_timestamp);\n>\n> But when I run this procedure from the psql command line, it takes 11\n> seconds....I gave a bigger date range than what I stated as (40 ms) in my\n> first post hence the change to 11 seconds.\n>\n> dev=> select\n> calc_comp('emp_id_1','emp_id_100','DAM-RTM',to_date('2012-01-01','yyyy-mm-dd'),to_date('2012-02-01','yyyy-mm-dd'),'DOnPk','ptp-obligations',1.00,\n> 0);\n> crr_valuation\n> -------------------------------\n> 0||-2.84|-3.90|-1.06|-106.00|\n> (1 row)\n>\n> Time: 11685.828 ms.\n>\n> The last input value 1/0 while calling the above function controls, if any\n> log line messages should be inserted or not, 0=insert, 1 = do_not_insert.\n> When I toggle the flag the overall timing did not change. Does it not some\n> time in ms to write 3832 rows to a table?\n> Why is my current_timestamp value not changing. 
I was expecting the\n> difference between the last row's timestamp value MINUS the first row's\n> tiemstamp value to equal my 11.685 seconds. What is wrong here?\n>\n>\n> -Venki\n>\n> ________________________________\n> From: Samuel Gendler <[email protected]>\n>\n> To: Venki Ramachandran <[email protected]>\n> Cc: \"[email protected]\" <[email protected]>\n> Sent: Wednesday, April 25, 2012 12:36 PM\n>\n> Subject: Re: [PERFORM] Parallel Scaling of a pgplsql problem\n>\n>\n>\n> On Wed, Apr 25, 2012 at 11:52 AM, Venki Ramachandran\n> <[email protected]> wrote:\n>\n> Hi all:\n> Can someone please guide me as to how to solve this problem? If this is the\n> wrong forum, please let me know which one to post this one in. I am new to\n> Postgres (about 3 months into it)\n>\n> I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program\n> that does some computation. It takes in a date range and for one pair of\n> personnel (two employees in a company) it calculates some values over the\n> time period. It takes about 40ms (milli seconds) to complete and give me the\n> answer. All good so far.\n>\n> Now I have to run the same pgplsql on all possible combinations of employees\n> and with 542 employees that is about say 300,000 unique pairs.\n>\n> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and\n> show it on a screen. No user wants to wait for 3 hours, they can probably\n> wait for 10 minutes (even that is too much for a UI application). How do I\n> solve this scaling problem? Can I have multiple parellel sessions and each\n> session have multiple/processes that do a pair each at 40 ms and then\n> collate the results. Does PostGres or pgplsql have any parallel computing\n> capability.\n>\n>\n> The question is, how much of that 40ms is spent performing the calculation,\n> how much is spent querying, and how much is function call overhead, and how\n> much is round trip between the client and server with the query and results?\n> Depending upon the breakdown, it is entirely possible that the actual\n> per-record multiplier can be kept down to a couple of milliseconds if you\n> restructure things to query data in bulk and only call a single function to\n> do the work. If you get it down to 4ms, that's a 20 minute query. Get it\n> down to 1ms and you're looking at only 5 minutes for what would appear to be\n> a fairly compute-intensive report over a relatively large dataset.\n>\n>\n>\n>\nReplacing current_timestamp() with transaction_timestamp() and statement_timestamp() did not help!!!. My timestamp values are still the same. I can't believe this is not possible in PG. In\n oracle you can use 'select sysdate from dual' and insert that value and you can see which sql statement or function is taking a long time during development mode. All I want to do is to find where my pgplsql code is spending its 11 seconds. The overall pseudo code is as follows:while end_date >= start_date select some_columns INTO vars1 from tbl-1\n where emp_id = 'first emp'; select some_columns INTO vars2 from tbl-1 where emp_id = 'second emp'; Do some computation with vars1 and vars2; select\n calc_comp(emp_1, emp_2) into v_profit;end whileI want to see for each select and each call to the function what the system date is (or something else) which will tell me where the 11 seconds is being spent. 
How do I do that?Thanks, Venki From: Pavel Stehule <[email protected]> To: Venki Ramachandran <[email protected]> Cc: Samuel Gendler <[email protected]>; \"[email protected]\" <[email protected]> Sent: Wednesday, April 25, 2012 2:26 PM Subject: Re: [PERFORM] Parallel Scaling of a pgplsql problem \nHellohttp://www.postgresql.org/docs/8.1/static/functions-datetime.htmlCURRENT_TIME and CURRENT_TIMESTAMP deliver values with time zone;LOCALTIME and LOCALTIMESTAMP deliver values without time zone.CURRENT_TIME, CURRENT_TIMESTAMP, LOCALTIME, and LOCALTIMESTAMP canoptionally take a precision parameter, which causes the result to berounded to that many fractional digits in the seconds field. Without aprecision parameter, the result is given to the full availableprecision.Some examples:SELECT CURRENT_TIME;Result: 14:39:53.662522-05SELECT CURRENT_DATE;Result: 2001-12-23SELECT CURRENT_TIMESTAMP;Result: 2001-12-23 14:39:53.662522-05SELECT CURRENT_TIMESTAMP(2);Result: 2001-12-23 14:39:53.66-05SELECT LOCALTIMESTAMP;Result: 2001-12-23 14:39:53.662522Since these functions return the start time of the currenttransaction, their values do not\n change during the transaction. Thisis considered a feature: the intent is to allow a single transactionto have a consistent notion of the \"current\" time, so that multiplemodifications within the same transaction bear the same time stamp. Note: Other database systems might advance these values more frequently.PostgreSQL also provides functions that return the start time of thecurrent statement, as well as the actual current time at the instantthe function is called. The complete list of non-SQL-standard timefunctions is:transaction_timestamp()statement_timestamp()RegardsPavel Stehule2012/4/25 Venki Ramachandran <[email protected]>:> Another question (probably a silly mistake) while debugging this problem:> I put in insert statements into the\n pgplsql code to collect the> current_timestamp and see where the code was spending most of it time....The> output is as follows:>> log_text> |> insertdatetime> -------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------> INPUT>\n VARIABLES,Src,emp_id_1,emp_id_100,StartDate,2012-01-01,EndDate,2012-02-01,Tou,DOnPk,CRR> Type,ptp-obligations,MinPositivePirce,1.00 | 2012-04-25 20:56:42.691965+00> Starting get_DAM_Value function>> | 2012-04-25 20:56:42.691965+00> StartDate,2012-01-01 01:00:00,EndDate,2012-02-02 00:00:00> |> 2012-04-25 20:56:42.691965+00> StartHr,vSrcprice,vSinkprice,vDiff,vAvgValue,vTotalDAMValue,vPkTotal,vOffPkTotal> \n | 2012-04-25> 20:56:42.691965+00> 2012-01-01> 01:00:00,18.52,16.15,-2.370000,-2.370000,0.000000,0.000000,-2.370000> | 2012-04-25> 20:56:42.691965+00> 2012-01-01> 02:00:00,17.22,15.60,-1.620000,-1.620000,0.000000,0.000000,-3.990000> | 2012-04-25> 20:56:42.691965+00> 2012-01-01> 03:00:00,18.22,17.55,-0.670000,-0.670000,0.000000,0.000000,-4.660000> \n | 2012-04-25> 20:56:42.691965+00> 2012-01-01> 04:00:00,18.53,18.13,-0.400000,-0.400000,0.000000,0.000000,-5.060000> | 2012-04-25> 20:56:42.691965+00> ...> ...> ...> ...> 2012-02-02> 00:00:00,2,17.13,17.13,0.000000,0.000000,0.000000,-7940.250000,-5216.290000> | 2012-04-25> 20:56:42.691965+00>\n 2012-02-02> 00:00:00,3,16.54,16.54,0.000000,0.000000,0.000000,-7940.250000,-5216.290000> | 2012-04-25> 20:56:42.691965+00> 2012-02-02> 00:00:00,4,16.27,16.27,0.000000,0.000000,0.000000,-7940.250000,-5216.290000> | 2012-04-25> 20:56:42.691965+00> 
TotalPEAKMVal,-7940.250000,NumIntervals,2034,AvgRTMVAlue,-3.903761> \n |> 2012-04-25 20:56:42.691965+00> OUTPUT> VARIABLES,AvgPurValue,-2.84,AvgSaleValue,-3.90,Profit,-1.06,ProfitPercentage,-106.00> | 2012-04-25> 20:56:42.691965+00> (3832 rows)>> Why doesn't the current_timestamp value change within the pgpsql code? For> every select from a table to compute, I am inserting into my debug_log dummy> table. For all 3832 rows I have the same current_timestamp value which I was> hoping to get by the following insert statement:> insert into debug_log ('some log text', current_timestamp);>> But when I run this procedure from the psql command line, it takes 11>\n seconds....I gave a bigger date range than what I stated as (40 ms) in my> first post hence the change to 11 seconds.>> dev=> select> calc_comp('emp_id_1','emp_id_100','DAM-RTM',to_date('2012-01-01','yyyy-mm-dd'),to_date('2012-02-01','yyyy-mm-dd'),'DOnPk','ptp-obligations',1.00,> 0);> crr_valuation> -------------------------------> 0||-2.84|-3.90|-1.06|-106.00|> (1 row)>> Time: 11685.828 ms.>> The last input value 1/0 while calling the above function controls, if any> log line messages should be inserted or not, 0=insert, 1 = do_not_insert.> When I toggle the flag the overall timing did not change. Does it not some> time in ms to write 3832 rows to a table?> Why is my current_timestamp value not changing. I was expecting the> difference between the last row's timestamp value MINUS the\n first row's> tiemstamp value to equal my 11.685 seconds. What is wrong here?>>> -Venki>> ________________________________> From: Samuel Gendler <[email protected]>>> To: Venki Ramachandran <[email protected]>> Cc: \"[email protected]\" <[email protected]>> Sent: Wednesday, April 25, 2012 12:36 PM>> Subject: Re: [PERFORM] Parallel Scaling of a pgplsql problem>>>> On Wed, Apr 25, 2012 at\n 11:52 AM, Venki Ramachandran> <[email protected]> wrote:>> Hi all:> Can someone please guide me as to how to solve this problem? If this is the> wrong forum, please let me know which one to post this one in. I am new to> Postgres (about 3 months into it)>> I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program> that does some computation. It takes in a date range and for one pair of> personnel (two employees in a company) it calculates some values over the> time period. It takes about 40ms (milli seconds) to complete and give me the> answer. All good so far.>> Now I have to run the same pgplsql on all possible combinations of employees> and with 542 employees that is about say 300,000 unique pairs.>> So\n (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and> show it on a screen. No user wants to wait for 3 hours, they can probably> wait for 10 minutes (even that is too much for a UI application). How do I> solve this scaling problem? Can I have multiple parellel sessions and each> session have multiple/processes that do a pair each at 40 ms and then> collate the results. Does PostGres or pgplsql have any parallel computing> capability.>>> The question is, how much of that 40ms is spent performing the calculation,> how much is spent querying, and how much is function call overhead, and how> much is round trip between the client and server with the query and results?> Depending upon the breakdown, it is entirely possible that the actual> per-record multiplier can be kept down to a couple of milliseconds if you> restructure things\n to query data in bulk and only call a single function to> do the work. 
If you get it down to 4ms, that's a 20 minute query. Get it> down to 1ms and you're looking at only 5 minutes for what would appear to be> a fairly compute-intensive report over a relatively large dataset.>>>>",
"msg_date": "Wed, 25 Apr 2012 14:45:38 -0700 (PDT)",
"msg_from": "Venki Ramachandran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},

{
"msg_contents": "Venki Ramachandran <[email protected]> writes:\n> Replacing current_timestamp() with�transaction_timestamp() and�statement_timestamp() did not help!!!.�\n\nYou did not read the documentation you were pointed to. Use\nclock_timestamp().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Apr 2012 17:52:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem "
},
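A self-contained sketch of the instrumentation pattern, runnable as-is on 9.0; inside the real function the pg_sleep() calls would simply be the queries being timed, and the elapsed intervals could be written to the existing debug_log table instead of raised as notices:

DO $$
DECLARE
    t0 timestamptz := clock_timestamp();
BEGIN
    PERFORM pg_sleep(0.5);          -- stand-in for the first query
    RAISE NOTICE 'step 1 took %', clock_timestamp() - t0;
    t0 := clock_timestamp();
    PERFORM pg_sleep(0.2);          -- stand-in for the second query
    RAISE NOTICE 'step 2 took %', clock_timestamp() - t0;
END
$$;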
{
"msg_contents": "On Wed, Apr 25, 2012 at 1:52 PM, Venki Ramachandran\n<[email protected]> wrote:\n> Hi all:\n> Can someone please guide me as to how to solve this problem? If this is the\n> wrong forum, please let me know which one to post this one in. I am new to\n> Postgres (about 3 months into it)\n>\n> I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program\n> that does some computation. It takes in a date range and for one pair of\n> personnel (two employees in a company) it calculates some values over the\n> time period. It takes about 40ms (milli seconds) to complete and give me the\n> answer. All good so far.\n>\n> Now I have to run the same pgplsql on all possible combinations of employees\n> and with 542 employees that is about say 300,000 unique pairs.\n>\n> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and\n> show it on a screen. No user wants to wait for 3 hours, they can probably\n> wait for 10 minutes (even that is too much for a UI application). How do I\n> solve this scaling problem? Can I have multiple parellel sessions and each\n> session have multiple/processes that do a pair each at 40 ms and then\n> collate the results. Does PostGres or pgplsql have any parallel computing\n> capability.\n\nwhat's the final output of the computation -- are you inserting to a\ntable? if so, it should be trivially threaded any number of ways. it's\npretty easy to do it via bash script for example (i can give you some\ncode to do that if you want).\n\nmerlin\n",
"msg_date": "Wed, 25 Apr 2012 17:05:43 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
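One way to make the work trivially threadable, whatever launches the workers (a bash loop over psql as suggested here, or anything else), is to give each worker its own slice of the pair list and a shared results table. A hedged sketch; employees, pair_results and the abbreviated calc_comp() argument list are assumptions, not the poster's actual objects:

-- Worker N of 8 runs this in its own session with its own bucket number
-- substituted for 0, so all eight cores stay busy; the results are then
-- collated from pair_results.
INSERT INTO pair_results (emp_1, emp_2, profit)
SELECT e1.id, e2.id,
       calc_comp(e1.id, e2.id /* , date range and remaining arguments */)
FROM   employees e1
JOIN   employees e2 ON e2.id > e1.id
WHERE  (e1.id + e2.id) % 8 = 0;    -- replace 0 with this worker's bucket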
{
"msg_contents": "Thanks Tom, clock_timestamp() worked. Appreciate it!!! and Sorry was hurrying to get this done at work and hence did not read through.\n\nCan you comment on how you would solve the original problem? Even if I can get the 11 seconds down to 500 ms for one pair, running it for 300k pairs will take multiple hours. How can one write a combination of a bash script/pgplsql code so as to use all 8 cores of a server. I am seeing that this is just executing in one session/process.\n\nthanks and regards, Venki\n\n\n________________________________\n From: Tom Lane <[email protected]>\nTo: Venki Ramachandran <[email protected]> \nCc: Pavel Stehule <[email protected]>; Samuel Gendler <[email protected]>; \"[email protected]\" <[email protected]> \nSent: Wednesday, April 25, 2012 2:52 PM\nSubject: Re: [PERFORM] Parallel Scaling of a pgplsql problem \n \nVenki Ramachandran <[email protected]> writes:\n> Replacing current_timestamp() with transaction_timestamp() and statement_timestamp() did not help!!!. \n\nYou did not read the documentation you were pointed to. Use\nclock_timestamp().\n\n regards, tom lane\nThanks Tom, clock_timestamp() worked. Appreciate it!!! and Sorry was hurrying to get this done at work and hence did not read through.Can you comment on how you would solve the original problem? Even if I can get the 11 seconds down to 500 ms for one pair, running it for 300k pairs will take multiple hours. How can one write a combination of a bash script/pgplsql code so as to use all 8 cores of a server. I am seeing that this is just executing in one session/process.thanks and regards, Venki From: Tom Lane <[email protected]> To: Venki Ramachandran <[email protected]> Cc: Pavel Stehule <[email protected]>; Samuel Gendler <[email protected]>; \"[email protected]\" <[email protected]> Sent: Wednesday, April 25, 2012 2:52 PM Subject: Re: [PERFORM] Parallel Scaling of a pgplsql problem \nVenki Ramachandran <[email protected]> writes:> Replacing current_timestamp() with transaction_timestamp() and statement_timestamp() did not help!!!. You did not read the documentation you were pointed to. Useclock_timestamp(). regards, tom lane",
"msg_date": "Wed, 25 Apr 2012 19:40:18 -0700 (PDT)",
"msg_from": "Venki Ramachandran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem "
},
{
"msg_contents": "On Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran <\[email protected]> wrote:\n\n> Hi all:\n> Can someone please guide me as to how to solve this problem? If this is\n> the wrong forum, please let me know which one to post this one in. I am new\n> to Postgres (about 3 months into it)\n>\n> I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql\n> program that does some computation. It takes in a date range and for one\n> pair of personnel (two employees in a company) it calculates some values\n> over the time period. It takes about 40ms (milli seconds) to complete and\n> give me the answer. All good so far.\n>\n> Now I have to run the same pgplsql on all possible combinations of\n> employees and with 542 employees that is about say 300,000 unique pairs.\n>\n> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and\n> show it on a screen. No user wants to wait for 3 hours, they can probably\n> wait for 10 minutes (even that is too much for a UI application). How do I\n> solve this scaling problem? Can I have multiple parellel sessions and each\n> session have multiple/processes that do a pair each at 40 ms and then\n> collate the results. Does PostGres or pgplsql have any parallel computing\n> capability.\n>\n\nSetting aside the database concurrency question, have you considered\napplication-level solutions?\n\nHow often does a user expect their rank to change? If a daily rank change\nis fine, trigger the (lengthy) ranking calculation nightly and cache the\nresults in a materialized view for all users; you could continuously\nrebuild the view to improve freshness to within 4 hours. To go faster with\nan application-level solution, you will have to reduce your calculation to\n*what's* likely to be most important to the individual which, again, you\ncan cache; or, if you can predict *who's* most likely to request a ranking,\ncalculate these first; or, both.\n\nThese are likely good things to consider regardless of any improvements you\nmake to the back-end ranking calculation, though at you will hit a point of\ndiminishing returns if your ranking calculation drops below some\n\"tolerable\" wait. In the web world \"tolerable\" is about 3 seconds for the\ngeneral public and about 30 seconds for a captured audience, e.g.,\nemployees. YMMV.\n\n\nCheers,\n\nJan\n\nOn Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran <[email protected]> wrote:\nHi all:\nCan someone please guide me as to how to solve this problem? If this is the wrong forum, please let me know which one to post this one in. I am new to Postgres (about 3 months into it)I have PostGres 9.0 database in a AWS server (x-large) and a pgplsql program that does some computation. It takes in a date range and for one pair of personnel (two employees in a company) it calculates some values over the time period. It takes about 40ms (milli seconds) to complete and give me the answer. All good so far.\nNow I have to run the same pgplsql on all possible combinations of employees and with 542 employees that is about say 300,000 unique pairs.So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and show it on a screen. No\n user wants to wait for 3 hours, they can probably wait for 10 minutes (even that is too much for a UI application). How do I solve this scaling problem? Can I have multiple parellel sessions and each session have multiple/processes that do a pair each at 40 ms and then collate the results. 
Does PostGres or pgplsql have any parallel computing capability.\nSetting aside the database concurrency question, have you considered application-level solutions?\nHow often does a user expect their rank to change? If a daily rank change is fine, trigger the (lengthy) ranking calculation nightly and cache the results in a materialized view for all users; you could continuously rebuild the view to improve freshness to within 4 hours. To go faster with an application-level solution, you will have to reduce your calculation to *what's* likely to be most important to the individual which, again, you can cache; or, if you can predict *who's* most likely to request a ranking, calculate these first; or, both.\nThese are likely good things to consider regardless of any improvements you make to the back-end ranking calculation, though at you will hit a point of diminishing returns if your ranking calculation drops below some \"tolerable\" wait. In the web world \"tolerable\" is about 3 seconds for the general public and about 30 seconds for a captured audience, e.g., employees. YMMV.\nCheers,Jan",
"msg_date": "Wed, 25 Apr 2012 21:41:13 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
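Since PostgreSQL 9.0 has no built-in materialized views, the nightly cache described above can be approximated with an ordinary table rebuilt by a scheduled job; the table and column names below are placeholders:

-- Rebuild inside one transaction so the table is never visible half-built;
-- readers arriving while the rebuild runs wait on the TRUNCATE lock until
-- the transaction commits.
BEGIN;
TRUNCATE pair_rankings;
INSERT INTO pair_rankings (emp_1, emp_2, profit, rank)
SELECT emp_1, emp_2, profit,
       rank() OVER (ORDER BY profit DESC)
FROM   pair_results;
COMMIT;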
{
"msg_contents": "On 2012-04-26 04:40, Venki Ramachandran wrote:\n> Thanks Tom, clock_timestamp() worked. Appreciate it!!! and Sorry was \n> hurrying to get this done at work and hence did not read through.\n>\n> Can you comment on how you would solve the original problem? Even if I \n> can get the 11 seconds down to 500 ms for one pair, running it for \n> 300k pairs will take multiple hours. How can one write a combination \n> of a bash script/pgplsql code so as to use all 8 cores of a server. I \n> am seeing that this is just executing in one session/process.\n\nYou want to compare a calculation on the cross product 'employee x \nemployee'. If employee is partitioned into emp1, emp2, ... emp8, the \ncross product is equal to the union of emp1 x employee, emp2 x employee, \n.. emp8 x employee. Each of these 8 cross products on partitions can be \nexecuted in parallel. I'd look into dblink to execute each of the 8 \ncross products in parallel, and then union all of those results.\n\nhttp://www.postgresql.org/docs/9.1/static/contrib-dblink-connect.html\n\nregards,\nYeb\n\n\n\n\n\n\n\n On 2012-04-26 04:40, Venki Ramachandran wrote:\n \n\nThanks Tom, clock_timestamp() worked. Appreciate\n it!!! and Sorry was hurrying to get this done at work and\n hence did not read through.\n\n\nCan you comment on how you would solve the original\n problem? Even if I can get the 11 seconds down to 500 ms\n for one pair, running it for 300k pairs will take multiple\n hours. How can one write a combination of a bash\n script/pgplsql code so as to use all 8 cores of a server. I\n am seeing that this is just executing in one\n session/process.\n\n\n\n You want to compare a calculation on the cross product 'employee x\n employee'. If employee is partitioned into emp1, emp2, ... emp8, the\n cross product is equal to the union of emp1 x employee, emp2 x\n employee, .. emp8 x employee. Each of these 8 cross products on\n partitions can be executed in parallel. I'd look into dblink to\n execute each of the 8 cross products in parallel, and then union all\n of those results.\n\nhttp://www.postgresql.org/docs/9.1/static/contrib-dblink-connect.html\n\n regards,\n Yeb",
"msg_date": "Thu, 26 Apr 2012 08:49:12 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
},
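A hedged sketch of that approach using dblink's asynchronous calls, so the per-partition queries genuinely run at the same time. The connection string and compute_pairs(bucket), a function assumed to process one partition and return an integer, are placeholders; the pattern extends to 8 connections in the obvious way:

SELECT dblink_connect('w1', 'dbname=dev');
SELECT dblink_connect('w2', 'dbname=dev');

-- Fire both partitions off without waiting for either to finish.
SELECT dblink_send_query('w1', 'SELECT compute_pairs(0)');
SELECT dblink_send_query('w2', 'SELECT compute_pairs(1)');

-- dblink_get_result() blocks until the corresponding query has finished.
SELECT * FROM dblink_get_result('w1') AS t(rows_done int);
SELECT * FROM dblink_get_result('w2') AS t(rows_done int);

SELECT dblink_disconnect('w1');
SELECT dblink_disconnect('w2');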
{
"msg_contents": "On Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran <\[email protected]> wrote:\n\n>\n> Now I have to run the same pgplsql on all possible combinations of\n> employees and with 542 employees that is about say 300,000 unique pairs.\n>\n> So (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and\n> show it on a screen. No user wants to wait for 3 hours, they can probably\n> wait for 10 minutes (even that is too much for a UI application). How do I\n> solve this scaling problem? Can I have multiple parellel sessions and each\n> session have multiple/processes that do a pair each at 40 ms and then\n> collate the results. Does PostGres or pgplsql have any parallel computing\n> capability.\n>\n\nInteresting problem.\n\nHow frequently does the data change? Hourly, daily, monthly?\nHow granular are the time frames in the typical query? Seconds, minutes,\nhours, days, weeks?\n\nI'm thinking if you can prepare the data ahead of time as it changes via a\ntrigger or client-side code then your problem will go away pretty quickly.\n\n-Greg\n\nOn Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran <[email protected]> wrote:\nNow I have to run the same pgplsql on all possible combinations of employees and with 542 employees that is about say 300,000 unique pairs.\nSo (300000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to rank them and show it on a screen. No\n user wants to wait for 3 hours, they can probably wait for 10 minutes (even that is too much for a UI application). How do I solve this scaling problem? Can I have multiple parellel sessions and each session have multiple/processes that do a pair each at 40 ms and then collate the results. Does PostGres or pgplsql have any parallel computing capability.\nInteresting problem. How frequently does the data change? Hourly, daily, monthly?How granular are the time frames in the typical query? Seconds, minutes, hours, days, weeks?\nI'm thinking if you can prepare the data ahead of time as it changes via a trigger or client-side code then your problem will go away pretty quickly.-Greg",
"msg_date": "Thu, 26 Apr 2012 10:13:36 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Scaling of a pgplsql problem"
}
] |
[
{
"msg_contents": "Here's a strange thing.\n\nPostgres 9.1.0 on a severely underpowered test machine\n\neffective_cache_size = 128M\nwork_mem = 48M\n\n\nThis query:\n\nWITH\nRECURSIVE subordinates AS (\n\tSELECT id, originator_id FROM partner_deliveries WHERE originator_id\nin (225645)\n\tUNION ALL\n\tSELECT partner_deliveries.id, subordinates.originator_id\n\t\tFROM partner_deliveries, subordinates\n\t\tWHERE partner_deliveries.originator_id = subordinates.id\n),\ndistinct_subordinates AS ( SELECT id, originator_id FROM (\n\tSELECT DISTINCT id, originator_id FROM subordinates\n\tUNION DISTINCT\n\tSELECT id, id FROM partner_deliveries WHERE id in (225645)\n) itab ORDER BY id )\nSELECT\n\ts.originator_id,\n\tsum(o.opens) as opens,\n\tsum(o.clicks) as clicks,\n\tsum(o.questionnaire) as questionnaire,\n\tsum(o.completes) as completes,\n\tsum(o.quotafulls) as quotafulls,\n\tsum(o.screenouts) as screenouts\nFROM overview o\nJOIN distinct_subordinates s ON s.id = o.partner_delivery_id\nGROUP BY s.originator_id;\n\nWorks perfectly: http://explain.depesz.com/s/j9Q\n\nThe plan produces an index scan on overview (roughly 1.5M tuples),\nwhich is desired.\n\nNow, I tried to skip one hashagg to \"speed it up a bit\", and found\nsomething really unexpected:\n\nhttp://explain.depesz.com/s/X1c\n\nfor\n\nWITH\nRECURSIVE subordinates AS (\n\tSELECT id, originator_id FROM partner_deliveries WHERE originator_id\nin (225645)\n\tUNION ALL\n\tSELECT partner_deliveries.id, subordinates.originator_id\n\t\tFROM partner_deliveries, subordinates\n\t\tWHERE partner_deliveries.originator_id = subordinates.id\n),\ndistinct_subordinates AS ( SELECT id, originator_id FROM (\n\tSELECT id, originator_id FROM subordinates\n\tUNION DISTINCT\n\tSELECT id, id FROM partner_deliveries WHERE id in (225645)\n) itab ORDER BY id )\nSELECT\n\ts.originator_id,\n\tsum(o.opens) as opens,\n\tsum(o.clicks) as clicks,\n\tsum(o.questionnaire) as questionnaire,\n\tsum(o.completes) as completes,\n\tsum(o.quotafulls) as quotafulls,\n\tsum(o.screenouts) as screenouts\nFROM overview o\nJOIN distinct_subordinates s ON s.id = o.partner_delivery_id\nGROUP BY s.originator_id;\n\nIf you don't notice, the only difference is I removed the distinct\nfrom the select against the recursive CTE for distinct_subordinates,\nexpecting the union distinct to take care. It did. But it took a whole\n2 seconds longer! (WTF)\n\nFun thing is, nothing in the CTE's execution really changed. The only\nchange, is that now a sequential scan of overview was chosen instead\nof the index.\nWhy could this be? The output (number of search values, even the\nvalues themselves and their order) is the same between both plans.\n",
"msg_date": "Thu, 26 Apr 2012 14:37:54 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird plan variation with recursive CTEs"
},
{
"msg_contents": "On Thu, Apr 26, 2012 at 2:37 PM, Claudio Freire <[email protected]> wrote:\n> Fun thing is, nothing in the CTE's execution really changed. The only\n> change, is that now a sequential scan of overview was chosen instead\n> of the index.\n> Why could this be? The output (number of search values, even the\n> values themselves and their order) is the same between both plans.\n\nI just noticed it's misestimating the output of the union distinct,\nbut not of the select distinct.\n\nOne would expect the estimation procedure to be the same in both cases.\nAny reason for the difference? Any way to \"teach\" pg about it?\n",
"msg_date": "Thu, 26 Apr 2012 15:23:09 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird plan variation with recursive CTEs"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> On Thu, Apr 26, 2012 at 2:37 PM, Claudio Freire <[email protected]> wrote:\n>> Fun thing is, nothing in the CTE's execution really changed. The only\n>> change, is that now a sequential scan of overview was chosen instead\n>> of the index.\n>> Why could this be? The output (number of search values, even the\n>> values themselves and their order) is the same between both plans.\n\nThe estimated size of the UNION output is a lot different, thus\ndiscouraging use of a nestloop for the outer query's join.\n\n> I just noticed it's misestimating the output of the union distinct,\n> but not of the select distinct.\n\n> One would expect the estimation procedure to be the same in both cases.\n\nNo, I don't think that follows. The UNION code is aware that people\nfrequently write UNION rather than UNION ALL even when they're not\nexpecting any duplicates, so it shies away from assuming that UNION\nwill reduce the number of rows at all. On the other hand, it's not\nat all common to write SELECT DISTINCT unless you actually expect\nsome duplicate removal to happen, so the default assumption in the\nabsence of any stats is different --- looks like it's assuming 10X\ncompression by the DISTINCT operation.\n\nThe real issue here of course is that we have no useful idea how many\nrows will be produced by the recursive-union CTE, let alone what their\nstatistics will be like. So the planner is falling back on rules of\nthumb that are pretty much guaranteed to be wrong in any particular\ncase.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2012 16:58:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird plan variation with recursive CTEs "
}
] |
[
{
"msg_contents": "An update to our system means I'm going to be rewriting every row of\nsome large tables (20 million rows by 15 columns). In a situation\nlike this, can auto-vacuum take care of it, or should I plan on\nvacuum-full/reindex to clean up?\n\nThis is 8.4.4.\n\nThanks,\nCraig\n",
"msg_date": "Thu, 26 Apr 2012 12:49:31 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "auto-vacuum vs. full table update"
},
{
"msg_contents": "\nOn 04/26/2012 12:49 PM, Craig James wrote:\n>\n> An update to our system means I'm going to be rewriting every row of\n> some large tables (20 million rows by 15 columns). In a situation\n> like this, can auto-vacuum take care of it, or should I plan on\n> vacuum-full/reindex to clean up?\n>\n\nIf you rewrite the whole table, you will end up with a table twice the \nsize, it will not be compacted but as the table grows, the old space \nwill be reused.\n\njD\n\n> This is 8.4.4.\n>\n> Thanks,\n> Craig\n>\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/\nPostgreSQL Support, Training, Professional Services and Development\nThe PostgreSQL Conference - http://www.postgresqlconference.org/\n@cmdpromptinc - @postgresconf - 509-416-6579\n",
"msg_date": "Thu, 26 Apr 2012 12:53:32 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto-vacuum vs. full table update"
},
{
"msg_contents": "On 04/26/2012 12:49 PM, Craig James wrote:\n> An update to our system means I'm going to be rewriting every row of\n> some large tables (20 million rows by 15 columns). In a situation\n> like this, can auto-vacuum take care of it, or should I plan on\n> vacuum-full/reindex to clean up?\n>\nIf you want to reclaim the space, a vacuum-full/reindex will do it. But \nyou are probably better off using cluster. Way faster and you get new \nindexes as a by-product. Both methods require an exclusive lock on the \ntable. If you can't afford the downtime, check out pg_reorg \n(http://pgfoundry.org/projects/reorg/)\n\nCheers,\nSteve\n\n",
"msg_date": "Thu, 26 Apr 2012 13:00:19 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto-vacuum vs. full table update"
},
{
"msg_contents": "Craig James <[email protected]> wrote:\n> An update to our system means I'm going to be rewriting every row\n> of some large tables (20 million rows by 15 columns). In a\n> situation like this, can auto-vacuum take care of it, or should I\n> plan on vacuum-full/reindex to clean up?\n> \n> This is 8.4.4.\n \nIf there is any way for you to update in \"chunks\", with a vacuum\nafter each chunk, that will prevent the extreme bloat.\n\n-Kevin\n",
"msg_date": "Thu, 26 Apr 2012 15:11:07 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto-vacuum vs. full table update"
}
] |
[
{
"msg_contents": "I can write a query to solve my requirement in any of the followings :-\n\n1.\nselect *\nfrom a\nwhere NOT EXISTS\n(\nselect 1\nfrom b\nwhere a.id = b.id)\nunion all\nselect *\nfrom b\n\n\n2.\nselect\n(\ncase when b.id is not null then\n b.id\n else\n a.id\n) as id\nfrom a\nleft join b\n on a.id = b.id\n\nAny one please tell me which one is better?\n\nI can write a query to solve my requirement in any of the followings :-1.select *from awhere NOT EXISTS(select 1from bwhere a.id = b.id)\nunion allselect *from b2.select (case when b.id is not null then b.id\n else a.id) as idfrom aleft join b on a.id = b.idAny one please tell me which one is better?",
"msg_date": "Sun, 29 Apr 2012 15:27:19 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "NOT EXISTS or LEFT JOIN which one is better?"
},
{
"msg_contents": "Al I have looked at this before and I am not sure the effort is worth all\nthe thought about it. Let your explain tell you which is better. I read\nthis link a year ago.\nhttp://stackoverflow.com/questions/227037/can-i-get-better-performance-using-a-join-or-using-exists\n\nOn Sun, Apr 29, 2012 at 5:27 AM, AI Rumman <[email protected]> wrote:\n\n> I can write a query to solve my requirement in any of the followings :-\n>\n> 1.\n> select *\n> from a\n> where NOT EXISTS\n> (\n> select 1\n> from b\n> where a.id = b.id)\n> union all\n> select *\n> from b\n>\n>\n> 2.\n> select\n> (\n> case when b.id is not null then\n> b.id\n> else\n> a.id\n> ) as id\n> from a\n> left join b\n> on a.id = b.id\n>\n> Any one please tell me which one is better?\n>\n\nAl I have looked at this before and I am not sure the effort is worth all the thought about it. Let your explain tell you which is better. I read this link a year ago.http://stackoverflow.com/questions/227037/can-i-get-better-performance-using-a-join-or-using-exists\nOn Sun, Apr 29, 2012 at 5:27 AM, AI Rumman <[email protected]> wrote:\nI can write a query to solve my requirement in any of the followings :-1.select *from awhere NOT EXISTS(select 1from bwhere a.id = b.id)\nunion allselect *from b2.select (case when b.id is not null then b.id\n else a.id) as idfrom aleft join b on a.id = b.id\nAny one please tell me which one is better?",
"msg_date": "Sun, 29 Apr 2012 11:10:22 -0400",
"msg_from": "Rich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT EXISTS or LEFT JOIN which one is better?"
}
] |
[
{
"msg_contents": "hi, all.\n\nwell, i wondered why there is high rate of bo (blocks out). the procedure\nis practically read-only during the whole test. although it's not strictly\nread-only, because in a certain condition, there might be writing to a\ncertain table. but that condition can not be met during this test.\n\nso, i created a ramdisk:\nmkfs -q /dev/ram2 100000\nmkdir -p /ram4\nmount /dev/ram2 /ram4\ndf -H | grep /ram4\n\nand then:\nCREATE TABLESPACE pgram4 OWNER postgres LOCATION '/ram4';\n\nand in postgresql.conf i configured:\ntemp_tablespaces = 'pgram4'\n\nnow, i believe, all the temp-table were in RAM.\nvmstat showed:\nr b swpd free buff cache si so bi bo in cs us sy id wa st\n6 0 0 5916720 69488 1202668 0 0 0 3386 1902 1765 25 3 72 1 0\n9 1 0 5907728 69532 1204212 0 0 0 1392 5375 4510 88 8 3 1 0\n7 0 0 5886584 69672 1205096 0 0 0 1472 5278 4520 88 10 2 0 0\n8 0 0 5877384 69688 1206384 0 0 0 1364 5312 4522 89 8 2 1 0\n8 0 0 5869332 69748 1207188 0 0 0 1296 5285 4437 88 8 3 1 0\n6 1 0 5854404 69852 1208776 0 0 0 2955 5333 4518 88 9 2 0 0\n\n10 times less bo (blocks out)\n5 times less wa (percentage of time spent by cpu waiting to IO)\n2 times less b (wait Queue – Process which are waiting for I/O)\n\nthe overall test result was (just?) ~15% better...\n\nwhen i created the ramdisk with mkfs.ext4 (instead of the default ext2),\nthe performance was the same (~15% better), but vmstat output looked much\nthe same as before (without the ramdisk) !?? why is that?\n\nas i mentioned, the procedure is practically read-only. shouldn't i expect\nbo (blocks out) to be ~0? after forcing temp-tables to be in the RAM, what\nother reasons may be the cause for bo (blocks out)?\n\ni see no point pasting the whole procedure here, since it's very long. the\ngeneral course of the procedure is:\ncreate temp-tables if they are not exist (practically, they do exist)\ndo a lot of: insert into temp-table select from table\nand : insert into temp-table select from table join temp-table....\nafter finished insert into temp-table: analyze temp-table (this was the\nonly way the optimizer behaved properly)\nfinally, open refcursors of select from temp-tables\n\nThanks again.\n\nhi, all.well, i wondered why there is high rate of bo (blocks out). the procedure is practically read-only during the whole test. although it's not strictly read-only, because in a certain condition, there might be writing to a certain table. but that condition can not be met during this test.\nso, i created a ramdisk:mkfs -q /dev/ram2 100000mkdir -p /ram4mount /dev/ram2 /ram4df -H | grep /ram4 and then:CREATE TABLESPACE pgram4 OWNER postgres LOCATION '/ram4';\nand in postgresql.conf i configured:temp_tablespaces = 'pgram4'now, i believe, all the temp-table were in RAM.vmstat showed:r b swpd free buff cache si so bi bo in cs us sy id wa st\n6 0 0 5916720 69488 1202668 0 0 0 3386 1902 1765 25 3 72 1 0\n9 1 0 5907728 69532 1204212 0 0 0 1392 5375 4510 88 8 3 1 0\n7 0 0 5886584 69672 1205096 0 0 0 1472 5278 4520 88 10 2 0 0\n8 0 0 5877384 69688 1206384 0 0 0 1364 5312 4522 89 8 2 1 0\n8 0 0 5869332 69748 1207188 0 0 0 1296 5285 4437 88 8 3 1 0\n6 1 0 5854404 69852 1208776 0 0 0 2955 5333 4518 88 9 2 0 0\n10 times less bo (blocks out)5 times less wa (percentage of time spent by cpu waiting to IO)2 times less b (wait Queue – Process which are waiting for I/O)\n\n\n\nthe overall test result was (just?) 
~15% better...when i created the ramdisk with mkfs.ext4 (instead of the default ext2), the performance was the same (~15% better), but vmstat output looked much the same as before (without the ramdisk) !?? why is that?\nas i mentioned, the procedure is practically read-only. shouldn't i expect bo (blocks out) to be ~0? after forcing temp-tables to be in the RAM, what other reasons may be the cause for bo (blocks out)?\ni see no point pasting the whole procedure here, since it's very long. the general course of the procedure is:create temp-tables if they are not exist (practically, they do exist)\ndo a lot of: insert into temp-table select from tableand : insert into temp-table select from table join temp-table....after finished insert into temp-table: analyze temp-table (this was the only way the optimizer behaved properly)\nfinally, open refcursors of select from temp-tablesThanks again.",
"msg_date": "Sun, 29 Apr 2012 16:21:23 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Sun, Apr 29, 2012 at 8:21 AM, Eyal Wilde <[email protected]> wrote:\n> hi, all.\n>\n> well, i wondered why there is high rate of bo (blocks out). the procedure is\n> practically read-only during the whole test. although it's not strictly\n> read-only, because in a certain condition, there might be writing to a\n> certain table. but that condition can not be met during this test.\n>\n> so, i created a ramdisk:\n> mkfs -q /dev/ram2 100000\n> mkdir -p /ram4\n> mount /dev/ram2 /ram4\n> df -H | grep /ram4\n>\n> and then:\n> CREATE TABLESPACE pgram4 OWNER postgres LOCATION '/ram4';\n>\n> and in postgresql.conf i configured:\n> temp_tablespaces = 'pgram4'\n>\n> now, i believe, all the temp-table were in RAM.\n> vmstat showed:\n> r b swpd free buff cache si so bi bo in cs us sy id wa st\n> 6 0 0 5916720 69488 1202668 0 0 0 3386 1902 1765 25 3 72 1 0\n> 9 1 0 5907728 69532 1204212 0 0 0 1392 5375 4510 88 8 3 1 0\n> 7 0 0 5886584 69672 1205096 0 0 0 1472 5278 4520 88 10 2 0 0\n> 8 0 0 5877384 69688 1206384 0 0 0 1364 5312 4522 89 8 2 1 0\n> 8 0 0 5869332 69748 1207188 0 0 0 1296 5285 4437 88 8 3 1 0\n> 6 1 0 5854404 69852 1208776 0 0 0 2955 5333 4518 88 9 2 0 0\n>\n> 10 times less bo (blocks out)\n> 5 times less wa (percentage of time spent by cpu waiting to IO)\n> 2 times less b (wait Queue – Process which are waiting for I/O)\n>\n> the overall test result was (just?) ~15% better...\n>\n> when i created the ramdisk with mkfs.ext4 (instead of the default ext2), the\n> performance was the same (~15% better), but vmstat output looked much the\n> same as before (without the ramdisk) !?? why is that?\n>\n> as i mentioned, the procedure is practically read-only. shouldn't i expect\n> bo (blocks out) to be ~0? after forcing temp-tables to be in the RAM, what\n> other reasons may be the cause for bo (blocks out)?\n>\n> i see no point pasting the whole procedure here, since it's very long. the\n> general course of the procedure is:\n> create temp-tables if they are not exist (practically, they do exist)\n> do a lot of: insert into temp-table select from table\n> and : insert into temp-table select from table join temp-table....\n> after finished insert into temp-table: analyze temp-table (this was the only\n> way the optimizer behaved properly)\n> finally, open refcursors of select from temp-tables\n\ni/o writes from read queries can be caused by a couple of things:\n*) sorts, and other 'spill to disk' features of large queries\n*) hint bits (what I think is happening in your case):\n\nthe first time a tuple is touched after it's controlling transaction\nis committed, the transaction's state (committed or aborted) is saved\non the tuple itself to optimize subsequent accesses. for most\nworkloads this is barely noticeable but it can show up if you're\nmoving a lot of records around per transaction.\n\nmerlin\n",
"msg_date": "Thu, 3 May 2012 08:40:16 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "guess what:\n\nafter reducing bo (blocks out) to ~10% by using a ramdisk (improving\noverall performance by ~15-20%), i now managed to reduced it to ~3% by ....\nremoving the \"analyze temp-table\" statements.\nit also :\nreduced b (Process which are waiting for I/O) to zero\nreduced wa (percentage of time spent by cpu for waiting to IO) to zero\nand reduced id (cpu idle time percent) to be 4 times less.\n\nr b swpd free buff cache si so bi bo in cs us sy id wa st\n8 0 0 6144048 237472 485640 0 0 0 40 4380 3237 79 5 16 0 0\n8 0 0 6138216 238032 485736 0 0 0 40 4741 3506 93 7 0 0 0\n8 0 0 6125256 238276 486484 0 0 0 2709 4801 3447 92 7 1 0 0\n7 0 0 6119400 238376 485792 0 0 0 32 4854 4311 93 6 1 0 0\n5 0 0 6105624 238476 486172 0 0 0 364 4783 3430 92 7 1 0 0\n5 0 0 6092956 238536 485384 0 0 0 416 4954 3652 91 8 2 0 0\n\n\nunfortunately, this time there was no significant performance gain. ):\n\ni afraid now there are certain statements that do not use an optimal\nquery-plan. these statements looks like:\ninsert into temp-table1 (value) select table1.f1 from table1 join\ntemp-table2 on table1.recid=temp-table2.recid where table1.f2 in (x,y,z);\ntemp-table2 never contains more then 10 records.\nthere is an index on table1: recid,f2\nprevious tests showed that the query-optimizer normally chose to do\nhash-join (i.e: ignoring the index), but once we did \"analyze\ntemp-table2;\", the index was used. i read somewhere that the optimizer's\nassumption is that every temp-table contains 1k of records. i believe that\nis the reason for the bad plan. we tried to do \"set\nenable_hashjoin=false;\", but it did not seem to be working inside a\nfunction (although it did work independently). what can we do about that?\n\nanother thing i found is that a sequence on a temp-table is being stored on\nthe current tablespace, and not on the temp_tablespace. would you consider\nthis as a bug?\nanyway, i found a way to not using any sequences on the temp-tables. but\nthis did not change the bo (blocks-out) figures.\n\nmerlin,\nabout the Hint Bits. i read this article:\nhttp://wiki.postgresql.org/wiki/Hint_Bits\nas far as i understand, this is not the case here, because i ran the test\nmany times, and there were no DML operations at all in between. so i\nbelieve that the hint-bits are already cleared in most of the tuples.\n\nThanks again for any more help.\n\nguess what:after reducing bo (blocks out) to ~10% by using a ramdisk (improving overall performance by ~15-20%), i now managed to reduced it to ~3% by .... removing the \"analyze temp-table\" statements. \nit also :reduced b (Process which are waiting for I/O) to zeroreduced wa (percentage of time spent by cpu for waiting to IO) to zeroand reduced id (cpu idle time percent) to be 4 times less.\nr b swpd free buff cache si so bi bo in cs us sy id wa st\n8 0 0 6144048 237472 485640 0 0 0 40 4380 3237 79 5 16 0 0\n8 0 0 6138216 238032 485736 0 0 0 40 4741 3506 93 7 0 0 0\n8 0 0 6125256 238276 486484 0 0 0 2709 4801 3447 92 7 1 0 0\n7 0 0 6119400 238376 485792 0 0 0 32 4854 4311 93 6 1 0 0\n5 0 0 6105624 238476 486172 0 0 0 364 4783 3430 92 7 1 0 0\n5 0 0 6092956 238536 485384 0 0 0 416 4954 3652 91 8 2 0 0\n\nunfortunately, this time there was no significant performance gain. ):\ni afraid now there are certain statements that do not use an optimal query-plan. 
these statements looks like:insert into temp-table1 (value) select table1.f1 from table1 join temp-table2 on table1.recid=temp-table2.recid where table1.f2 in (x,y,z);\ntemp-table2 never contains more then 10 records.there is an index on table1: recid,f2previous tests showed that the query-optimizer normally chose to do hash-join (i.e: ignoring the index), but once we did \"analyze temp-table2;\", the index was used. i read somewhere that the optimizer's assumption is that every temp-table contains 1k of records. i believe that is the reason for the bad plan. we tried to do \"set enable_hashjoin=false;\", but it did not seem to be working inside a function (although it did work independently). what can we do about that?\nanother thing i found is that a sequence on a temp-table is being stored on the current tablespace, and not on the temp_tablespace. would you consider this as a bug?anyway, i found a way to not using any sequences on the temp-tables. but this did not change the bo (blocks-out) figures.\nmerlin, about the Hint Bits. i read this article: http://wiki.postgresql.org/wiki/Hint_Bitsas far as i understand, this is not the case here, because i ran the test many times, and there were no DML operations at all in between. so i believe that the hint-bits are already cleared in most of the tuples.\nThanks again for any more help.",
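On the enable_hashjoin question above, a hypothetical sketch of how to scope a planner setting to a function (the function name and signature are made up): a setting attached with ALTER FUNCTION takes effect for the duration of each call and is reverted when the function returns (available since 8.3).

ALTER FUNCTION run_report(integer) SET enable_hashjoin = off;

-- or, from inside a PL/pgSQL body, local to the surrounding transaction:
-- PERFORM set_config('enable_hashjoin', 'off', true);

That said, the analyze-after-filling-the-temp-table approach already used here is usually the better fix, since it gives the planner real row counts instead of just forbidding one join method.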
"msg_date": "Thu, 3 May 2012 20:07:55 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Thu, May 3, 2012 at 12:07 PM, Eyal Wilde <[email protected]> wrote:\n> guess what:\n>\n> after reducing bo (blocks out) to ~10% by using a ramdisk (improving overall\n> performance by ~15-20%), i now managed to reduced it to ~3% by .... removing\n> the \"analyze temp-table\" statements.\n> it also :\n> reduced b (Process which are waiting for I/O) to zero\n> reduced wa (percentage of time spent by cpu for waiting to IO) to zero\n> and reduced id (cpu idle time percent) to be 4 times less.\n>\n> r b swpd free buff cache si so bi bo in cs us sy id wa st\n> 8 0 0 6144048 237472 485640 0 0 0 40 4380 3237 79 5 16 0 0\n> 8 0 0 6138216 238032 485736 0 0 0 40 4741 3506 93 7 0 0 0\n> 8 0 0 6125256 238276 486484 0 0 0 2709 4801 3447 92 7 1 0 0\n> 7 0 0 6119400 238376 485792 0 0 0 32 4854 4311 93 6 1 0 0\n> 5 0 0 6105624 238476 486172 0 0 0 364 4783 3430 92 7 1 0 0\n> 5 0 0 6092956 238536 485384 0 0 0 416 4954 3652 91 8 2 0 0\n>\n>\n> unfortunately, this time there was no significant performance gain. ):\n>\n> i afraid now there are certain statements that do not use an optimal\n> query-plan. these statements looks like:\n> insert into temp-table1 (value) select table1.f1 from table1 join\n> temp-table2 on table1.recid=temp-table2.recid where table1.f2 in (x,y,z);\n> temp-table2 never contains more then 10 records.\n> there is an index on table1: recid,f2\n> previous tests showed that the query-optimizer normally chose to do\n> hash-join (i.e: ignoring the index), but once we did \"analyze temp-table2;\",\n> the index was used. i read somewhere that the optimizer's assumption is that\n> every temp-table contains 1k of records. i believe that is the reason for\n> the bad plan. we tried to do \"set enable_hashjoin=false;\", but it did not\n> seem to be working inside a function (although it did work independently).\n> what can we do about that?\n\nlet's see the query plan...when you turned it off, did it go faster?\nput your suspicious plans here: http://explain.depesz.com/\n\n> another thing i found is that a sequence on a temp-table is being stored on\n> the current tablespace, and not on the temp_tablespace. would you consider\n> this as a bug?\n> anyway, i found a way to not using any sequences on the temp-tables. but\n> this did not change the bo (blocks-out) figures.\n>\n> merlin,\n> about the Hint Bits. i read this\n> article: http://wiki.postgresql.org/wiki/Hint_Bits\n> as far as i understand, this is not the case here, because i ran the test\n> many times, and there were no DML operations at all in between. so i believe\n> that the hint-bits are already cleared in most of the tuples.\n\nyeah. well, your query was an insert? that would naturally result in\nblocks out.\n\nmerlin\n",
"msg_date": "Fri, 4 May 2012 08:04:09 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Fri, May 4, 2012 at 3:04 PM, Merlin Moncure <[email protected]> wrote:\n\n> let's see the query plan...when you turned it off, did it go faster?\n> put your suspicious plans here: http://explain.depesz.com/\n\nI suggest to post three plans:\n\n1. insert into temp table\n2. access to temp table before analyze\n3. access to temp table after analyze\n\nMaybe getting rid of the temp table (e.g. using a view or even an\ninline view) is even better than optimizing temp table access.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 9 May 2012 09:11:42 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Wed, May 9, 2012 at 2:11 AM, Robert Klemme\n<[email protected]> wrote:\n> On Fri, May 4, 2012 at 3:04 PM, Merlin Moncure <[email protected]> wrote:\n>\n>> let's see the query plan...when you turned it off, did it go faster?\n>> put your suspicious plans here: http://explain.depesz.com/\n>\n> I suggest to post three plans:\n>\n> 1. insert into temp table\n> 2. access to temp table before analyze\n> 3. access to temp table after analyze\n>\n> Maybe getting rid of the temp table (e.g. using a view or even an\n> inline view) is even better than optimizing temp table access.\n\nyeah -- perhaps a CTE might work as well.\n\nmerlin\n",
"msg_date": "Wed, 9 May 2012 07:53:01 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
}
] |
[
{
"msg_contents": "I'm trying to benchmark Postgres vs. several other databases on my workstation. My workstation is running 64 bit Windows 7. It has 12 gb of RAM and a W3550 @ 3 Ghz. I installed Postgres 9.1 using the windows installer. The data directory is on a 6Gb/s SATA SSD.\n\nMy application is multithreaded and uses pooled connections via JDBC. It's got around 20 threads doing asynchronous transactions against the database. It's about 70% read/30% write. Transactions are very small. There are no long-running transactions. I start with an empty database and I only run about 5,000 business transactions in my benchmark. That results in 10,000 - 15,000 commits.\n\nWhen I first installed Postgres I did no tuning at all and was able to get around 40 commits per-second which is quite slow. I wanted to establish a top-end so I turned off synchronous commit and ran the same test and got the same performance of 40 commits per second. I turned on the \"large system cache\" option on Windows 7 and got the same results. There seems to be some resource issues that's limiting me to 40 commits per second but I can't imagine what it could be or how to detect it.\n\nI'm not necessarily looking for advice on how to increase performance, but I at least need to know how to find the bottleneck.\n\n-- Les Walker\n\nCONFIDENTIAL: This e-mail, including its contents and attachments, if any, are confidential. If you are not the named recipient please notify the sender and immediately delete it. You may not disseminate, distribute, or forward this e-mail message or disclose its contents to anybody else. Copyright and any other intellectual property rights in its contents are the sole property of Cantor Fitzgerald.\n E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission. If verification is required please request a hard-copy version.\n Although we routinely screen for viruses, addressees should check this e-mail and any attachments for viruses. We make no representation or warranty as to the absence of viruses in this e-mail or any attachments. Please note that to ensure regulatory compliance and for the protection of our customers and business, we may monitor and read e-mails sent to and from our server(s). \n\nFor further important information, please see http://www.cantor.com/legal/statement\n\n\n\n\n\n\n\n\n\nI’m trying to benchmark Postgres vs. several other databases on my workstation. My workstation is running 64 bit Windows 7. It has 12 gb of RAM and a W3550 @ 3 Ghz. I installed Postgres 9.1 using the windows installer. The data directory\n is on a 6Gb/s SATA SSD.\n \nMy application is multithreaded and uses pooled connections via JDBC. It’s got around 20 threads doing asynchronous transactions against the database. It’s about 70% read/30% write. Transactions are very small. There are no long-running\n transactions. I start with an empty database and I only run about 5,000 business transactions in my benchmark. That results in 10,000 – 15,000 commits.\n \nWhen I first installed Postgres I did no tuning at all and was able to get around 40 commits per-second which is quite slow. I wanted to establish a top-end so I turned off synchronous commit and ran the same test and got the same performance\n of 40 commits per second. I turned on the “large system cache” option on Windows 7 and got the same results. 
There seems to be some resource issues that’s limiting me to 40 commits per second but I can’t imagine what it could be or how to detect it.\n \nI’m not necessarily looking for advice on how to increase performance, but I at least need to know how to find the bottleneck.\n \n-- Les Walker\n \n\nCONFIDENTIAL: This e-mail, including its contents and attachments, if any, are confidential. If you are not the named recipient please notify the sender and immediately delete it. You may not disseminate, distribute, or forward this e-mail message or disclose its contents to anybody else. Copyright and any other intellectual property rights in its contents are the sole property of Cantor Fitzgerald.\n E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission. If verification is required please request a hard-copy version.\n Although we routinely screen for viruses, addressees should check this e-mail and any attachments for viruses. We make no representation or warranty as to the absence of viruses in this e-mail or any attachments. Please note that to ensure regulatory compliance and for the protection of our customers and business, we may monitor and read e-mails sent to and from our server(s). \nFor further important information, please see http://www.cantor.com/legal/statement",
"msg_date": "Mon, 30 Apr 2012 13:49:41 +0000",
"msg_from": "\"Walker, James Les\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "On 4/30/2012 8:49 AM, Walker, James Les wrote:\n> I’m trying to benchmark Postgres vs. several other databases on my\n> workstation. My workstation is running 64 bit Windows 7. It has 12 gb of\n> RAM and a W3550 @ 3 Ghz. I installed Postgres 9.1 using the windows\n> installer. The data directory is on a 6Gb/s SATA SSD.\n>\n> My application is multithreaded and uses pooled connections via JDBC.\n> It’s got around 20 threads doing asynchronous transactions against the\n> database. It’s about 70% read/30% write. Transactions are very small.\n> There are no long-running transactions. I start with an empty database\n> and I only run about 5,000 business transactions in my benchmark. That\n> results in 10,000 – 15,000 commits.\n>\n> When I first installed Postgres I did no tuning at all and was able to\n> get around 40 commits per-second which is quite slow. I wanted to\n> establish a top-end so I turned off synchronous commit and ran the same\n> test and got the same performance of 40 commits per second. I turned on\n> the “large system cache” option on Windows 7 and got the same results.\n> There seems to be some resource issues that’s limiting me to 40 commits\n> per second but I can’t imagine what it could be or how to detect it.\n>\n> I’m not necessarily looking for advice on how to increase performance,\n> but I at least need to know how to find the bottleneck.\n>\n> -- Les Walker\n>\n\nOne thing I'd look at is your hardware and determine if you are CPU \nbound or IO bound. I use Linux so don't know how you'd do that on windows.\n\nHave you checked your sql statements with \"explain analyze\"?\n\nI don't know anything about config file settings on windows, but on \nLinux its really important. google could probably help you there.\n\nKnowing if you are CPU bound or IO bound, and if you have any bad plans, \nwill tell you what config file changes to make.\n\n-Andy\n\n",
"msg_date": "Mon, 30 Apr 2012 11:26:15 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "On Mon, Apr 30, 2012 at 8:49 AM, Walker, James Les <[email protected]> wrote:\n> I’m trying to benchmark Postgres vs. several other databases on my\n> workstation. My workstation is running 64 bit Windows 7. It has 12 gb of RAM\n> and a W3550 @ 3 Ghz. I installed Postgres 9.1 using the windows installer.\n> The data directory is on a 6Gb/s SATA SSD.\n>\n>\n>\n> My application is multithreaded and uses pooled connections via JDBC. It’s\n> got around 20 threads doing asynchronous transactions against the database.\n> It’s about 70% read/30% write. Transactions are very small. There are no\n> long-running transactions. I start with an empty database and I only run\n> about 5,000 business transactions in my benchmark. That results in 10,000 –\n> 15,000 commits.\n>\n>\n>\n> When I first installed Postgres I did no tuning at all and was able to get\n> around 40 commits per-second which is quite slow. I wanted to establish a\n> top-end so I turned off synchronous commit and ran the same test and got the\n> same performance of 40 commits per second. I turned on the “large system\n> cache” option on Windows 7 and got the same results. There seems to be some\n> resource issues that’s limiting me to 40 commits per second but I can’t\n> imagine what it could be or how to detect it.\n>\n>\n>\n> I’m not necessarily looking for advice on how to increase performance, but I\n> at least need to know how to find the bottleneck.\n\nIt's almost certainly coming from postgres being anal about making\nsure the data is syncing all the way back to the ssd through all the\nbuffers. Although ssd are quite fast, if you run them this way they\nare no better than hard drives. Trying turning off fsync in\npostgrsql.conf to be sure. If you're still seeing poor performance,\ntry posting and explain analyze of the queries you think might be\nslowing you down.\n\nAlso, which ssd?\n\nmerlin\n",
"msg_date": "Mon, 30 Apr 2012 16:43:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "Merlin Moncure wrote on 30.04.2012 23:43:\n> Trying turning off fsync in postgrsql.conf to be sure.\n\nThis is a dangerous advise.\nTurning off fsync can potentially corrupt the database in case of a system failure (e.g. power outage).\n\n\n\n\n\n",
"msg_date": "Tue, 01 May 2012 00:00:09 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "On Mon, Apr 30, 2012 at 5:00 PM, Thomas Kellerer <[email protected]> wrote:\n> Merlin Moncure wrote on 30.04.2012 23:43:\n>\n>> Trying turning off fsync in postgrsql.conf to be sure.\n>\n>\n> This is a dangerous advise.\n> Turning off fsync can potentially corrupt the database in case of a system\n> failure (e.g. power outage).\n>\n\nsure. that said, we're just trying to figure out why he's getting\naround 40tps. since he's only benchmarking test data it's perfectly\nok to do that.\n\nmerlin\n",
"msg_date": "Mon, 30 Apr 2012 17:04:17 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "Exactly, if turning off fsync gives me 100 commits/sec then I know where my bottleneck is and I can attack it. Keep in mind though that I already turned off synchronous commit -- *really* dangerous -- and it didn't have any effect.\n\n-- Les\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Merlin Moncure\nSent: Monday, April 30, 2012 6:04 PM\nTo: Thomas Kellerer\nCc: [email protected]\nSubject: Re: [PERFORM] Tuning Postgres 9.1 on Windows\n\nOn Mon, Apr 30, 2012 at 5:00 PM, Thomas Kellerer <[email protected]> wrote:\n> Merlin Moncure wrote on 30.04.2012 23:43:\n>\n>> Trying turning off fsync in postgrsql.conf to be sure.\n>\n>\n> This is a dangerous advise.\n> Turning off fsync can potentially corrupt the database in case of a \n> system failure (e.g. power outage).\n>\n\nsure. that said, we're just trying to figure out why he's getting\naround 40tps. since he's only benchmarking test data it's perfectly\nok to do that.\n\nmerlin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nCONFIDENTIAL: This e-mail, including its contents and attachments, if any, are confidential. If you are not the named recipient please notify the sender and immediately delete it. You may not disseminate, distribute, or forward this e-mail message or disclose its contents to anybody else. Copyright and any other intellectual property rights in its contents are the sole property of Cantor Fitzgerald.\n E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission. If verification is required please request a hard-copy version.\n Although we routinely screen for viruses, addressees should check this e-mail and any attachments for viruses. We make no representation or warranty as to the absence of viruses in this e-mail or any attachments. Please note that to ensure regulatory compliance and for the protection of our customers and business, we may monitor and read e-mails sent to and from our server(s). \n\nFor further important information, please see http://www.cantor.com/legal/statement\n\n",
"msg_date": "Tue, 1 May 2012 12:51:57 +0000",
"msg_from": "\"Walker, James Les\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "On Tue, May 1, 2012 at 7:51 AM, Walker, James Les <[email protected]> wrote:\n> Exactly, if turning off fsync gives me 100 commits/sec then I know where my bottleneck is and I can attack it. Keep in mind though that I already turned off synchronous commit -- *really* dangerous -- and it didn't have any effect.\n\nwell synchronous commit is not as dangerous:\nfsync off + power failure = corrupt database\nsynchronous commit off + power failure = some lost transactions\n\nstill waiting on the ssd model #. worst case scenario is that you tps\nrate is in fact sync bound and you have a ssd without capacitor backed\nbuffers (for example, the intel 320 has them); the probable workaround\nwould be to set the drive cache from write through to write back but\nit would unsafe in that case. in other words, tps rates in the triple\ndigits would be physically impossible.\n\nanother less likely scenario is you are having network issues\n(assuming you are connecting to the database through tcp/ip). 20\nyears in, microsoft is still figuring out how to properly configure a\nnetwork socket.\n\nmerlin\n",
"msg_date": "Tue, 1 May 2012 08:06:38 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "SSD is OCZ-VERTEX3 MI. Controller is LSI SAS2 2008 Falcon. I'm working on installing EDB. Then I can give you some I/O numbers.\n\n-- Les\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: Tuesday, May 01, 2012 9:07 AM\nTo: Walker, James Les\nCc: Thomas Kellerer; [email protected]\nSubject: Re: [PERFORM] Tuning Postgres 9.1 on Windows\n\nOn Tue, May 1, 2012 at 7:51 AM, Walker, James Les <[email protected]> wrote:\n> Exactly, if turning off fsync gives me 100 commits/sec then I know where my bottleneck is and I can attack it. Keep in mind though that I already turned off synchronous commit -- *really* dangerous -- and it didn't have any effect.\n\nwell synchronous commit is not as dangerous:\nfsync off + power failure = corrupt database synchronous commit off + power failure = some lost transactions\n\nstill waiting on the ssd model #. worst case scenario is that you tps rate is in fact sync bound and you have a ssd without capacitor backed buffers (for example, the intel 320 has them); the probable workaround would be to set the drive cache from write through to write back but it would unsafe in that case. in other words, tps rates in the triple digits would be physically impossible.\n\nanother less likely scenario is you are having network issues (assuming you are connecting to the database through tcp/ip). 20 years in, microsoft is still figuring out how to properly configure a network socket.\n\nmerlin\nCONFIDENTIAL: This e-mail, including its contents and attachments, if any, are confidential. If you are not the named recipient please notify the sender and immediately delete it. You may not disseminate, distribute, or forward this e-mail message or disclose its contents to anybody else. Copyright and any other intellectual property rights in its contents are the sole property of Cantor Fitzgerald.\n E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission. If verification is required please request a hard-copy version.\n Although we routinely screen for viruses, addressees should check this e-mail and any attachments for viruses. We make no representation or warranty as to the absence of viruses in this e-mail or any attachments. Please note that to ensure regulatory compliance and for the protection of our customers and business, we may monitor and read e-mails sent to and from our server(s). \n\nFor further important information, please see http://www.cantor.com/legal/statement\n\n",
"msg_date": "Tue, 1 May 2012 13:14:35 +0000",
"msg_from": "\"Walker, James Les\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "On 5/1/2012 8:06 AM, Merlin Moncure wrote:\n> On Tue, May 1, 2012 at 7:51 AM, Walker, James Les<[email protected]> wrote:\n>> Exactly, if turning off fsync gives me 100 commits/sec then I know where my bottleneck is and I can attack it. Keep in mind though that I already turned off synchronous commit -- *really* dangerous -- and it didn't have any effect.\n>\n> well synchronous commit is not as dangerous:\n> fsync off + power failure = corrupt database\n> synchronous commit off + power failure = some lost transactions\n>\n> still waiting on the ssd model #. worst case scenario is that you tps\n> rate is in fact sync bound and you have a ssd without capacitor backed\n> buffers (for example, the intel 320 has them); the probable workaround\n> would be to set the drive cache from write through to write back but\n> it would unsafe in that case. in other words, tps rates in the triple\n> digits would be physically impossible.\n>\n> another less likely scenario is you are having network issues\n> (assuming you are connecting to the database through tcp/ip). 20\n> years in, microsoft is still figuring out how to properly configure a\n> network socket.\n>\n> merlin\n>\n\nEven if its all local, windows doesnt have domain sockets (correct?), so \nall that traffic still has to go thru some bit of network stack, yes?\n\n-Andy\n",
"msg_date": "Tue, 01 May 2012 08:42:14 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "On Tue, May 1, 2012 at 8:14 AM, Walker, James Les <[email protected]> wrote:\n> SSD is OCZ-VERTEX3 MI. Controller is LSI SAS2 2008 Falcon. I'm working on installing EDB. Then I can give you some I/O numbers.\n\nIt looks like the ssd doesn't have a nv cache and the raid card is a\nsimple sas hba (which likely isn't doing much for the ssd besides\nmasking TRIM). The OCZ 'pro' versions are the ones with power loss\nprotection (see:\nhttp://hothardware.com/Reviews/OCZ-Vertex-3-Pro-SandForce-SF2000-Based-SSD-Preview/).\n Note the bullet: \"Implements SandForce 2582 Controller with power\nloss data protection\". It doesn't look like the Vertex 3 Pro is out\nyet.\n\nIf my hunch is correct, the issue here is that the drive is being\nasked to sync data physically and SSD really don't perform well when\nthe controller isn't in control of when and how to sync data. However\nfull physical sync is the only way to guarantee data is truly safe in\nthe context of a unexpected power loss (an nv cache is basically a\ncompromise on this point).\n\nmerlin\n",
"msg_date": "Tue, 1 May 2012 08:43:00 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "I installed the enterprisedb distribution and immediately saw a 400% performance increase. Turning off fsck made it an order of magnitude better. I'm now peaking at over 400 commits per second. Does that sound right?\n\nIf I understand what you're saying, then to sustain this high rate I'm going to need a controller that can defer fsync requests from the host because it has some sort of battery backup that guarantees the full write.\n\n-- Les\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: Tuesday, May 01, 2012 9:43 AM\nTo: Walker, James Les\nCc: Thomas Kellerer; [email protected]\nSubject: Re: [PERFORM] Tuning Postgres 9.1 on Windows\n\nOn Tue, May 1, 2012 at 8:14 AM, Walker, James Les <[email protected]> wrote:\n> SSD is OCZ-VERTEX3 MI. Controller is LSI SAS2 2008 Falcon. I'm working on installing EDB. Then I can give you some I/O numbers.\n\nIt looks like the ssd doesn't have a nv cache and the raid card is a simple sas hba (which likely isn't doing much for the ssd besides masking TRIM). The OCZ 'pro' versions are the ones with power loss protection (see:\nhttp://hothardware.com/Reviews/OCZ-Vertex-3-Pro-SandForce-SF2000-Based-SSD-Preview/).\n Note the bullet: \"Implements SandForce 2582 Controller with power loss data protection\". It doesn't look like the Vertex 3 Pro is out yet.\n\nIf my hunch is correct, the issue here is that the drive is being asked to sync data physically and SSD really don't perform well when the controller isn't in control of when and how to sync data. However full physical sync is the only way to guarantee data is truly safe in the context of a unexpected power loss (an nv cache is basically a compromise on this point).\n\nmerlin\nCONFIDENTIAL: This e-mail, including its contents and attachments, if any, are confidential. If you are not the named recipient please notify the sender and immediately delete it. You may not disseminate, distribute, or forward this e-mail message or disclose its contents to anybody else. Copyright and any other intellectual property rights in its contents are the sole property of Cantor Fitzgerald.\n E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission. If verification is required please request a hard-copy version.\n Although we routinely screen for viruses, addressees should check this e-mail and any attachments for viruses. We make no representation or warranty as to the absence of viruses in this e-mail or any attachments. Please note that to ensure regulatory compliance and for the protection of our customers and business, we may monitor and read e-mails sent to and from our server(s). \n\nFor further important information, please see http://www.cantor.com/legal/statement\n\n",
"msg_date": "Tue, 1 May 2012 14:44:15 +0000",
"msg_from": "\"Walker, James Les\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "On Tue, May 1, 2012 at 9:44 AM, Walker, James Les <[email protected]> wrote:\n> I installed the enterprisedb distribution and immediately saw a 400% performance increase. Turning off fsck made it an order of magnitude better. I'm now peaking at over 400 commits per second. Does that sound right?\n\nyeah -- well it's hard to say but that sounds plausible based on what\ni know. it would be helpful to see the queries you're running to get\napples to apples idea of what's going on.\n\n> If I understand what you're saying, then to sustain this high rate I'm going to need a controller that can defer fsync requests from the host because it has some sort of battery backup that guarantees the full write.\n\nyes -- historically, they way to get your tps rate up was to get a\nbattery backed cache. this can give you burst (although not\nnecessarily sustained) tps rates well above what the drive can handle.\n lately, a few of the better built ssd also have on board capacitors\nwhich provide a similar function and allow the drives to safely hit\nhigh tps rates as well. take a good look at the intel 320 and 710\ndrives.\n\nmerlin\n",
"msg_date": "Tue, 1 May 2012 11:45:00 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "Walker, James Les wrote on 01.05.2012 16:44:\n> I installed the enterprisedb distribution and immediately saw a 400% performance increase.\n\nWhat exactly is \"the enterprisedb distribution\"?\nAre you talking about the the Advanced Server?\n\nI would be very surprised if the code base would differ so much to allow such a performance gain.\nCould it be that the default settings for the Advanced Server are different than those of the \"community edition\"?\n\nAnd what did you have installed before that? (as the Windows binary are always distributed by EnterpriseDB)\n\nThomas\n\n",
"msg_date": "Tue, 01 May 2012 19:00:02 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "Yes. I didn't know the proper vernacular :-)\n\nIt is very likely that the default settings are different. I'm looking at that right now.\n\n-- Les Walker\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Thomas Kellerer\nSent: Tuesday, May 01, 2012 1:00 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Tuning Postgres 9.1 on Windows\n\nWalker, James Les wrote on 01.05.2012 16:44:\n> I installed the enterprisedb distribution and immediately saw a 400% performance increase.\n\nWhat exactly is \"the enterprisedb distribution\"?\nAre you talking about the the Advanced Server?\n\nI would be very surprised if the code base would differ so much to allow such a performance gain.\nCould it be that the default settings for the Advanced Server are different than those of the \"community edition\"?\n\nAnd what did you have installed before that? (as the Windows binary are always distributed by EnterpriseDB)\n\nThomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nCONFIDENTIAL: This e-mail, including its contents and attachments, if any, are confidential. If you are not the named recipient please notify the sender and immediately delete it. You may not disseminate, distribute, or forward this e-mail message or disclose its contents to anybody else. Copyright and any other intellectual property rights in its contents are the sole property of Cantor Fitzgerald.\n E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission. If verification is required please request a hard-copy version.\n Although we routinely screen for viruses, addressees should check this e-mail and any attachments for viruses. We make no representation or warranty as to the absence of viruses in this e-mail or any attachments. Please note that to ensure regulatory compliance and for the protection of our customers and business, we may monitor and read e-mails sent to and from our server(s). \n\nFor further important information, please see http://www.cantor.com/legal/statement\n\n",
"msg_date": "Tue, 1 May 2012 19:13:34 +0000",
"msg_from": "\"Walker, James Les\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
},
{
"msg_contents": "Turns out the 40% was due to a configuration problem with my application. I'm now getting the same performance with community edition.\n\nIt appears that I'm now CPU bound. My CPU's are all pegged.\n\n-- Les Walker\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Walker, James Les\nSent: Tuesday, May 01, 2012 3:14 PM\nTo: 'Thomas Kellerer'; [email protected]\nSubject: Re: [PERFORM] Tuning Postgres 9.1 on Windows\n\nYes. I didn't know the proper vernacular :-)\n\nIt is very likely that the default settings are different. I'm looking at that right now.\n\n-- Les Walker\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Thomas Kellerer\nSent: Tuesday, May 01, 2012 1:00 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Tuning Postgres 9.1 on Windows\n\nWalker, James Les wrote on 01.05.2012 16:44:\n> I installed the enterprisedb distribution and immediately saw a 400% performance increase.\n\nWhat exactly is \"the enterprisedb distribution\"?\nAre you talking about the the Advanced Server?\n\nI would be very surprised if the code base would differ so much to allow such a performance gain.\nCould it be that the default settings for the Advanced Server are different than those of the \"community edition\"?\n\nAnd what did you have installed before that? (as the Windows binary are always distributed by EnterpriseDB)\n\nThomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nCONFIDENTIAL: This e-mail, including its contents and attachments, if any, are confidential. If you are not the named recipient please notify the sender and immediately delete it. You may not disseminate, distribute, or forward this e-mail message or disclose its contents to anybody else. Copyright and any other intellectual property rights in its contents are the sole property of Cantor Fitzgerald.\n E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission. If verification is required please request a hard-copy version.\n Although we routinely screen for viruses, addressees should check this e-mail and any attachments for viruses. We make no representation or warranty as to the absence of viruses in this e-mail or any attachments. Please note that to ensure regulatory compliance and for the protection of our customers and business, we may monitor and read e-mails sent to and from our server(s). \n\nFor further important information, please see http://www.cantor.com/legal/statement\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 May 2012 21:46:40 +0000",
"msg_from": "\"Walker, James Les\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres 9.1 on Windows"
}
] |
[
{
"msg_contents": "Hi,\n We have recently switch our product from MS SQL 2000 to Postgresql \n9.0.7. We have tuned the searches and indexes so that they are very \nclose (often better) to what sql2k was giving us. We are noticing some \ndifferences now in the time it takes for the result set to make it back \nto the client and would like some help finding out why.\n\nWhat we see on the PG side is that if we run:\n Select SomeInt32 from someTable where something Limit 1\nIt consistently returns the results \"instantaneously\" after the fetch \ntime. If we run the same select but ask for more data the fetch time \nstays the same but the row takes longer to come back. Bringing back 400 \nbytes takes 1-2 s but bringing back 866 bytes takes 9 - 11 s.\n\nWe went to the SQL2k server (On the same hardware) and ran the selects \nagain. When bringing back on an int32 PG was faster with the fetch and \nthe row coming back in 1-5 ms and SQL2k coming back in 500-700 ms. This \ntells me that the problem is not related to PG index or Disk. When \nbringing back 400 bytes PG fetch time would be 1-2 ms but the results \nwould take 2-3 s but SQL2k would it bring back in 700-900 ms. Working \nwith 866 bytes, PG fetch time is 1-3 ms with the results coming back in \n9 - 11 s and SQL2k bringing the results back in 2-3 s.\n\nThe head to head test was run in Aqua Data Studio 10.0.8 and ODBC driver \n9.0.3.10. The same slow down happens when use PGadminIII. The differnces \nin time to not occure when running on the pg/sql server computer so I \nthink there is a network component to this.\n\nI know that as you bring back more data it takes longer but why is there \nsuch a difference in the time it takes PG to send the data compared to \nthe time it takes sql2k to send it?\n\nAny thoughts and suggestions are very much appreciated\nThanks\nRon\n-- \n\n*Ronald Hahn* , CCP, CIPS Member\n*DOCFOCUS INC.*\nSuite 103, 17505 - 107 Avenue,\nEdmonton, Alberta T5S 1E5\nPhone: 780.444.5407\nToll Free: 800.661.2727 (ext 6)\nFax: 780.444.5409\nEmail: [email protected]\nSupport:[email protected] <mailto:[email protected]>\nDOCFOCUS.ca <http://docfocus.ca/>\n\nThere are 2 kinds of people in the world.\nThose who can extrapolate from incomplete data\n\n\n\n\n\n\n\n Hi, \n We have recently switch our product from MS SQL 2000 to\n Postgresql 9.0.7. We have tuned the searches and indexes so that\n they are very close (often better) to what sql2k was giving us. We\n are noticing some differences now in the time it takes for the\n result set to make it back to the client and would like some help\n finding out why.\n\n What we see on the PG side is that if we run:\n Select SomeInt32 from someTable where something Limit 1 \n It consistently returns the results \"instantaneously\" after the\n fetch time. If we run the same select but ask for more data the\n fetch time stays the same but the row takes longer to come back. \n Bringing back 400 bytes takes 1-2 s but bringing back 866 bytes\n takes 9 - 11 s. \n\n We went to the SQL2k server (On the same hardware) and ran the\n selects again. When bringing back on an int32 PG was faster with the\n fetch and the row coming back in 1-5 ms and SQL2k coming back in\n 500-700 ms. This tells me that the problem is not related to PG\n index or Disk. When bringing back 400 bytes PG fetch time would be\n 1-2 ms but the results would take 2-3 s but SQL2k would it bring\n back in 700-900 ms. 
Working with 866 bytes, PG fetch time is 1-3 ms\n with the results coming back in 9 - 11 s and SQL2k bringing the\n results back in 2-3 s.\n\n The head to head test was run in Aqua Data Studio 10.0.8 and ODBC\n driver 9.0.3.10. The same slow down happens when use PGadminIII. The\n differnces in time to not occure when running on the pg/sql server\n computer so I think there is a network component to this.\n\n I know that as you bring back more data it takes longer but why is\n there such a difference in the time it takes PG to send the data\n compared to the time it takes sql2k to send it? \n\n Any thoughts and suggestions are very much appreciated \n Thanks\n Ron\n-- \n\nSignature\n\n\n\n\n\nRonald Hahn\n, CCP, CIPS Member\n\nDOCFOCUS INC.\n\n Suite 103, 17505 - 107 Avenue, \n Edmonton, Alberta T5S 1E5\n Phone: 780.444.5407\n Toll Free: 800.661.2727 (ext 6)\n Fax: 780.444.5409 \n Email: [email protected]\n Support:[email protected]\n\nDOCFOCUS.ca\n\n\n There are 2 kinds of people in the world. \n Those who can extrapolate from incomplete data",
"msg_date": "Mon, 30 Apr 2012 12:32:42 -0600",
"msg_from": "\"Ronald Hahn, DOCFOCUS INC.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Result Set over Network Question"
},
{
"msg_contents": "On Mon, Apr 30, 2012 at 3:32 PM, Ronald Hahn, DOCFOCUS INC.\n<[email protected]> wrote:\n> We went to the SQL2k server (On the same hardware) and ran the selects\n> again. When bringing back on an int32 PG was faster with the fetch and the\n> row coming back in 1-5 ms and SQL2k coming back in 500-700 ms. This tells me\n> that the problem is not related to PG index or Disk. When bringing back 400\n> bytes PG fetch time would be 1-2 ms but the results would take 2-3 s but\n> SQL2k would it bring back in 700-900 ms. Working with 866 bytes, PG fetch\n> time is 1-3 ms with the results coming back in 9 - 11 s and SQL2k bringing\n> the results back in 2-3 s.\n\nI think the opposite. I'm thinking it's quite probable that it's disk\naccess the one killing you. Remember, two different database systems\nmeans two different access patterns.\n\nTo figure it out, you have to provide a lot more information on your\nsystem and your query. Check out how to post \"Slow Query Questions\"\n[0]. Only after getting all that information the people of the list\nwill be able to have a clue as to what your problem is.\n\n[0] http://wiki.postgresql.org/wiki/SlowQueryQuestions\n",
"msg_date": "Thu, 3 May 2012 11:28:18 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "On Mon, Apr 30, 2012 at 1:32 PM, Ronald Hahn, DOCFOCUS INC.\n<[email protected]> wrote:\n> Hi,\n> We have recently switch our product from MS SQL 2000 to Postgresql\n> 9.0.7. We have tuned the searches and indexes so that they are very close\n> (often better) to what sql2k was giving us. We are noticing some\n> differences now in the time it takes for the result set to make it back to\n> the client and would like some help finding out why.\n>\n> What we see on the PG side is that if we run:\n> Select SomeInt32 from someTable where something Limit 1\n> It consistently returns the results \"instantaneously\" after the fetch\n> time. If we run the same select but ask for more data the fetch time stays\n> the same but the row takes longer to come back. Bringing back 400 bytes\n> takes 1-2 s but bringing back 866 bytes takes 9 - 11 s.\n>\n> We went to the SQL2k server (On the same hardware) and ran the selects\n> again. When bringing back on an int32 PG was faster with the fetch and the\n> row coming back in 1-5 ms and SQL2k coming back in 500-700 ms. This tells me\n> that the problem is not related to PG index or Disk. When bringing back 400\n> bytes PG fetch time would be 1-2 ms but the results would take 2-3 s but\n> SQL2k would it bring back in 700-900 ms. Working with 866 bytes, PG fetch\n> time is 1-3 ms with the results coming back in 9 - 11 s and SQL2k bringing\n> the results back in 2-3 s.\n>\n> The head to head test was run in Aqua Data Studio 10.0.8 and ODBC driver\n> 9.0.3.10. The same slow down happens when use PGadminIII. The differnces in\n> time to not occure when running on the pg/sql server computer so I think\n> there is a network component to this.\n\nto rule out network just do:\ncreate temp table scratch as select <your query>...\n\nif it's a lot faster, then you have a probable network issue.\n\nmerlin\n",
"msg_date": "Thu, 3 May 2012 10:00:59 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "After some testing using wiershark (poor mans profiler) to see what was \ngoing on with the network I found that it was the tools I've been using. \nBoth Aqua and PGadminIII have a large overhead per column to get the \nmeta data. MSSQL sends that data upfront so the impact isn't as bad. I'm \nnot sure if it's a limitation of the pgsql protocol vs tds or a \nlimitation of Aqua or a combination of both. At any rate it turns out \nnot to be part of the problem I'm having with my software stalling out \nso I'm back to Square one with my problem.\n\nThanks,\nRon\n\n\n\n\n\n\n\n After some testing using wiershark (poor mans profiler) to see what was\n going on with the network I found that it was the tools I've been\n using. Both Aqua and PGadminIII have a large overhead per column to\n get the meta data. MSSQL sends that data upfront so the impact isn't\n as bad. I'm not sure if it's a limitation of the pgsql protocol vs\n tds or a limitation of Aqua or a combination of both. At any rate it\n turns out not to be part of the problem I'm having with my software\n stalling out so I'm back to Square one with my problem.\n\n Thanks, \n Ron",
"msg_date": "Thu, 03 May 2012 09:28:44 -0600",
"msg_from": "\"Ronald Hahn, DOCFOCUS INC.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "On Thu, May 3, 2012 at 10:28 AM, Ronald Hahn, DOCFOCUS INC.\n<[email protected]> wrote:\n> After some testing using wiershark (poor mans profiler) to see what was\n> going on with the network I found that it was the tools I've been using.\n> Both Aqua and PGadminIII have a large overhead per column to get the meta\n> data. MSSQL sends that data upfront so the impact isn't as bad. I'm not sure\n> if it's a limitation of the pgsql protocol vs tds or a limitation of Aqua or\n> a combination of both. At any rate it turns out not to be part of the\n> problem I'm having with my software stalling out so I'm back to Square one\n> with my problem.\n\nok, let's figure out what the issue is then. first, let's make sure\nit isn't the server that's stalling: configure\nlog_min_duration_statement with an appropriate value so you start\ncatching queries that are taking longer then you think the should be.\n also some client side logging directly wrapping the SQL invocation\ncouldn't hurt. is your application jdbc?\n\nmerlin\n",
"msg_date": "Thu, 3 May 2012 10:40:26 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "On Thu, May 3, 2012 at 5:40 PM, Merlin Moncure <[email protected]> wrote:\n> On Thu, May 3, 2012 at 10:28 AM, Ronald Hahn, DOCFOCUS INC.\n> <[email protected]> wrote:\n>> After some testing using wiershark (poor mans profiler) to see what was\n>> going on with the network I found that it was the tools I've been using.\n>> Both Aqua and PGadminIII have a large overhead per column to get the meta\n>> data. MSSQL sends that data upfront so the impact isn't as bad. I'm not sure\n>> if it's a limitation of the pgsql protocol vs tds or a limitation of Aqua or\n>> a combination of both. At any rate it turns out not to be part of the\n>> problem I'm having with my software stalling out so I'm back to Square one\n>> with my problem.\n\nSo, Ronald, are you saying the different approach to meta data\ntransfer is _not_ the issue?\n\n> ok, let's figure out what the issue is then. first, let's make sure\n> it isn't the server that's stalling: configure\n> log_min_duration_statement with an appropriate value so you start\n> catching queries that are taking longer then you think the should be.\n> also some client side logging directly wrapping the SQL invocation\n> couldn't hurt. is your application jdbc?\n\nRonald said ODBC in his first posting. But since ADS seems to support\nJDBC as well trying that might be a good test to get another data\npoint. Alternative tools for JDBC tests:\n\nhttp://squirrel-sql.sourceforge.net/\nhttp://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html\n\nUsing the PG client remotely with \"\\timing on\" might be an even better idea.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 7 May 2012 14:03:46 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "Robert Klemme, 07.05.2012 14:03:\n> Alternative tools for JDBC tests:\n>\n> http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html\n\nSQL Developer does not support PostgreSQL\n\nThis page:\n\n http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools\n\nalso lists several JDBC based tools.\n\nThomas\n\n\n\n\n",
"msg_date": "Mon, 07 May 2012 14:11:53 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "On Mon, May 7, 2012 at 2:11 PM, Thomas Kellerer <[email protected]> wrote:\n> Robert Klemme, 07.05.2012 14:03:\n>>\n>> Alternative tools for JDBC tests:\n>>\n>> http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html\n>\n> SQL Developer does not support PostgreSQL\n\nLast time I checked (quite a while ago) you could use arbitrary JDBC\ndrivers. There's also\nhttp://docs.oracle.com/cd/E25259_01/appdev.31/e24285/intro.htm#sthref306\n\nAnd this seems to indicate that it's still the case: \"[...] or another\nthird-party driver. [...]\nJDBC URL (Other Third Party Driver): URL for connecting directly from\nJava to the database; overrides any other connection type\nspecification.\"\nhttp://docs.oracle.com/cd/E25259_01/appdev.31/e24285/dialogs.htm#BACDGCIA\n\nI assume Oracle is not interested in aggressively advertizing this\nfeature though.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 7 May 2012 15:44:36 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "Robert Klemme, 07.05.2012 15:44:\n>>> http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html\n>>\n>> SQL Developer does not support PostgreSQL\n>\n> Last time I checked (quite a while ago) you could use arbitrary JDBC\n> drivers. There's also\n> http://docs.oracle.com/cd/E25259_01/appdev.31/e24285/intro.htm#sthref306\n>\n> And this seems to indicate that it's still the case: \"[...] or another\n> third-party driver. [...]\n> JDBC URL (Other Third Party Driver): URL for connecting directly from\n> Java to the database; overrides any other connection type\n> specification.\"\n> http://docs.oracle.com/cd/E25259_01/appdev.31/e24285/dialogs.htm#BACDGCIA\n>\n> I assume Oracle is not interested in aggressively advertizing this\n> feature though.\n\nThat seems to be a documentation bug.\nI tried it, and it definitely does not work (or I am missing something).\n\nTheir release notes at: http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev31-ea-relnotes-487612.html\n\n\nstate:\n\nThird Party Databases\n\n SQL Developer supports IBM DB2 UDB LUW , Microsoft SQL Server and Microsoft Access, MySQL, Sybase Adaptive Server and Teradata.\n See Supported Platforms for details on all third party database releases supported.\n\n\nRegards\nThomas\n\n",
"msg_date": "Mon, 07 May 2012 16:25:44 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
},
{
"msg_contents": "On Mon, May 7, 2012 at 4:25 PM, Thomas Kellerer <[email protected]> wrote:\n> That seems to be a documentation bug.\n> I tried it, and it definitely does not work (or I am missing something).\n\nApparently I am the one who is missing something. :-)\n\n> Their release notes at:\n> http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev31-ea-relnotes-487612.html\n>\n> state:\n>\n> Third Party Databases\n>\n> SQL Developer supports IBM DB2 UDB LUW , Microsoft SQL Server and\n> Microsoft Access, MySQL, Sybase Adaptive Server and Teradata.\n> See Supported Platforms for details on all third party database releases\n> supported.\n\nRight you are, Thomas! Thank you! Sorry for the confusion.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Tue, 8 May 2012 11:54:59 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Result Set over Network Question"
}
] |
[
{
"msg_contents": "We just upgraded from 9.0 to 9.1, we're using the same server configuration, that has been confirmed 3 or 4 times over. Any help would be appreciated. If I remove the \"ORDER BY\" it gets fast again because it goes back to using the user_id index, if I remove the LIMIT/OFFSET it gets fast again, obviously I need both of those, but that was just to test and see what would happen.\n\nQuery: SELECT * FROM bookmark_groups WHERE user_id = 6708929 ORDER BY created DESC LIMIT 25 OFFSET 0;\n\nexplain analyze from 9.0:\n\nLimit (cost=1436.78..1436.84 rows=25 width=99) (actual time=15.399..15.403 rows=25 loops=1)\n -> Sort (cost=1436.78..1438.67 rows=757 width=99) (actual time=15.397..15.397 rows=25 loops=1)\n Sort Key: created\n Sort Method: top-N heapsort Memory: 28kB\n -> Index Scan using bookmark_groups_user_id_idx on bookmark_groups (cost=0.00..1415.42 rows=757 width=99) (actual time=0.011..9.953 rows=33868 loops=1)\n Index Cond: (user_id = 6708929)\nTotal runtime: 15.421 ms\n\n\nexplain analyze from 9.1:\n\nLimit (cost=0.00..1801.30 rows=25 width=101) (actual time=1565.071..5002.068 rows=25 loops=1)\n -> Index Scan using bookmark_groups_created_idx on bookmark_groups (cost=0.00..2592431.76 rows=35980 width=101) (actual time=1565.069..5002.062 rows=25 loops=1)\n Filter: (user_id = 6708929)\nTotal runtime: 5002.095 ms\n\n\nDDL:\n\nCREATE TABLE \"public\".\"bookmark_groups\" (\n\"id\" int8 NOT NULL DEFAULT nextval('bookmark_groups_id_seq'::regclass),\n\"user_id\" int4 NOT NULL DEFAULT NULL,\n\"version\" varchar DEFAULT NULL,\n\"created\" timestamp(6) WITH TIME ZONE NOT NULL DEFAULT now(),\n\"username\" varchar NOT NULL DEFAULT NULL,\n\"labels\" varchar DEFAULT NULL,\n\"reference\" varchar NOT NULL DEFAULT NULL,\n\"human\" varchar NOT NULL DEFAULT NULL,\n\"highlight_color\" char(6) DEFAULT NULL,\n\"title\" varchar DEFAULT NULL,\n\"version_id\" int4 NOT NULL DEFAULT NULL,\nCONSTRAINT \"bookmark_groups_pkey1\" PRIMARY KEY (\"id\", \"reference\")\n)\nWITH (OIDS=FALSE);\nALTER TABLE \"public\".\"bookmark_groups\" OWNER TO \"dev\";\nCREATE INDEX \"bookmark_groups_created_idx\" ON \"public\".\"bookmark_groups\" USING btree(created DESC NULLS FIRST);\nCREATE INDEX \"bookmark_groups_user_id_idx\" ON \"public\".\"bookmark_groups\" USING btree(user_id ASC NULLS LAST);\n\n\nWe just upgraded from 9.0 to 9.1, we're using the same server configuration, that has been confirmed 3 or 4 times over. Any help would be appreciated. 
If I remove the \"ORDER BY\" it gets fast again because it goes back to using the user_id index, if I remove the LIMIT/OFFSET it gets fast again, obviously I need both of those, but that was just to test and see what would happen.Query: SELECT * FROM bookmark_groups WHERE user_id = 6708929 ORDER BY created DESC LIMIT 25 OFFSET 0;explain analyze from 9.0:Limit (cost=1436.78..1436.84 rows=25 width=99) (actual time=15.399..15.403 rows=25 loops=1) -> Sort (cost=1436.78..1438.67 rows=757 width=99) (actual time=15.397..15.397 rows=25 loops=1) Sort Key: created Sort Method: top-N heapsort Memory: 28kB -> Index Scan using bookmark_groups_user_id_idx on bookmark_groups (cost=0.00..1415.42 rows=757 width=99) (actual time=0.011..9.953 rows=33868 loops=1) Index Cond: (user_id = 6708929)Total runtime: 15.421 msexplain analyze from 9.1:Limit (cost=0.00..1801.30 rows=25 width=101) (actual time=1565.071..5002.068 rows=25 loops=1) -> Index Scan using bookmark_groups_created_idx on bookmark_groups (cost=0.00..2592431.76 rows=35980 width=101) (actual time=1565.069..5002.062 rows=25 loops=1) Filter: (user_id = 6708929)Total runtime: 5002.095 msDDL:CREATE TABLE \"public\".\"bookmark_groups\" ( \"id\" int8 NOT NULL DEFAULT nextval('bookmark_groups_id_seq'::regclass), \"user_id\" int4 NOT NULL DEFAULT NULL, \"version\" varchar DEFAULT NULL, \"created\" timestamp(6) WITH TIME ZONE NOT NULL DEFAULT now(), \"username\" varchar NOT NULL DEFAULT NULL, \"labels\" varchar DEFAULT NULL, \"reference\" varchar NOT NULL DEFAULT NULL, \"human\" varchar NOT NULL DEFAULT NULL, \"highlight_color\" char(6) DEFAULT NULL, \"title\" varchar DEFAULT NULL, \"version_id\" int4 NOT NULL DEFAULT NULL, CONSTRAINT \"bookmark_groups_pkey1\" PRIMARY KEY (\"id\", \"reference\"))WITH (OIDS=FALSE);ALTER TABLE \"public\".\"bookmark_groups\" OWNER TO \"dev\";CREATE INDEX \"bookmark_groups_created_idx\" ON \"public\".\"bookmark_groups\" USING btree(created DESC NULLS FIRST);CREATE INDEX \"bookmark_groups_user_id_idx\" ON \"public\".\"bookmark_groups\" USING btree(user_id ASC NULLS LAST);",
"msg_date": "Mon, 30 Apr 2012 16:17:47 -0500",
"msg_from": "Josh Turmel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query got slow from 9.0 to 9.1 upgrade"
},
{
"msg_contents": "On Tue, May 1, 2012 at 12:17 AM, Josh Turmel <[email protected]> wrote:\n> We just upgraded from 9.0 to 9.1, we're using the same server configuration,\n> that has been confirmed 3 or 4 times over. Any help would be appreciated. If\n> I remove the \"ORDER BY\" it gets fast again because it goes back to using the\n> user_id index, if I remove the LIMIT/OFFSET it gets fast again, obviously I\n> need both of those, but that was just to test and see what would happen.\n>\n> Query: SELECT * FROM bookmark_groups WHERE user_id = 6708929 ORDER BY\n> created DESC LIMIT 25 OFFSET 0;\n\nBased on the explain numbers I'd say that 9.0 was fast by accident of\nhaving inaccurate statistics. You can see that 9.0 estimated that 757\nrows have this user_id, while actually it had 33868 rows. 9.1\nestimated a more accurate 35980 rows, and because of that assumed that\nreading the newest created rows would return 25 rows of this user\nrather fast, faster than sorting the 35980 rows. This assumption seems\nto be incorrect, probably because the rows with this user_id are all\nrather old.\n\nYou could try tweaking cpu_index_tuple_cost to be higher so that large\nindex scans get penalized. But ultimately with the current PG version\nthere isn't a good general way to fix this kind of behavior. You can\nrewrite the query to enforce filtering before sorting:\n\nSELECT * FROM (\n SELECT * FROM bookmark_groups WHERE user_id = 6708929\n OFFSET 0 -- Prevents pushdown of ordering and limit\n) AS sub ORDER BY created DESC LIMIT 25 OFFSET 0;\n\nThis is the same issue that Simon Riggs talks about in this e-mail:\nhttp://archives.postgresql.org/message-id/CA+U5nMLbXfUT9cWDHJ3tpxjC3bTWqizBKqTwDgzebCB5bAGCgg@mail.gmail.com\n\nThe more general approach is to be more pessimistic about limited\nfiltered index-scans, or collecting multi-dimensional stats to figure\nout the correlation that all rows for this user are likely to be old.\n\nRegards,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Thu, 3 May 2012 15:13:31 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query got slow from 9.0 to 9.1 upgrade"
},
{
"msg_contents": "On 4/30/2012 5:17 PM, Josh Turmel wrote:\n> We just upgraded from 9.0 to 9.1, we're using the same server\n> configuration, that has been confirmed 3 or 4 times over. Any help would\n> be appreciated. If I remove the \"ORDER BY\" it gets fast again because it\n> goes back to using the user_id index, if I remove the LIMIT/OFFSET it\n> gets fast again, obviously I need both of those, but that was just to\n> test and see what would happen.\n\nI had this problem as well and ended up working around it by having the\napplication cache the highest seen user_id and send that back to the\nserver which uses it in a where clause. This way I had just the LIMIT\nand was able to remove the OFFSET and things ran great. I don't know\nhow feasible it is for you to change things application side but it\nworked well for me.\n\nJonathan\n\n",
"msg_date": "Thu, 03 May 2012 09:05:29 -0400",
"msg_from": "Jonathan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query got slow from 9.0 to 9.1 upgrade"
},
{
"msg_contents": "This is very similar with my problem: \nhttp://postgresql.1045698.n5.nabble.com/index-choosing-problem-td5567320.html\n",
"msg_date": "Thu, 03 May 2012 23:42:30 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query got slow from 9.0 to 9.1 upgrade"
}
] |
[
{
"msg_contents": "Hi,\n\nI am using postgresql as database for a hibernate based java oltp\nproject and as in previous projects am totally impressed by\npostgresql's robustness, performance and feature-richness. Thanks for\nthis excellent piece of software.\n\nQuite often Hibernate ends up generating queries with a lot of joins\nwhich usually works well, except for queries which load some\nadditional data based on a previous query (SUBSELECT collections),\nwhich look like:\n\nselect ..... from table1 ... left outer join table 15 .... WHERE\ntable1.id IN (select id .... join table16 ... join table20 WHERE\ntable20.somevalue=?)\n\nStarting with some amount of joins, the optimizer starts to do quite\nsuboptimal things like hash-joining huge tables where selctivity would\nvery low.\nI already raised join_collapse_limit and from_collapse_limit, but\nafter a certain point query planning starts to become very expensive.\n\nHowever, when using \" =ANY(ARRAY(select ...))\" instead of \"IN\" the\nplanner seems to do a lot better, most likely because it treats the\nsubquery as a black-box that needs to be executed independently. I've\nhacked hibernate a bit to use ANY+ARRAY, and it seems to work a lot\nbetter than using \"IN\".\n\nHowever, I am a bit uncertain:\n- Is it safe to use ANY(ARRAY(select ...)) when I know the sub-query\nwill only return a small amount (0-100s) of rows?\n- Shouldn't the optimizer be a bit smarter avoiding optimizing this\ncase in the first place, instead of bailing out later? Should I file a\nbug-report about this problem?\n\nThank you in advance, Clemens\n",
"msg_date": "Tue, 1 May 2012 16:34:10 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "Clemens Eisserer <[email protected]> writes:\n> Quite often Hibernate ends up generating queries with a lot of joins\n> which usually works well, except for queries which load some\n> additional data based on a previous query (SUBSELECT collections),\n> which look like:\n\n> select ..... from table1 ... left outer join table 15 .... WHERE\n> table1.id IN (select id .... join table16 ... join table20 WHERE\n> table20.somevalue=?)\n\n> Starting with some amount of joins, the optimizer starts to do quite\n> suboptimal things like hash-joining huge tables where selctivity would\n> very low.\n> I already raised join_collapse_limit and from_collapse_limit, but\n> after a certain point query planning starts to become very expensive.\n\nWhat PG version are we talking about here?\n\n> However, when using \" =ANY(ARRAY(select ...))\" instead of \"IN\" the\n> planner seems to do a lot better, most likely because it treats the\n> subquery as a black-box that needs to be executed independently. I've\n> hacked hibernate a bit to use ANY+ARRAY, and it seems to work a lot\n> better than using \"IN\".\n\nThat doesn't sound like a tremendously good idea to me. But with\nso few details, it's hard to comment intelligently. Can you provide\na concrete test case?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 May 2012 10:43:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN? "
},
{
"msg_contents": "Hi Tom,\n\nThanks for your reply.\n\n> What PG version are we talking about here?\nFor development I use 9.1.3, on the production server is 8.4.7 -\nhappens with both cases.\n\n> That doesn't sound like a tremendously good idea to me.\nCould you elaborate on the downsides of this approach a bit?\n\n> But with\n> so few details, it's hard to comment intelligently.\n> Can you provide a concrete test case?\n\nA self contained testcase would take some time to create (and list\nmembers willing to configure and run), so I hope a query as well as an\nexplain-analyze run will provide more information (done with 9.1.3):\nhttp://pastebin.com/BGRdAPg2\n\nIts kind of the worst-worst case which I will improve later (way too\nmuch relations loaded through join-fetching), but its quite a good way\nto show the issue. Replacing the IN with a ANY(ARRAY()) already yields\na way better plan.\n\nThank you in advance, Clemens\n",
"msg_date": "Tue, 1 May 2012 17:04:57 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "Hi again,\n\n>> That doesn't sound like a tremendously good idea to me.\n> Could you elaborate on the downsides of this approach a bit?\n\nAny other thoughts about the pro/cons replacing IN(subquery) with\n=ANY(ARRAY(subquery))?\nAre there patological cases, except when the subquery returns a huge\namount of rows?\n\nShould Ifile a bug-report about the optimizer trying too hard to\ncollapse the subquery and therefor generating a bad plan?\n\nThank you in advance, Clemens\n",
"msg_date": "Fri, 4 May 2012 15:19:06 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "Hi,\n\nI would be really grateful for feedback regardding this issue. Tom?\n\nShould Ifile a bug-report about the optimizer trying too hard to\ncollapse the subquery and therefor generating a bad plan?\nIts my understanding that a IN shouldn't perform any worse than ANY on\nan ARRAY, right?\n\nThank you in advance, Clemens\n",
"msg_date": "Wed, 9 May 2012 15:27:39 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "On Tue, May 01, 2012 at 04:34:10PM +0200, Clemens Eisserer wrote:\n> Quite often Hibernate ends up generating queries with a lot of joins\n> which usually works well, except for queries which load some\n> additional data based on a previous query (SUBSELECT collections),\n> which look like:\n> \n> select ..... from table1 ... left outer join table 15 .... WHERE\n> table1.id IN (select id .... join table16 ... join table20 WHERE\n> table20.somevalue=?)\n> \n> Starting with some amount of joins, the optimizer starts to do quite\n> suboptimal things like hash-joining huge tables where selctivity would\n> very low.\n> I already raised join_collapse_limit and from_collapse_limit, but\n> after a certain point query planning starts to become very expensive.\n\nSince you have 15+ tables at the top level, the genetic query optimizer should\nbe kicking in and delivering a plan in reasonable time, albeit with plan\nquality hazards. There's a danger zone when the deterministic planner is\nstill in effect but {from,join}_collapse_limit have limited the scope of its\ninvestigation. If you're in that zone and have not hand-tailored your\nexplicit join order, poor plans are unsurprising. What exact configuration\nchanges are you using?\n\nhttp://wiki.postgresql.org/wiki/Server_Configuration\n\n> However, when using \" =ANY(ARRAY(select ...))\" instead of \"IN\" the\n> planner seems to do a lot better, most likely because it treats the\n> subquery as a black-box that needs to be executed independently. I've\n> hacked hibernate a bit to use ANY+ARRAY, and it seems to work a lot\n> better than using \"IN\".\n\nI have also used that transformation to get better plans. It can help for the\nreason you say. Specifically, fewer and different plan types are available\nfor the ANY(ARRAY(...)) case, and the row count estimate from the inner\nsubquery does not propagate upward as it does with IN (SELECT ...).\n\n> However, I am a bit uncertain:\n> - Is it safe to use ANY(ARRAY(select ...)) when I know the sub-query\n> will only return a small amount (0-100s) of rows?\n\nHundreds of rows, no. 
Consider this example:\n\nCREATE TABLE t (c) AS SELECT * FROM generate_series(1,1000000);\nANALYZE t;\n\\set n 500\nEXPLAIN ANALYZE SELECT * FROM t WHERE c IN (SELECT c FROM t WHERE c <= :n);\nEXPLAIN ANALYZE SELECT * FROM t WHERE c = ANY (ARRAY(SELECT c FROM t WHERE c <= :n));\n\nIN(...):\n Hash Semi Join (cost=16931.12..33986.58 rows=490 width=4) (actual time=582.421..2200.322 rows=500 loops=1)\n Hash Cond: (public.t.c = public.t.c)\n -> Seq Scan on t (cost=0.00..14425.00 rows=1000000 width=4) (actual time=0.093..785.330 rows=1000000 loops=1)\n -> Hash (cost=16925.00..16925.00 rows=490 width=4) (actual time=582.289..582.289 rows=500 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 18kB\n -> Seq Scan on t (cost=0.00..16925.00 rows=490 width=4) (actual time=0.026..581.766 rows=500 loops=1)\n Filter: (c <= 500)\n Rows Removed by Filter: 999500\n Total runtime: 2200.767 ms\n\nANY(ARRAY(...)):\n Seq Scan on t (cost=16925.00..43850.00 rows=10 width=4) (actual time=305.543..11748.014 rows=500 loops=1)\n Filter: (c = ANY ($0))\n Rows Removed by Filter: 999500\n InitPlan 1 (returns $0)\n -> Seq Scan on t (cost=0.00..16925.00 rows=490 width=4) (actual time=0.012..304.748 rows=500 loops=1)\n Filter: (c <= 500)\n Rows Removed by Filter: 999500\n Total runtime: 11748.348 ms\n\nNote also the difference in output row estimates; that doesn't affect planning\nhere, but it could matter a lot if this snippet became part of a larger query.\n\nCut \"n\" to 5, though, and the ANY plan beats the IN plan at 800ms vs. 2400ms.\n(Exact timing figures are fairly unstable on this particular test.) It\nappears that, unsurprisingly, evaluating a short filter list is cheaper than\nprobing a hash table.\n\n> - Shouldn't the optimizer be a bit smarter avoiding optimizing this\n> case in the first place, instead of bailing out later? Should I file a\n> bug-report about this problem?\n\nFiling a bug report with the content you've already posted would not add much,\nbut a self-contained test case could prove useful. Many of the deficiencies\nthat can make ANY(ARRAY(...)) win do represent unimplemented planner\nintelligence more than bugs.\n\nIncidentally, you can isolate whether ANY(ARRAY(...))'s advantage comes solely\nfrom suppressing the subquery collapse. Keep \"IN\" but tack \"OFFSET 0\" onto\nthe subquery. If this gives the same performance as ANY(ARRAY(...)), then the\nsubquery-collapse suppression was indeed the source of advantage.\n\nnm\n",
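Using the same test table, the check Noah describes in the last paragraph would look roughly like this:

-- keep IN, but add OFFSET 0 so the subquery is not collapsed into the outer query
EXPLAIN ANALYZE
SELECT * FROM t WHERE c IN (SELECT c FROM t WHERE c <= 500 OFFSET 0);

If this performs like the ANY(ARRAY(...)) variant, the benefit came from suppressing the subquery collapse rather than from the array form itself.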
"msg_date": "Thu, 10 May 2012 04:20:39 -0400",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "Hello Noah,\n\nThanks a lot for your feedback and explanations.\n\n> Since you have 15+ tables at the top level, the genetic query optimizer should\n> be kicking in and delivering a plan in reasonable time, albeit with plan\n> quality hazards. There's a danger zone when the deterministic planner is\n> still in effect but {from,join}_collapse_limit have limited the scope of its\n> investigation. If you're in that zone and have not hand-tailored your\n> explicit join order, poor plans are unsurprising. What exact configuration\n> changes are you using?\n\nBasically only the changes, suggested here a year ago, which made the\nproblem go away for less complex queries:\n\ngeqo_threshold = 20\nfrom_collapse_limit = 13\njoin_collapse_limit = 13\n\n\n> Hundreds of rows, no. Consider this example:\n> IN(...):\n> Total runtime: 2200.767 ms\n>\n> ANY(ARRAY(...)):\n> Total runtime: 11748.348 ms\n\nIn case there is an index on C, the resulting index scan is, even with\n1000 elements, 3 times faster on my Notebook.\nHowever, both queries execute in next-to-no time (15 vs 5ms).\n\n> Filing a bug report with the content you've already posted would not add much,\n> but a self-contained test case could prove useful. Many of the deficiencies\n> that can make ANY(ARRAY(...)) win do represent unimplemented planner\n> intelligence more than bugs.\n>\n> Incidentally, you can isolate whether ANY(ARRAY(...))'s advantage comes solely\n> from suppressing the subquery collapse. Keep \"IN\" but tack \"OFFSET 0\" onto\n> the subquery. If this gives the same performance as ANY(ARRAY(...)), then the\n> subquery-collapse suppression was indeed the source of advantage.\n\nI see your point, some dumb logic to replace IN with ANY(ARRAY\nwouldn't always yield better results.\nI'll try to come up with a self-containing testcase.\n\nThanks again, Clemens\n",
"msg_date": "Sun, 13 May 2012 16:35:30 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "On Tue, May 01, 2012 at 04:34:10PM +0200, Clemens Eisserer wrote:\n> select ..... from table1 ... left outer join table 15 .... WHERE\n> table1.id IN (select id .... join table16 ... join table20 WHERE\n> table20.somevalue=?)\n> \n> Starting with some amount of joins, the optimizer starts to do quite\n> suboptimal things like hash-joining huge tables where selctivity would\n> very low.\n> I already raised join_collapse_limit and from_collapse_limit, but\n> after a certain point query planning starts to become very expensive.\n\nOn Sun, May 13, 2012 at 04:35:30PM +0200, Clemens Eisserer wrote:\n> > Since you have 15+ tables at the top level, the genetic query optimizer should\n> > be kicking in and delivering a plan in reasonable time, albeit with plan\n> > quality hazards. ??There's a danger zone when the deterministic planner is\n> > still in effect but {from,join}_collapse_limit have limited the scope of its\n> > investigation. ??If you're in that zone and have not hand-tailored your\n> > explicit join order, poor plans are unsurprising. ??What exact configuration\n> > changes are you using?\n> \n> Basically only the changes, suggested here a year ago, which made the\n> problem go away for less complex queries:\n> \n> geqo_threshold = 20\n> from_collapse_limit = 13\n> join_collapse_limit = 13\n\nGiven those settings and the query above, the planner will break the 15\ntop-level tables into lists of 13 and 2 tables, the 20 subquery tables into\nlists of 13 and 7 tables. The split points arise from order of appearance in\nthe query text. The planner then optimizes join order within each list but\nnot across lists. That perfectly explains your observation of \"hash-joining\nhuge tables where selctivity would very low\".\n\nIf it were me, I would try two things. First, set from_collapse_limit = 100,\njoin_collapse_limit = 100, geqo_threshold = 8. This will let the planner\nconsider all join orders for the 35 tables; it will use the genetic query\noptimizer to choose one. See if the plan time and resulting plan are decent.\n\nSecond, set from_collapse_limit = 100, join_collapse_limit = 100, geqo = off\nand EXPLAIN the query. This will take forever and might exhaust all system\nmemory. If it does complete, you'll see the standard planner's opinion of an\noptimal join order. You can then modify the textual join order in your query\nto get the same plan despite returning to a lower join_collapse_limit. Since\nHibernate is generating your queries, that may prove inconvenient. It's the\nremaining escape hatch if the genetic query optimizer does not suffice.\n",
"msg_date": "Sun, 13 May 2012 18:00:53 -0400",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "I've encoutered similar issues myself (with UNION so far), so I tried\nto build a simple test case, which may or may not cover Clemens's\ncase.\n\nTest case 1 and 2 illustrates the issue, and case 3-9 are variations.\n\nMy observation: Looks like the optimizer cannot be close friends with\nboth UNION and IN/JOIN at the same time.\nActually - it looks like the UNION SELECT kids don't wanna share the\nIN/JOIN toy we gave them, but are happy when they get their own toys\nto play with ;)\n\n\nDROP TABLE IF EXISTS table1;\nCREATE TABLE table1 AS SELECT i AS id FROM generate_series(1, 300000)\nS(i);\nCREATE INDEX ON table1(id);\nANALYZE table1;\n\n\n-- Test 1: Slow. IN()\nSELECT * FROM (\n SELECT * FROM table1\n UNION\n SELECT * FROM table1\n) Q WHERE id IN (SELECT id FROM table1 LIMIT 10);\n\n\n-- Test 2: Fast. ANY(ARRAY())\nSELECT * FROM (\n SELECT * FROM table1\n UNION\n SELECT * FROM table1\n) Q WHERE id = ANY(ARRAY(SELECT id FROM table1 LIMIT 10));\n\n\n\n\n\n\n-- Test 3: Fast. Duplicate IN. Symptom fix? Or would you call it a\n\"better\" query in terms of sql? -except for the unnecessary subquery,\nwhich I kept for readability.\nSELECT * FROM (\n\tSELECT * FROM table1\n\tWHERE id IN (SELECT id FROM table1 LIMIT 10)\n\tUNION\n\tSELECT * FROM table1\n\tWHERE id IN (SELECT id FROM table1 LIMIT 10)\n) Q;\n\n\n-- Test 4: Fast. Duplicate JOIN CTE.\nWITH id_list AS (SELECT id FROM table1 LIMIT 10)\nSELECT * FROM (\n SELECT * FROM table1 JOIN id_list USING(id)\n UNION\n SELECT * FROM table1 JOIN id_list USING(id)\n) Q;\n\n\n-- Test 5: Slow. IN(CTE)\nWITH id_list AS (SELECT id FROM table1 LIMIT 10)\nSELECT * FROM (\n SELECT * FROM table1\n UNION\n SELECT * FROM table1\n) Q WHERE id IN (SELECT * FROM id_list);\n\n\n-- Test 6: Slow. IN(explicit id list)\nSELECT * FROM (\n SELECT * FROM table1\n UNION\n SELECT * FROM table1\n) Q WHERE id IN (SELECT\nUNNEST('{100001,100002,100003,100004,100005,100006,100007,100008,100009,10010}'::BIGINT[] )\nAS id);\n\n\n-- Test 7: Slow. IN(UNNEST(ARRAY())\nSELECT * FROM (\n SELECT * FROM table1\n UNION\n SELECT * FROM table1\n) Q WHERE id IN (SELECT UNNEST(ARRAY(SELECT id FROM table1 LIMIT 10))\nAS id);\n\n\n-- Test 8: Slow. JOIN CTE\nWITH id_list AS (SELECT id FROM table1 LIMIT 10)\nSELECT * FROM (\n SELECT * FROM table1\n UNION\n SELECT * FROM table1\n) Q JOIN id_list USING(id);\n\n\n-- Test 9: Fast. JOIN CTE + UNION ALL/DISTINCT (not quite the same\nquery)\nWITH id_list AS (SELECT id FROM table1 LIMIT 10)\nSELECT DISTINCT * FROM (\n SELECT * FROM table1\n UNION ALL\n SELECT * FROM table1\n) Q JOIN id_list USING(id);\n\n\n--\nGeir Bostad\n9.1.3(x64,win)\n",
"msg_date": "Wed, 23 May 2012 05:07:04 -0700 (PDT)",
"msg_from": "geirB <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
},
{
"msg_contents": "Ah, forgot one query:\nWHERE IN is of course fast when we supply id's directly, but not when\nthey are wrapped as array and UNNEST'ed in query 6. (previous post\nfrom me)\n\n-- Test 6b: Fast. WHERE IN(explicit id list)\nSELECT * FROM (\n SELECT * FROM table1\n UNION\n SELECT * FROM table1\n) Q WHERE id IN\n(100001,100002,100003,100004,100005,100006,100007,100008,100009,10010);\n\n--\nGeir Bostad\n9.1.3(x64,win)\n\n",
"msg_date": "Wed, 23 May 2012 05:23:01 -0700 (PDT)",
"msg_from": "geirB <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any disadvantages of using =ANY(ARRAY()) instead of IN?"
}
] |
[
{
"msg_contents": "Hi,\n\nwe want to see if we can gain better performance with our postgresql\ndatabase. In the last year the amount of data growed from ~25G to now\n~140G and we're currently developing a new feature that needs to get\ndata faster from the database. The system is both read and write heavy.\n\nAt first I want to give you an overview over the hardware, software and\nconfiguration and the changes that I see we could check out. I'd be very\nhappy if you could review and tell if the one or the other is nonsense.\n\nHardware:\n- CPU: 4x4 Cores Intel Xeon L5630 @ 2.13GHz\n- RAM: 64GB\n- RAID 1 (1+0) HP Company Smart Array G6 controllers, P410i\n (I don't know the actual number of discs)\n- A single partition for data and wal-files\n\nSoftware\n- RHEL 6, Kernel 2.6.32-220.4.1.el6.x86_64\n- postgresql90-server-9.0.6-1PGDG.rhel6.x86_64\n\nConfiguration (selected from settings)\n------------------------------+-----------+--------+-------------------\n name | setting | unit | source\n------------------------------+-----------+--------+-------------------\n autovacuum | on | [NULL] | configuration file\n checkpoint_completion_target | 0.5 | [NULL] | default\n checkpoint_segments | 16 | | configuration file\n checkpoint_timeout | 300 | s | default\n commit_delay | 0 | | default\n default_statistics_target | 100 | | default\n effective_cache_size | 16384 | 8kB | default\n fsync | on | [NULL] | default\n log_min_duration_statement | 250 | ms | configuration file\n log_temp_files | -1 | kB | default\n maintenance_work_mem | 16384 | kB | default\n max_connections | 2000 | | configuration file\n random_page_cost | 4 | [NULL] | default\n shared_buffers | 1310720 | 8kB | configuration file\n synchronous_commit | on | [NULL] | default\n wal_buffers | 256 | 8kB | configuration file\n wal_sync_method | fdatasync | [NULL] | default\n wal_writer_delay | 200 | ms | default\n work_mem | 1024 | kB | default\n------------------------------+-----------+--------+-------------------\n\nSome stats:\n$ free -m\n total used free shared buffers cached\nMem: 64413 63764 649 0 37 60577\n-/+ buffers/cache: 3148 61264\nSwap: 8191 333 7858\n\niostat shows nearly all the time ~100% io utilization of the disc\nserving the pg data / wal files.\n\nI'd suggest the following changes:\n\n(Improve query planning)\n1) Increase effective_cache_size to 48GB\n2) Increase work_mem to 10MB (alternatively first activate\nlog_temp_files to see if this is really needed\n3) Reduce random_page_cost to 1\n\n(WAL / I/O)\n4) Set synchronous_commit=off\n5) Increase checkpoint_segments to 32\n6) Increase wal_buffers to 16M\n7) Add new discs (RAID) for wal files / pg_xlog\n\n(Misc)\n8) Increase maintainance_work_mem to 1GB\n\nIn parallel I'd review statistics like long running queries, index usage\n(which ones can be dropped) etc.\n\nAt first I'd like to try out 1) to 3) as they affect the query planner,\nso that some indices that are not used right now might be used then.\n\nAfter this change I'd review index usage and clean up those / improve\nqueries.\n\nThen, finally I'd test WAL / I/O related changes.\n\nDo you think this makes sense? Do you see other improvements, or do you\nneed some more information?\n\nThanx in advance,\ncheers,\nMartin",
"msg_date": "Wed, 02 May 2012 15:19:00 +0200",
"msg_from": "Martin Grotzke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Several optimization options (config/hardware)"
},
{
"msg_contents": "Hi,\n\nOn 2 Květen 2012, 15:19, Martin Grotzke wrote:\n> Hi,\n>\n> we want to see if we can gain better performance with our postgresql\n> database. In the last year the amount of data growed from ~25G to now\n> ~140G and we're currently developing a new feature that needs to get\n> data faster from the database. The system is both read and write heavy.\n\nWhat does the read/write heavy mean? How much data / transactions you need\nto handle, how many clients, etc.?\n\n> At first I want to give you an overview over the hardware, software and\n> configuration and the changes that I see we could check out. I'd be very\n> happy if you could review and tell if the one or the other is nonsense.\n>\n> Hardware:\n> - CPU: 4x4 Cores Intel Xeon L5630 @ 2.13GHz\n> - RAM: 64GB\n> - RAID 1 (1+0) HP Company Smart Array G6 controllers, P410i\n> (I don't know the actual number of discs)\n> - A single partition for data and wal-files\n\nHave you done any benchmarks with that hardware, to verify the\nperformance? Can you do that now (i.e. stopping the database so that you\ncan run them)?\n\n>\n> Software\n> - RHEL 6, Kernel 2.6.32-220.4.1.el6.x86_64\n> - postgresql90-server-9.0.6-1PGDG.rhel6.x86_64\n>\n> Configuration (selected from settings)\n> ------------------------------+-----------+--------+-------------------\n> name | setting | unit | source\n> ------------------------------+-----------+--------+-------------------\n> autovacuum | on | [NULL] | configuration file\n> checkpoint_completion_target | 0.5 | [NULL] | default\n> checkpoint_segments | 16 | | configuration file\n> checkpoint_timeout | 300 | s | default\n> commit_delay | 0 | | default\n> default_statistics_target | 100 | | default\n> effective_cache_size | 16384 | 8kB | default\n> fsync | on | [NULL] | default\n> log_min_duration_statement | 250 | ms | configuration file\n> log_temp_files | -1 | kB | default\n> maintenance_work_mem | 16384 | kB | default\n> max_connections | 2000 | | configuration file\n> random_page_cost | 4 | [NULL] | default\n> shared_buffers | 1310720 | 8kB | configuration file\n> synchronous_commit | on | [NULL] | default\n> wal_buffers | 256 | 8kB | configuration file\n> wal_sync_method | fdatasync | [NULL] | default\n> wal_writer_delay | 200 | ms | default\n> work_mem | 1024 | kB | default\n> ------------------------------+-----------+--------+-------------------\n>\n> Some stats:\n> $ free -m\n> total used free shared buffers cached\n> Mem: 64413 63764 649 0 37 60577\n> -/+ buffers/cache: 3148 61264\n> Swap: 8191 333 7858\n>\n> iostat shows nearly all the time ~100% io utilization of the disc\n> serving the pg data / wal files.\n\nThat's rather useless value, especially if you don't know details about\nthe RAID array. With multiple spindles, the array may be 100% utilized\n(ratio of time it spent servicing requests) yet it may absorb more.\nImagine a RAID with 2 drives, each 50% utilized. 
The array may report 100%\nutilization yet it's actually 50% utilized ...\n\n>\n> I'd suggest the following changes:\n>\n> (Improve query planning)\n> 1) Increase effective_cache_size to 48GB\n> 2) Increase work_mem to 10MB (alternatively first activate\n> log_temp_files to see if this is really needed\n> 3) Reduce random_page_cost to 1\n>\n> (WAL / I/O)\n> 4) Set synchronous_commit=off\n> 5) Increase checkpoint_segments to 32\n> 6) Increase wal_buffers to 16M\n> 7) Add new discs (RAID) for wal files / pg_xlog\n>\n> (Misc)\n> 8) Increase maintainance_work_mem to 1GB\n>\n> In parallel I'd review statistics like long running queries, index usage\n> (which ones can be dropped) etc.\n\nReviewing long-running stats queries is a good starting point - you need\nto find out where the bottleneck is (I/O, CPU, ...) and this may be\nhelpful.\n\nDropping unused indexes is quite difficult - most of the time I see the\ncase with multiple similar indexes, all of them are used but it's possible\nto remove some of them with minimal performance impact.\n\n> At first I'd like to try out 1) to 3) as they affect the query planner,\n> so that some indices that are not used right now might be used then.\n\nIf you don't know where the issue is, it's difficult to give any advices.\nBut in general, I'd say this\n\n1) setting effective_cache_size to 48G - seems like a good idea, better\nmatch for your environment\n\n2) increasing work_mem - might help, but you should check the slow queries\nfirst (enabling log_temp_files is a good idea)\n\n3) setting random_page_cost is a really bad idea IMHO, especially with\nspinners, rather weak controller and unknown details about the array\n\nSo do (1), maybe (2) and I'd definitely vote against (3).\n\nRegarding the other options:\n\n4) synchronous_commit=off - well, this may improve the performance, but it\nwon't fix the underlying issues and it may introduce other\napplication-level issues (expecting the transaction to be committed etc.)\n\n5) Increase checkpoint_segments to 32 - Do you see a lot of\ncheckpoint-related warnings in the log? If not, this probably won't fix\nanything. If you actually do have issues with checkpoints, I'd recommend\nincreasing the default checkpoint timeout (eg. to 30 minutes),\nsignificantly increasing the number of segments (e.g. to 64 or more) and\ntuning the completion target (e.g. to 0.9).\n\n6) Increase wal_buffers to 16M - may help, but I would not expect a\ntremendous improvement.\n\n7) Add new discs (RAID) for wal files / pg_xlog - good idea, moving those\nto a separate spindles may help a lot.\n\n> After this change I'd review index usage and clean up those / improve\n> queries.\n>\n> Then, finally I'd test WAL / I/O related changes.\n\nWhy do you want to do this last? Chances are that writes are causing many\nof the I/O issues (because it needs to actually fsync the data). Tuning\nthis will improve the general I/O performance etc.\n\n> Do you think this makes sense? Do you see other improvements, or do you\n> need some more information?\n\nFirst of all, find out more about the RAID array. Do some basic I/O tests\n(with dd etc.).\n\nMoreover, I've noticed you do have max_connections=2000. That's insanely\nhigh in most cases, unless you're using commit_delay/commit_siblings. A\nreasonable value is usually something like \"num of cpus + num of drives\"\nalthough that's just a rough estimate. But given that you have 16 cores,\nI'd expect ~100 or something like that. If you need more, I'd recommend a\npooler (e.g. pgpool).\n\nTomas\n\n",
"msg_date": "Wed, 2 May 2012 16:57:10 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Several optimization options (config/hardware)"
},
{
"msg_contents": "Martin Grotzke wrote:\n> we want to see if we can gain better performance with our postgresql\n> database. In the last year the amount of data growed from ~25G to now\n> ~140G and we're currently developing a new feature that needs to get\n> data faster from the database. The system is both read and write\nheavy.\n> \n> At first I want to give you an overview over the hardware, software\nand\n> configuration and the changes that I see we could check out. I'd be\nvery\n> happy if you could review and tell if the one or the other is\nnonsense.\n> \n> Hardware:\n> - CPU: 4x4 Cores Intel Xeon L5630 @ 2.13GHz\n> - RAM: 64GB\n> - RAID 1 (1+0) HP Company Smart Array G6 controllers, P410i\n> (I don't know the actual number of discs)\n> - A single partition for data and wal-files\n> \n> Software\n> - RHEL 6, Kernel 2.6.32-220.4.1.el6.x86_64\n> - postgresql90-server-9.0.6-1PGDG.rhel6.x86_64\n\nYou could try different kernel I/O elevators and see if that improves\nsomething.\n\nI have made good experiences with elevator=deadline and elevator=noop.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Thu, 3 May 2012 09:26:40 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Several optimization options (config/hardware)"
},
{
"msg_contents": "Hi Laurenz,\n\nOn 05/03/2012 09:26 AM, Albe Laurenz wrote:\n> Martin Grotzke wrote:\n>> we want to see if we can gain better performance with our postgresql\n>> database. In the last year the amount of data growed from ~25G to now\n>> ~140G and we're currently developing a new feature that needs to get\n>> data faster from the database. The system is both read and write\n> heavy.\n>>\n>> At first I want to give you an overview over the hardware, software\n> and\n>> configuration and the changes that I see we could check out. I'd be\n> very\n>> happy if you could review and tell if the one or the other is\n> nonsense.\n>>\n>> Hardware:\n>> - CPU: 4x4 Cores Intel Xeon L5630 @ 2.13GHz\n>> - RAM: 64GB\n>> - RAID 1 (1+0) HP Company Smart Array G6 controllers, P410i\n>> (I don't know the actual number of discs)\n>> - A single partition for data and wal-files\n>>\n>> Software\n>> - RHEL 6, Kernel 2.6.32-220.4.1.el6.x86_64\n>> - postgresql90-server-9.0.6-1PGDG.rhel6.x86_64\n> \n> You could try different kernel I/O elevators and see if that improves\n> something.\n> \n> I have made good experiences with elevator=deadline and elevator=noop.\nOk, great info.\n\nI'm not sure at which device to look honestly to check the current\nconfiguration.\n\nmount/fstab shows the device /dev/mapper/VG01-www for the relevant\npartition. When I check iostat high utilization is reported for the\ndevices dm-4 and sda (showing nearly the same numbers for util always),\nso I suspect that dm-4 is mapped on sda.\n\nThis is the current config:\n$ cat /sys/block/sda/queue/scheduler\nnoop anticipatory deadline [cfq]\n$ cat /sys/block/dm-4/queue/scheduler\nnone\n\nWhich of them should be changed?\nI'll discuss this also with our hosting provider next week, he'll know\nwhat has to be done.\n\nCheers,\nMartin",
"msg_date": "Thu, 03 May 2012 15:01:56 +0200",
"msg_from": "Martin Grotzke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Several optimization options (config/hardware)"
},
{
"msg_contents": "Martin Grotzke wrote:\n>> You could try different kernel I/O elevators and see if that improves\n>> something.\n>>\n>> I have made good experiences with elevator=deadline and\nelevator=noop.\n\n> Ok, great info.\n> \n> I'm not sure at which device to look honestly to check the current\n> configuration.\n> \n> mount/fstab shows the device /dev/mapper/VG01-www for the relevant\n> partition. When I check iostat high utilization is reported for the\n> devices dm-4 and sda (showing nearly the same numbers for util\nalways),\n> so I suspect that dm-4 is mapped on sda.\n\nUse the option -N of \"iostat\" to see long device names.\nYou can use \"lvm\" to figure out the mapping.\n\n> This is the current config:\n> $ cat /sys/block/sda/queue/scheduler\n> noop anticipatory deadline [cfq]\n> $ cat /sys/block/dm-4/queue/scheduler\n> none\n\nDo you mean literal \"none\" or do you mean that the file is empty?\n\n> Which of them should be changed?\n> I'll discuss this also with our hosting provider next week, he'll know\n> what has to be done.\n\nI'd just add \"elevator=deadline\" to the kernel line in /etc/grub.conf\nand reboot. At least if it is a dedicated database machine.\n\nBut of course you want to change it on the fly first to test - not\nknowing\nthe answer to your question, I would change it in both devices if I can.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Fri, 4 May 2012 09:57:20 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Several optimization options (config/hardware)"
},
{
"msg_contents": "On 05/04/2012 09:57 AM, Albe Laurenz wrote:\n> Martin Grotzke wrote:\n>>> You could try different kernel I/O elevators and see if that\n>>> improves something.\n>>> \n>>> I have made good experiences with elevator=deadline and\n>>> elevator=noop.\n> \n>> Ok, great info.\n>> \n>> I'm not sure at which device to look honestly to check the current \n>> configuration.\n>> \n>> mount/fstab shows the device /dev/mapper/VG01-www for the relevant \n>> partition. When I check iostat high utilization is reported for\n>> the devices dm-4 and sda (showing nearly the same numbers for util\n>> always),\n>> so I suspect that dm-4 is mapped on sda.\n> \n> Use the option -N of \"iostat\" to see long device names. You can use\n> \"lvm\" to figure out the mapping.\niostat with -N shows VG01-www for dm-4. For lvm/lvdisplay/dmsetup I get\n\"Permission denied\" as I have no root/sudo permissions. I need to check\nthis with our hosting provider (hopefully we have a call today).\n\n\n>> This is the current config: $ cat /sys/block/sda/queue/scheduler \n>> noop anticipatory deadline [cfq] $ cat\n>> /sys/block/dm-4/queue/scheduler none\n> \n> Do you mean literal \"none\" or do you mean that the file is empty?\n\"none\" was the output of `cat /sys/block/dm-4/queue/scheduler`.\n\n\n>> Which of them should be changed? I'll discuss this also with our\n>> hosting provider next week, he'll know what has to be done.\n> \n> I'd just add \"elevator=deadline\" to the kernel line in\n> /etc/grub.conf and reboot. At least if it is a dedicated database\n> machine.\n> \n> But of course you want to change it on the fly first to test - not \n> knowing the answer to your question, I would change it in both\n> devices if I can.\nOk, makes sense.\n\nCheers,\nMartin",
"msg_date": "Mon, 07 May 2012 11:37:05 +0200",
"msg_from": "Martin Grotzke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Several optimization options (config/hardware)"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm fairly new to PostgreSQL query tuning, so please forgive me if I've overlooked something obvious.\n\nI have a query which is running slowly, and the query plan shows an unexpected sequence scan where I'd have expected the planner to use an index. Setting enable_seqscan=off causes the planner to use the index as expected. The types of the field that the join where the index should be used do indeed match (int4) - I've read that's the usual reason for an index not being used. I've tried rebuilding the twitter_tweet_transmission_id index, and have re-ANALYSE'd the table.\n\nI can use the enable_seqscan=off workaround for now, but I'd be interested in knowing why the index isn't being used.\n\nProduction is PostgreSQL 8.4 on Ubuntu 10.04 LTS, 16G RAM. I get a broadly similar query plan both on a local dev machine running PG 9.1 with 8G RAM and single SATA drive, and on the production server with 16G RAM and a RAID10 array (some of the costs etc. are different, but the use or absence of the index is common). I'm using a restored production database on dev, so data volumes are the same.\n\nHere's the query, table and index definitions, query plans, and server configs where the defaults have changed (query plans and config from the dev machine) - I've only really done rudimentary tuning with pgtune, so there are probably some howlers in the conf. Anyway:\n\nSELECT \"timecode_transmission\".\"id\",\n COUNT(\"twitter_tweet\".\"id\") AS \"tweet_count\"\nFROM \"timecode_transmission\"\nLEFT OUTER JOIN \"twitter_tweet\" ON (\"timecode_transmission\".\"id\" = \"twitter_tweet\".\"transmission_id\")\nWHERE \"timecode_transmission\".\"tx\" <= '2012-04-06 23:59:59'\n AND \"timecode_transmission\".\"tx\" >= '2012-04-06 00:00:00'\nGROUP BY \"timecode_transmission\".\"id\"\n\nThe twitter_tweet table has about 25m rows, as you'll see from the query plans.\n\nAny hints are appreciated.\n\nCheers,\nDan\n\n\nTable definitions:\n\nCREATE TABLE \"public\".\"twitter_tweet\" (\n\t\"id\" int8 NOT NULL DEFAULT nextval('twitter_tweet_id_seq'::regclass),\n\t\"twitter_id\" int8 NOT NULL,\n\t\"created_at\" timestamp(6) WITH TIME ZONE NOT NULL,\n\t\"from_user\" varchar(20) NOT NULL,\n\t\"text\" text NOT NULL,\n\t\"from_user_id\" int8 NOT NULL,\n\t\"sentiment\" float8 NOT NULL,\n\t\"transmission_id\" int4,\n\t\"from_user_name\" varchar(500),\n\t\"gender\" int2,\n\tCONSTRAINT \"twitter_tweet_pkey\" PRIMARY KEY (\"id\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"transmission_id_refs_id_23b9da6852fe9f37\" FOREIGN KEY (\"transmission_id\") REFERENCES \"public\".\"timecode_transmission\" (\"id\") ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED ,\n\tCONSTRAINT \"ck_gender_pstv_4bc0eb22f3ec191e\" CHECK ((gender >= 0)) NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"twitter_tweet_gender_check\" CHECK ((gender >= 0)) NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"twitter_tweet_klout_score_check\" CHECK ((klout_score >= 0)) NOT DEFERRABLE INITIALLY IMMEDIATE\n)\nWITH (OIDS=FALSE);\nCREATE INDEX \"twitter_tweet_created_at\" ON \"public\".\"twitter_tweet\" USING btree(created_at ASC NULLS LAST);\nCREATE INDEX \"twitter_tweet_transmission_id\" ON \"public\".\"twitter_tweet\" USING btree(transmission_id ASC NULLS LAST);\n\nCREATE TABLE \"public\".\"timecode_transmission\" (\n\t\"id\" int4 NOT NULL DEFAULT nextval('timecode_transmission_id_seq'::regclass),\n\t\"episode_id\" int4 NOT NULL,\n\t\"channel_id\" int4 NOT NULL,\n\t\"tx\" timestamp(6) WITH TIME ZONE NOT 
NULL,\n\t\"status\" varchar(100) NOT NULL,\n\t\"end\" timestamp(6) WITH TIME ZONE NOT NULL,\n\t\"duration\" int4,\n\t\"include_in_listings\" bool NOT NULL,\n\tCONSTRAINT \"timecode_transmission_pkey\" PRIMARY KEY (\"id\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"channel_id_refs_id_42fae8846ea37b15\" FOREIGN KEY (\"channel_id\") REFERENCES \"public\".\"timecode_channel\" (\"id\") ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED ,\n\tCONSTRAINT \"episode_id_refs_id_52ab388b54a13ff3\" FOREIGN KEY (\"episode_id\") REFERENCES \"public\".\"timecode_episode\" (\"id\") ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED ,\n\tCONSTRAINT \"timecode_transmission_tx_72eeb3dac42e185_uniq\" UNIQUE (\"tx\", \"channel_id\") NOT DEFERRABLE INITIALLY IMMEDIATE,\n\tCONSTRAINT \"timecode_transmission_duration_check\" CHECK ((duration >= 0)) NOT DEFERRABLE INITIALLY IMMEDIATE\n)\nWITH (OIDS=FALSE);\nCREATE INDEX \"timecode_transmission_channel_id\" ON \"public\".\"timecode_transmission\" USING btree(channel_id ASC NULLS LAST);\nCREATE INDEX \"timecode_transmission_episode_id\" ON \"public\".\"timecode_transmission\" USING btree(episode_id ASC NULLS LAST);\nCREATE INDEX \"timecode_transmission_status\" ON \"public\".\"timecode_transmission\" USING btree(status ASC NULLS LAST);\nCREATE INDEX \"timecode_transmission_tx\" ON \"public\".\"timecode_transmission\" USING btree(tx ASC NULLS LAST);\n\nQuery plans (PG 9.1, dev):\n\nWith enable_seqscan=on:\nHashAggregate (cost=3722291.37..3722301.78 rows=1041 width=12) (actual time=255056.070..255056.167 rows=1074 loops=1)\n -> Hash Right Join (cost=68.20..3721927.54 rows=72766 width=12) (actual time=229054.781..255005.491 rows=415193 loops=1)\n Hash Cond: (twitter_tweet.transmission_id = timecode_transmission.id)\n -> Seq Scan on twitter_tweet (cost=0.00..3628135.86 rows=24798886 width=12) (actual time=0.003..251157.607 rows=24799190 loops=1)\n -> Hash (cost=55.18..55.18 rows=1041 width=4) (actual time=0.922..0.922 rows=1074 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Index Scan using timecode_transmission_tx on timecode_transmission (cost=0.00..55.18 rows=1041 width=4) (actual time=0.023..0.659 rows=1074 loops=1)\n Index Cond: ((tx <= '2012-04-06 23:59:59+01'::timestamp with time zone) AND (tx >= '2012-04-06 00:00:00+01'::timestamp with time zone))\nTotal runtime: 255083.009 ms\n\nWith enable_seqscan=off:\nGroupAggregate (cost=0.00..5972744.82 rows=1041 width=12) (actual time=63.504..272.790 rows=1074 loops=1)\n -> Nested Loop Left Join (cost=0.00..5972370.58 rows=72766 width=12) (actual time=63.498..244.115 rows=415193 loops=1)\n -> Index Scan using timecode_transmission_pkey on timecode_transmission (cost=0.00..13520.56 rows=1041 width=4) (actual time=63.486..68.130 rows=1074 loops=1)\n Filter: ((tx <= '2012-04-06 23:59:59+01'::timestamp with time zone) AND (tx >= '2012-04-06 00:00:00+01'::timestamp with time zone))\n -> Index Scan using twitter_tweet_transmission_id on twitter_tweet (cost=0.00..5677.78 rows=3710 width=12) (actual time=0.002..0.130 rows=386 loops=1074)\n Index Cond: (timecode_transmission.id = transmission_id)\nTotal runtime: 272.871 ms\n\nQuery plans (PG 8.4, prod):\n\nWith enable_seqscan=on:\nHashAggregate (cost=4931753.82..4931766.82 rows=1040 width=12) (actual time=130720.665..130720.873 rows=1071 loops=1)\n -> Hash Left Join (cost=4205000.99..4931360.42 rows=78679 width=12) (actual time=126841.549..130645.028 rows=421286 loops=1)\n Hash Cond: (timecode_transmission.id = 
twitter_tweet.transmission_id)\n -> Index Scan using timecode_transmission_tx on timecode_transmission (cost=0.00..57.25 rows=1040 width=4) (actual time=8.730..19.873 rows=1071 loops=1)\n Index Cond: ((tx <= '2012-04-06 23:59:59+00'::timestamp with time zone) AND (tx >= '2012-04-06 00:00:00+00'::timestamp with time zone))\n -> Hash (cost=3736125.66..3736125.66 rows=26973466 width=12) (actual time=126824.846..126824.846 rows=23224197 loops=1)\n -> Seq Scan on twitter_tweet (cost=0.00..3736125.66 rows=26973466 width=12) (actual time=8.084..117210.707 rows=25165171 loops=1)\nTotal runtime: 130729.920 ms\n\nWith enable_seqscan=off;\nGroupAggregate (cost=0.00..6117044.05 rows=1040 width=12) (actual time=72.242..470.289 rows=1071 loops=1)\n -> Nested Loop Left Join (cost=0.00..6116637.65 rows=78681 width=12) (actual time=72.180..426.434 rows=421286 loops=1)\n -> Index Scan using timecode_transmission_pkey on timecode_transmission (cost=0.00..13610.82 rows=1040 width=4) (actual time=72.164..78.575 rows=1071 loops=1)\n Filter: ((tx <= '2012-04-06 23:59:59+00'::timestamp with time zone) AND (tx >= '2012-04-06 00:00:00+00'::timestamp with time zone))\n -> Index Scan using twitter_tweet_transmission_id on twitter_tweet (cost=0.00..5816.73 rows=4125 width=12) (actual time=0.003..0.277 rows=393 loops=1071)\n Index Cond: (timecode_transmission.id = twitter_tweet.transmission_id)\nTotal runtime: 470.421 ms\n\nNon-default configuration (dev, PG9.1):\ndefault_statistics_target = 100 # pgtune wizard 2012-01-11\nmaintenance_work_mem = 400MB # pgtune wizard 2012-01-11\nconstraint_exclusion = partition # pgtune wizard 2012-01-11\ncheckpoint_completion_target = 0.9 # pgtune wizard 2012-01-11\neffective_cache_size = 7552MB # pgtune wizard 2012-01-11\nwork_mem = 50MB # pgtune wizard 2012-01-11\nwal_buffers = 16MB # pgtune wizard 2012-01-11\ncheckpoint_segments = 16 # pgtune wizard 2012-01-11\nshared_buffers = 3840MB # pgtune wizard 2012-01-11\nmax_connections = 80 # pgtune wizard 2012-01-11\n\nNon-default configuration (prod, PG8.4)\nmaintenance_work_mem = 960MB\ncheckpoint_completion_target = 0.9\ncheckpoint_segments = 32\nwal_buffers = 16MB\neffective_cache_size = 11GB\nshared_buffers = 4GB\nmax_connections = 32\nwork_mem = 100MB\n\n\n--\nDan Fairs | [email protected] | www.fezconsulting.com\n",
"msg_date": "Fri, 4 May 2012 15:24:33 +0100",
"msg_from": "Dan Fairs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unexpected sequence scan"
},
{
"msg_contents": "Dan Fairs <[email protected]> wrote:\n \n> I have a query which is running slowly, and the query plan shows\n> an unexpected sequence scan where I'd have expected the planner to\n> use an index. \n \nLooking at the actual row counts compared to run time, it appears\nthat the active portion of your data set is heavily cached. In such\nan environment, I would add these lines to postgresql.conf, to\nbetter model costs:\n \nseq_page_cost = 0.1\nrandom_page_cost = 0.1 # or maybe slightly higher\ncpu_tuple_cost = 0.03\n \n-Kevin\n",
"msg_date": "Fri, 04 May 2012 09:40:32 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected sequence scan"
},
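A minimal, session-local way to try the cost settings suggested above before editing postgresql.conf is sketched below; it reuses the query from the original report, and the values are the suggested starting points rather than measured results. SET only affects the current session, so the change can be reverted immediately if the plan does not improve:

SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
SET cpu_tuple_cost = 0.03;

EXPLAIN ANALYZE
SELECT timecode_transmission.id,
       COUNT(twitter_tweet.id) AS tweet_count
FROM timecode_transmission
LEFT OUTER JOIN twitter_tweet
       ON (timecode_transmission.id = twitter_tweet.transmission_id)
WHERE timecode_transmission.tx <= '2012-04-06 23:59:59'
  AND timecode_transmission.tx >= '2012-04-06 00:00:00'
GROUP BY timecode_transmission.id;

-- revert the experiment if the nested-loop plan does not appear
RESET seq_page_cost;
RESET random_page_cost;
RESET cpu_tuple_cost;

If the faster plan shows up with these values, they can then be made permanent in postgresql.conf as described above.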
{
"msg_contents": "Dan Fairs <[email protected]> writes:\n\n> I have a query which is running slowly, and the query plan shows an\nunexpected sequence scan where I'd have expected the planner to use an\nindex. Setting enable_seqscan=off causes the planner to use the index as\nexpected.\n\nThat hashjoin plan doesn't look at all unreasonable to me. The fact\nthat it actually comes out a lot slower than the nestloop with inner\nindexscan suggests that you must be running with the large table\ncompletely cached in RAM. If that's the normal state of affairs for your\ndatabase, you should consider decreasing the random_page_cost setting\nso that the planner will plan appropriately.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 May 2012 10:43:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected sequence scan "
},
{
"msg_contents": "Hi Tom, Kevin,\n\n>> I have a query which is running slowly, and the query plan shows an\n> unexpected sequence scan where I'd have expected the planner to use an\n> index. Setting enable_seqscan=off causes the planner to use the index as\n> expected.\n> \n> That hashjoin plan doesn't look at all unreasonable to me. The fact\n> that it actually comes out a lot slower than the nestloop with inner\n> indexscan suggests that you must be running with the large table\n> completely cached in RAM. If that's the normal state of affairs for your\n> database, you should consider decreasing the random_page_cost setting\n> so that the planner will plan appropriately.\n> \n\nA very quick test of the settings that Kevin posted produce a much better plan and faster response to that query (at least on my dev machine) I'll read up more on those settings before changing production, but it looks good - thanks very much!\n\nCheers,\nDan\n--\nDan Fairs | [email protected] | www.fezconsulting.com\n\n\nHi Tom, Kevin,I have a query which is running slowly, and the query plan shows anunexpected sequence scan where I'd have expected the planner to use anindex. Setting enable_seqscan=off causes the planner to use the index asexpected.That hashjoin plan doesn't look at all unreasonable to me. The factthat it actually comes out a lot slower than the nestloop with innerindexscan suggests that you must be running with the large tablecompletely cached in RAM. If that's the normal state of affairs for yourdatabase, you should consider decreasing the random_page_cost settingso that the planner will plan appropriately.A very quick test of the settings that Kevin posted produce a much better plan and faster response to that query (at least on my dev machine) I'll read up more on those settings before changing production, but it looks good - thanks very much!Cheers,Dan\n--Dan Fairs | [email protected] | www.fezconsulting.com",
"msg_date": "Fri, 4 May 2012 15:56:12 +0100",
"msg_from": "Dan Fairs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected sequence scan"
}
] |
[
{
"msg_contents": "Hi,\nI'm seeing poor query performance using partitioned tables with check\nconstraints, seems like the plan is much worse than when querying the\nindividual partitions manually.\n\nselect version(); --> PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu,\ncompiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit\n\nuname -a --> Linux 2.6.32-5-amd64 #1 SMP Mon Oct 3 03:59:20 UTC 2011\nx86_64 GNU/Linux\n\n(postgresql.conf included at the end)\n\nOutput of EXPLAIN ANALYZE follows, but here are the tables in question:\n\nHere's the empty parent table:\n\n \\d+ ircevents\n\n Column | Type | Modifiers\n-----------+---------+------------------------\n buffer | integer | not null\n id | bigint | not null <---------------- microsecs since 1970-01-01\n type | text | not null\n highlight | boolean | not null default false\n json | text | not null\n...\nChild tables: ircevents_201008,\n ircevents_201009,\n ...\n ircevents_201211,\n ircevents_201212\n\nAnd one example child table (they are all the same apart from\nnon-overlapping check constraints):\n\n\\d+ ircevents_201204\n Table \"public.ircevents_201204\"\n Column | Type | Modifiers | Storage | Description\n-----------+---------+------------------------+----------+-------------\n buffer | integer | not null | plain |\n id | bigint | not null | plain |\n type | text | not null | extended |\n highlight | boolean | not null default false | plain |\n json | text | not null | extended |\nIndexes:\n \"ircevents_201204_idx\" UNIQUE, btree (buffer, id)\n \"ircevents_201204_highlight_idx\" btree (highlight) WHERE highlight\n= true\nCheck constraints:\n \"ircevents_201204_id_check\" CHECK (id >= (date_part('epoch'::text,\n'2012-04-01 00:00:00'::timestamp without time zone)::bigint * 1000000)\nAND id < (date_part('epoch'::text, '2012-05-01 00:00:00'::timestamp\nwithout time zone)::bigint * 1000000))\nInherits: ircevents\n\n\nThe tables experience heavy insert/select load for the month in\nquestion, then less selects after that. 
update/delete to these tables\nis very rare.\nThe ircevents_201204 table has ~200 million rows\n\nLet's use a 20-day range spanning only one month:\n\nircevents=# select date_part('epoch', '2012-04-02'::timestamp without\ntime zone)::bigint * 1000000;\n ?column?\n------------------\n 1333317600000000\n(1 row)\n\nircevents=# select date_part('epoch', '2012-04-22'::timestamp without\ntime zone)::bigint * 1000000;\n ?column?\n------------------\n 1335045600000000\n(1 row)\n\n\nThe next two queries are the crux of the problem for me:\n\n\nEXPLAIN ANALYZE\nSELECT id, type, json\nFROM ircevents\nWHERE\nbuffer = 116780 AND\nid BETWEEN 1333317600000000 AND 1335045600000000\nORDER BY id DESC limit 100;\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=80506.28..80506.53 rows=100 width=134) (actual\ntime=0.179..0.196 rows=40 loops=1)\n -> Sort (cost=80506.28..80558.01 rows=20692 width=134) (actual\ntime=0.178..0.185 rows=40 loops=1)\n Sort Key: public.ircevents.id\n Sort Method: quicksort Memory: 33kB\n -> Result (cost=0.00..79715.45 rows=20692 width=134)\n(actual time=0.039..0.121 rows=40 loops=1)\n -> Append (cost=0.00..79715.45 rows=20692 width=134)\n(actual time=0.037..0.111 rows=40 loops=1)\n -> Seq Scan on ircevents (cost=0.00..0.00\nrows=1 width=72) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((id >= 1333317600000000::bigint)\nAND (id <= 1335045600000000::bigint) AND (buffer = 116780))\n -> Bitmap Heap Scan on ircevents_201204\nircevents (cost=914.36..79715.45 rows=20691 width=134) (actual\ntime=0.035..0.103 rows=40 loops=1)\n Recheck Cond: ((buffer = 116780) AND (id >=\n1333317600000000::bigint) AND (id <= 1335045600000000::bigint))\n -> Bitmap Index Scan on\nircevents_201204_idx (cost=0.00..909.18 rows=20691 width=0) (actual\ntime=0.023..0.023 rows=40 loops=1)\n Index Cond: ((buffer = 116780) AND\n(id >= 1333317600000000::bigint) AND (id <= 1335045600000000::bigint))\n Total runtime: 0.243 ms\n(13 rows)\n\n(note that the above plan demonstrates that constraint exclusion is\nactive, since it only queries the empty parent table, and the\nappropriate partitioned table)\n\nCompare the cost of that vs. 
specifying the partitioned table manually:\n\n\nEXPLAIN ANALYZE\nSELECT id, type, json\nFROM ircevents_201204\nWHERE buffer = 116780 AND\nid BETWEEN 1333317600000000 AND 1335045600000000\nORDER BY id DESC limit 100;\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..404.02 rows=100 width=134) (actual\ntime=0.024..0.071 rows=40 loops=1)\n -> Index Scan Backward using ircevents_201204_idx on\nircevents_201204 (cost=0.00..83595.11 rows=20691 width=134) (actual\ntime=0.023..0.062 rows=40 loops=1)\n Index Cond: ((buffer = 116780) AND (id >=\n1333317600000000::bigint) AND (id <= 1335045600000000::bigint))\n Total runtime: 0.102 ms\n(4 rows)\n\nQuerying the partition directly uses an \"index scan backward\", which\nseems the best approach.\n\nI see similar plans if the id range spans multiple tables - and I get\na much more efficient plan if manually construct a query by UNIONing\nall the relevant partitions together.\n\n\nIs there anything I can do to make querying to parent table in this\nfashion use \"Index Scan Backward\" on the appropriate partitions, and\nthus be as fast as querying the partitions directly?\n\nThanks,\n\nRJ\n\nPS Here is my postgresql.conf: (the server has 16GB/ and two mirrored\npairs of disks with pg_xlog on a different pair to the data)\n\nlisten_addresses = '*'\nmax_connections = 20 # (change requires restart)\nshared_buffers = 3GB # min 128kB\nwork_mem = 50MB # min 64kB\nmaintenance_work_mem = 500MB # min 1MB \"50mb per gig, ish\"\nsynchronous_commit = off # synchronization level; on,\noff, or local\nwal_buffers = 16MB # min 32kB, -1 sets based on\nshared_buffers\ncheckpoint_segments = 100 # in logfile segments, min 1, 16MB each\neffective_cache_size = 8GB # how much is the OS gonna use\nfor caching disk stuff?\nlog_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\ntrack_activities = on\ntrack_counts = on\ntrack_functions = pl # none, pl, all\nupdate_process_title = on\ndatestyle = 'iso, mdy'\nlc_messages = 'C' # locale for system error message\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n",
"msg_date": "Fri, 4 May 2012 16:01:00 +0100",
"msg_from": "Richard Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioned/inherited tables with check constraints causing slower\n\tquery plans"
},
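The manual UNION of partitions mentioned above can be sketched roughly as follows for the 9.1.1 planner; the partition names and id range are illustrative (one branch per partition covered by the range), and this is not necessarily the exact query that was used:

SELECT id, type, json
FROM (
    (SELECT id, type, json
       FROM ircevents_201201
      WHERE buffer = 116780
        AND id BETWEEN 1325458800000000 AND 1330642800000000
      ORDER BY id DESC LIMIT 100)
    UNION ALL
    (SELECT id, type, json
       FROM ircevents_201202
      WHERE buffer = 116780
        AND id BETWEEN 1325458800000000 AND 1330642800000000
      ORDER BY id DESC LIMIT 100)
) AS recent
ORDER BY id DESC
LIMIT 100;

Each branch can use that partition's (buffer, id) index with a backward scan, which is what the Merge Append plan in 9.1.4 does automatically through the parent table.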
{
"msg_contents": "Richard Jones <[email protected]> writes:\n> I'm seeing poor query performance using partitioned tables with check\n> constraints, seems like the plan is much worse than when querying the\n> individual partitions manually.\n\n> select version(); --> PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu,\n> compiled by gcc (Debian 4.4.5-8) 4.4.5, 64-bit\n\nI get a reasonable-looking plan when I try to duplicate this issue in\n9.1 branch tip. I think the reason you're not getting the right\nbehavior is that you are missing this as-yet-unreleased patch:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=ef03b34550e3577c4be3baa25b70787f5646c57b\nwhich means it can't figure out that the available index on the child\ntable produces the desired sort order. If you're in a position to\ncompile from source, a current nightly snapshot of the 9.1 branch\nought to work for you; otherwise, wait for 9.1.4.\n\n(Note: although that patch is a one-liner, I would *not* recommend\ntrying to just cherry-pick the patch by itself; I think it probably\ninteracts with other planner fixes made since 9.1.1.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 May 2012 12:39:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioned/inherited tables with check constraints causing\n\tslower query plans"
},
{
"msg_contents": "On 4 May 2012 17:39, Tom Lane <[email protected]> wrote:\n> I get a reasonable-looking plan when I try to duplicate this issue in\n> 9.1 branch tip. I think the reason you're not getting the right\n> behavior is that you are missing this as-yet-unreleased patch:\n> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=ef03b34550e3577c4be3baa25b70787f5646c57b\n> which means it can't figure out that the available index on the child\n> table produces the desired sort order. If you're in a position to\n> compile from source, a current nightly snapshot of the 9.1 branch\n> ought to work for you; otherwise, wait for 9.1.4.\n\n\nThanks, this did the trick - here's the output when I switched to 9.1 snapshot:\n\nircevents=# select version();\n version\n----------------------------------------------------------------------------------------------\n PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n4.4.5-8) 4.4.5, 64-bit\n(1 row)\n\nircevents=# explain analyze SELECT id, type, json FROM ircevents WHERE\nbuffer = 116780 AND id BETWEEN 1325458800000000 AND 1330642800000000\nORDER BY id DESC LIMIT 100;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n--------------------\n Limit (cost=0.05..202.45 rows=100 width=135) (actual\ntime=176.429..237.766 rows=100 loops=1)\n -> Result (cost=0.05..68161.99 rows=33677 width=135) (actual\ntime=176.426..237.735 rows=100 loops=1)\n -> Merge Append (cost=0.05..68161.99 rows=33677 width=135)\n(actual time=176.426..237.708 rows=100 loops=1)\n Sort Key: public.ircevents.id\n -> Sort (cost=0.01..0.02 rows=1 width=72) (actual\ntime=0.009..0.009 rows=0 loops=1)\n Sort Key: public.ircevents.id\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on ircevents (cost=0.00..0.00\nrows=1 width=72) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((id >= 1325458800000000::bigint)\nAND (id <= 1330642800000000::bigint) AND (buffer = 116780))\n -> Index Scan Backward using ircevents_201201_idx on\nircevents_201201 ircevents (cost=0.00..8811.15 rows=2181 width=133)\n(actual time=76.356..136.91\n7 rows=12 loops=1)\n Index Cond: ((buffer = 116780) AND (id >=\n1325458800000000::bigint) AND (id <= 1330642800000000::bigint))\n -> Index Scan Backward using ircevents_201202_idx on\nircevents_201202 ircevents (cost=0.00..54963.83 rows=30613 width=135)\n(actual time=47.333..48.0\n25 rows=88 loops=1)\n Index Cond: ((buffer = 116780) AND (id >=\n1325458800000000::bigint) AND (id <= 1330642800000000::bigint))\n -> Index Scan Backward using ircevents_201203_idx on\nircevents_201203 ircevents (cost=0.00..3629.22 rows=882 width=134)\n(actual time=52.724..52.724\nrows=0 loops=1)\n Index Cond: ((buffer = 116780) AND (id >=\n1325458800000000::bigint) AND (id <= 1330642800000000::bigint))\n Total runtime: 237.889 ms\n(16 rows)\n\n\nSo yes, it's using \"index scan backwards\" - and fixes my problem, thanks!\n\nBit reluctant to put the machine into production with a non-release\nversion of postgres, I'll wait for 9.1.4 to make an official\nappearance.\n\nRegards,\nRJ\n",
"msg_date": "Fri, 4 May 2012 18:00:21 +0100",
"msg_from": "Richard Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioned/inherited tables with check constraints\n\tcausing slower query plans"
}
] |
[
{
"msg_contents": "Hello,\n\nI've heard from some people that synchronous streaming replication has \nsevere performance impact on the primary. They said that the transaction \nthroughput of TPC-C like benchmark (perhaps DBT-2) decreased by 50%. I'm \nsorry I haven't asked them about their testing environment, because they \njust gave me their experience. They think that this result is much worse \nthan some commercial database.\n\nI'm surprised. I know that the amount of transaction logs of PostgreSQL is \nlarger than other databases because it it logs the entire row for each \nupdate operation instead of just changed columns, and because of full page \nwrites. But I can't (and don't want to) believe that those have such big \nnegative impact.\n\nDoes anyone have any experience of benchmarking synchronous streaming \nreplication under TPC-C or similar write-heavy workload? Could anybody give \nme any performance evaluation result if you don't mind?\n\nRegards\nMauMau\n\n\n",
"msg_date": "Wed, 9 May 2012 22:06:17 +0900",
"msg_from": "\"MauMau\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Could synchronous streaming replication really degrade the\n\tperformance of the primary?"
},
{
"msg_contents": "On Wed, May 9, 2012 at 8:06 AM, MauMau <[email protected]> wrote:\n> Hello,\n>\n> I've heard from some people that synchronous streaming replication has\n> severe performance impact on the primary. They said that the transaction\n> throughput of TPC-C like benchmark (perhaps DBT-2) decreased by 50%. I'm\n> sorry I haven't asked them about their testing environment, because they\n> just gave me their experience. They think that this result is much worse\n> than some commercial database.\n\nI can't speak for other databases, but it's only natural to assume\nthat tps must drop. At minimum, you have to add the latency of\ncommunication and remote sync operation to your transaction time. For\nvery short transactions this adds up to a lot of extra work relative\nto the transaction itself.\n\nmerlin\n",
"msg_date": "Wed, 9 May 2012 08:58:04 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
{
"msg_contents": "On Wed, May 9, 2012 at 3:58 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, May 9, 2012 at 8:06 AM, MauMau <[email protected]> wrote:\n>> I've heard from some people that synchronous streaming replication has\n>> severe performance impact on the primary. They said that the transaction\n>> throughput of TPC-C like benchmark (perhaps DBT-2) decreased by 50%. I'm\n>> sorry I haven't asked them about their testing environment, because they\n>> just gave me their experience. They think that this result is much worse\n>> than some commercial database.\n>\n> I can't speak for other databases, but it's only natural to assume\n> that tps must drop. At minimum, you have to add the latency of\n> communication and remote sync operation to your transaction time. For\n> very short transactions this adds up to a lot of extra work relative\n> to the transaction itself.\n\nActually I would expect 50% degradation if both databases run on\nidentical hardware: the second instance needs to do the same work\n(i.e. write WAL AND ensure it reached the disk) before it can\nacknowledge.\n\n\"When requesting synchronous replication, each commit of a write\ntransaction will wait until confirmation is received that the commit\nhas been written to the transaction log on disk of both the primary\nand standby server.\"\nhttp://www.postgresql.org/docs/9.1/static/warm-standby.html#SYNCHRONOUS-REPLICATION\n\nI am not sure whether the replicant can be triggered to commit to disk\nbefore the commit to disk on the master has succeeded; if that was the\ncase there would be true serialization => 50%.\n\nThis sounds like it could actually be the case (note the \"after it commits\"):\n\"When synchronous replication is requested the transaction will wait\nafter it commits until it receives confirmation that the transfer has\nbeen successful.\"\nhttp://wiki.postgresql.org/wiki/Synchronous_replication\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 9 May 2012 17:41:00 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
{
"msg_contents": "On Wed, May 9, 2012 at 12:41 PM, Robert Klemme\n<[email protected]> wrote:\n> I am not sure whether the replicant can be triggered to commit to disk\n> before the commit to disk on the master has succeeded; if that was the\n> case there would be true serialization => 50%.\n>\n> This sounds like it could actually be the case (note the \"after it commits\"):\n> \"When synchronous replication is requested the transaction will wait\n> after it commits until it receives confirmation that the transfer has\n> been successful.\"\n> http://wiki.postgresql.org/wiki/Synchronous_replication\n\nThat should only happen for very short transactions.\nIIRC, WAL records can be sent to the slaves before the transaction in\nthe master commits, so bigger transactions would see higher\nparallelism.\n",
"msg_date": "Wed, 9 May 2012 12:45:50 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
{
"msg_contents": "On Wed, May 9, 2012 at 5:45 PM, Claudio Freire <[email protected]> wrote:\n> On Wed, May 9, 2012 at 12:41 PM, Robert Klemme\n> <[email protected]> wrote:\n>> I am not sure whether the replicant can be triggered to commit to disk\n>> before the commit to disk on the master has succeeded; if that was the\n>> case there would be true serialization => 50%.\n>>\n>> This sounds like it could actually be the case (note the \"after it commits\"):\n>> \"When synchronous replication is requested the transaction will wait\n>> after it commits until it receives confirmation that the transfer has\n>> been successful.\"\n>> http://wiki.postgresql.org/wiki/Synchronous_replication\n>\n> That should only happen for very short transactions.\n> IIRC, WAL records can be sent to the slaves before the transaction in\n> the master commits, so bigger transactions would see higher\n> parallelism.\n\nI considered that as well. But the question is: when are they written\nto disk in the slave? If they are in buffer cache until data is\nsynched to disk then you only gain a bit of advantage by earlier\nsending (i.e. network latency). Assuming a high bandwidth and low\nlatency network (which you want to have in this case anyway) that gain\nis probably not big compared to the time it takes to ensure WAL is\nwritten to disk. I do not know implementation details but *if* the\nserver triggers sync only after its own sync has succeeded *then* you\nbasically have serialization and you need to wait twice the time.\n\nFor small TX OTOH network overhead might relatively large compared to\nWAL IO (for example with a battery backed cache in the controller)\nthat it shows. Since we do not know the test cases which lead to the\n50% statement we can probably only speculate. Ultimately each\nindividual setup and workload has to be tested.\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 9 May 2012 19:03:14 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
{
"msg_contents": "On Wed, May 9, 2012 at 12:03 PM, Robert Klemme\n<[email protected]> wrote:\n> On Wed, May 9, 2012 at 5:45 PM, Claudio Freire <[email protected]> wrote:\n>> On Wed, May 9, 2012 at 12:41 PM, Robert Klemme\n>> <[email protected]> wrote:\n>>> I am not sure whether the replicant can be triggered to commit to disk\n>>> before the commit to disk on the master has succeeded; if that was the\n>>> case there would be true serialization => 50%.\n>>>\n>>> This sounds like it could actually be the case (note the \"after it commits\"):\n>>> \"When synchronous replication is requested the transaction will wait\n>>> after it commits until it receives confirmation that the transfer has\n>>> been successful.\"\n>>> http://wiki.postgresql.org/wiki/Synchronous_replication\n>>\n>> That should only happen for very short transactions.\n>> IIRC, WAL records can be sent to the slaves before the transaction in\n>> the master commits, so bigger transactions would see higher\n>> parallelism.\n>\n> I considered that as well. But the question is: when are they written\n> to disk in the slave? If they are in buffer cache until data is\n> synched to disk then you only gain a bit of advantage by earlier\n> sending (i.e. network latency). Assuming a high bandwidth and low\n> latency network (which you want to have in this case anyway) that gain\n> is probably not big compared to the time it takes to ensure WAL is\n> written to disk. I do not know implementation details but *if* the\n> server triggers sync only after its own sync has succeeded *then* you\n> basically have serialization and you need to wait twice the time.\n>\n> For small TX OTOH network overhead might relatively large compared to\n> WAL IO (for example with a battery backed cache in the controller)\n> that it shows. Since we do not know the test cases which lead to the\n> 50% statement we can probably only speculate. Ultimately each\n> individual setup and workload has to be tested.\n\nyeah. note the upcoming 9.2 synchronous_commit=remote_write setting is\nintended to improve this situation by letting the transaction go a bit\nearlier -- the slave basically only has to acknowledge receipt of the\ndata.\n\n\nmerlin\n",
"msg_date": "Wed, 9 May 2012 13:58:04 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
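For context, a bare-bones sketch of what switching between these modes looks like on the primary; the standby name is a placeholder, and remote_write is only available from 9.2:

# postgresql.conf fragment on the primary
synchronous_standby_names = 'standby1'   # empty string = asynchronous replication
synchronous_commit = on                  # wait for the WAL flush on the standby
# synchronous_commit = remote_write      # 9.2+: wait only for the standby to receive the WAL

Individual transactions that can tolerate a small window of loss can also opt out with SET LOCAL synchronous_commit = local inside the transaction.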
{
"msg_contents": "From: \"Merlin Moncure\" <[email protected]>\n> On Wed, May 9, 2012 at 8:06 AM, MauMau <[email protected]> wrote:\n>> Hello,\n>>\n>> I've heard from some people that synchronous streaming replication has\n>> severe performance impact on the primary. They said that the transaction\n>> throughput of TPC-C like benchmark (perhaps DBT-2) decreased by 50%. I'm\n>> sorry I haven't asked them about their testing environment, because they\n>> just gave me their experience. They think that this result is much worse\n>> than some commercial database.\n>\n> I can't speak for other databases, but it's only natural to assume\n> that tps must drop. At minimum, you have to add the latency of\n> communication and remote sync operation to your transaction time. For\n> very short transactions this adds up to a lot of extra work relative\n> to the transaction itself.\n\nYes, I understand it is natural for the response time of each transaction to \ndouble or more. But I think the throughput drop would be amortized among \nmultiple simultaneous transactions. So, 50% throughput decrease seems \nunreasonable.\n\nIf this thinking is correct, and some could kindly share his/her past \nperformance evaluation results (ideally of DBT-2), I want to say to my \nacquaintance \"hey, community people experience better performance, so you \nmay need to review your configuration.\"\n\nRegards\nMauMau\n\n",
"msg_date": "Thu, 10 May 2012 07:34:04 +0900",
"msg_from": "\"MauMau\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Could synchronous streaming replication really degrade the\n\tperformance of the primary?"
},
{
"msg_contents": "On Wed, May 9, 2012 at 7:34 PM, MauMau <[email protected]> wrote:\n>> I can't speak for other databases, but it's only natural to assume\n>> that tps must drop. At minimum, you have to add the latency of\n>> communication and remote sync operation to your transaction time. For\n>> very short transactions this adds up to a lot of extra work relative\n>> to the transaction itself.\n>\n>\n> Yes, I understand it is natural for the response time of each transaction to\n> double or more. But I think the throughput drop would be amortized among\n> multiple simultaneous transactions. So, 50% throughput decrease seems\n> unreasonable.\n\nI'm pretty sure it depends a lot on the workload. Knowing the\nmethodology used that arrived to those figures is critical. Was the\nthoughput decrease measured against no replication, or asynchronous\nreplication? How many clients were used? What was the workload like?\nWas it CPU bound? I/O bound? Read-mostly?\n\nWe have asynchronous replication in production and thoughput has not\nchanged relative to no replication. I cannot see how making it\nsynchronous would change thoughput, as it only induces waiting time on\nthe clients, but no extra work. I can only assume the test didn't use\nenough clients to saturate the hardware under high-latency situations,\nor clients were somehow experiencing application-specific contention.\n\nI don't know the code, but knowing how synchronous replication works,\nI would say any such drop under high concurrency would be a bug,\ncontention among waiting processes or something like that, that needs\nto be fixed.\n",
"msg_date": "Wed, 9 May 2012 19:46:51 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
{
"msg_contents": "From: \"Claudio Freire\" <[email protected]>\nOn Wed, May 9, 2012 at 7:34 PM, MauMau <[email protected]> wrote:\n>> Yes, I understand it is natural for the response time of each transaction \n>> to\n>> double or more. But I think the throughput drop would be amortized among\n>> multiple simultaneous transactions. So, 50% throughput decrease seems\n>> unreasonable.\n\n> I'm pretty sure it depends a lot on the workload. Knowing the\n> methodology used that arrived to those figures is critical. Was the\n> thoughput decrease measured against no replication, or asynchronous\n> replication? How many clients were used? What was the workload like?\n> Was it CPU bound? I/O bound? Read-mostly?\n\n> We have asynchronous replication in production and thoughput has not\n> changed relative to no replication. I cannot see how making it\n> synchronous would change thoughput, as it only induces waiting time on\n> the clients, but no extra work. I can only assume the test didn't use\n> enough clients to saturate the hardware under high-latency situations,\n> or clients were somehow experiencing application-specific contention.\n\nThank you for your experience and opinion.\n\nThe workload is TPC-C-like write-heavy one; DBT-2. They compared the \nthroughput of synchronous replication case against that of no replication \ncase.\n\nToday, they told me that they ran the test on two virtual machines on a \nsingle physical machine. They also used pgpool-II in both cases. In \naddition, they may have ran the applications and pgpool-II on the same \nvirtual machine as the database server.\n\nIt sounded to me that the resource is so scarce that concurrency was low, or \nyour assumption may be correct. I'll hear more about their environment from \nthem.\n\nBTW it's pity that I cannot find any case study of performance of the \nflagship feature of PostgreSQL 9.0/9.1, streaming replication...\n\nRegards\nMauMau\n\n",
"msg_date": "Thu, 10 May 2012 20:34:43 +0900",
"msg_from": "\"MauMau\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Could synchronous streaming replication really degrade the\n\tperformance of the primary?"
},
{
"msg_contents": "MauMau, 10.05.2012 13:34:\n> Today, they told me that they ran the test on two virtual machines on\n> a single physical machine.\n\nWhich means that both databases shared the same I/O system (harddisks).\nThererfor it's not really surprising that the overall performance goes down if you increase the I/O load.\n\nA more realistic test (at least in my opinion) would have been to have two separate computers with two separate I/O systems\n\n\n\n",
"msg_date": "Thu, 10 May 2012 14:10:48 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade the\n\tperformance of the primary?"
},
{
"msg_contents": "On 10 Květen 2012, 13:34, MauMau wrote:\n> The workload is TPC-C-like write-heavy one; DBT-2. They compared the\n> throughput of synchronous replication case against that of no replication\n> case.\n>\n> Today, they told me that they ran the test on two virtual machines on a\n> single physical machine. They also used pgpool-II in both cases. In\n> addition, they may have ran the applications and pgpool-II on the same\n> virtual machine as the database server.\n\nSo they've run a test that is usually I/O bound on a single machine? If\nthey've used the same I/O devices, I'm surprised the degradation was just\n50%. If you have a system that can handle X IOPS, and you run two\ninstances there, each will get ~X/2 IOPS. No magic can help here.\n\nEven if they used separate I/O devices, there are probably many things\nthat are shared and can become a bottleneck in a virtualized environment.\n\nThe setup is definitely very suspicious.\n\n> It sounded to me that the resource is so scarce that concurrency was low,\n> or\n> your assumption may be correct. I'll hear more about their environment\n> from\n> them.\n>\n> BTW it's pity that I cannot find any case study of performance of the\n> flagship feature of PostgreSQL 9.0/9.1, streaming replication...\n\nThere were some nice talks about performance impact of sync rep, for\nexample this one:\n\n http://www.2ndquadrant.com/static/2quad/media/pdfs/talks/SyncRepDurability.pdf\n\nThere's also a video:\n\n http://www.youtube.com/watch?v=XL7j8hTd6R8\n\nTomas\n\n",
"msg_date": "Thu, 10 May 2012 14:20:02 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
{
"msg_contents": "On Wed, May 9, 2012 at 5:34 PM, MauMau <[email protected]> wrote:\n> Yes, I understand it is natural for the response time of each transaction to\n> double or more. But I think the throughput drop would be amortized among\n> multiple simultaneous transactions. So, 50% throughput decrease seems\n> unreasonable.\n>\n> If this thinking is correct, and some could kindly share his/her past\n> performance evaluation results (ideally of DBT-2), I want to say to my\n> acquaintance \"hey, community people experience better performance, so you\n> may need to review your configuration.\"\n\nIt seems theoretically possible to interleave the processing on both\nsides but 50% reduction in throughput for latency bound transactions\nseems to be broadly advertised as what to reasonably expect for sync\nrep with 9.1.\n\n9.2 beta is arriving shortly and when it does I suggest experimenting\nwith the new remote_write feature of sync_rep over non-production\nworkloads.\n\nmerlin\n",
"msg_date": "Thu, 10 May 2012 08:24:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
{
"msg_contents": "From: \"Tomas Vondra\" <[email protected]>\n> There were some nice talks about performance impact of sync rep, for\n> example this one:\n>\n> \n> http://www.2ndquadrant.com/static/2quad/media/pdfs/talks/SyncRepDurability.pdf\n>\n> There's also a video:\n>\n> http://www.youtube.com/watch?v=XL7j8hTd6R8\n\nThanks. The video is especially interesting. I'll tell my aquaintance to \ncheck it, too.\n\nRegards\nMauMau\n\n\n",
"msg_date": "Fri, 11 May 2012 00:11:01 +0900",
"msg_from": "\"MauMau\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Could synchronous streaming replication really degrade the\n\tperformance of the primary?"
},
{
"msg_contents": "On Thu, May 10, 2012 at 8:34 PM, MauMau <[email protected]> wrote:\n> Today, they told me that they ran the test on two virtual machines on a\n> single physical machine. They also used pgpool-II in both cases. In\n> addition, they may have ran the applications and pgpool-II on the same\n> virtual machine as the database server.\n\nSo they compared the throughput of one server running on a single machine\n(non replication case) with that of two servers (i.e., master and\nstandby) running\non the same single machine (sync rep case)? The amount of CPU/Mem/IO\nresource available per server is not the same between those two cases. So\nISTM it's very unfair for sync rep case. In this situation, I'm not\nsurprised if I\nsee 50% performance degradation in sync rep case.\n\n> It sounded to me that the resource is so scarce that concurrency was low, or\n> your assumption may be correct. I'll hear more about their environment from\n> them.\n>\n> BTW it's pity that I cannot find any case study of performance of the\n> flagship feature of PostgreSQL 9.0/9.1, streaming replication...\n\nThough I cannot show the detail for some reasons, as far as I measured\nthe performance overhead of sync rep by using pgbench, the overhead of\nthroughput was less than 10%. When measuring sync rep, I used two\nset of physical machine and storage for the master and standby, and\nused 1Gbps network between them.\n\nRegards,\n\n-- \nFujii Masao\n",
"msg_date": "Fri, 11 May 2012 03:02:57 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Could synchronous streaming replication really degrade\n\tthe performance of the primary?"
},
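An outline of such a pgbench comparison, with placeholder scale factor, client count and duration rather than the actual values used in the measurement above, would be roughly:

pgbench -i -s 100 bench
pgbench -c 32 -j 4 -T 600 bench   # run once with synchronous_standby_names set, once without

and the reported tps figures from the two runs are then compared.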
{
"msg_contents": "From: \"Fujii Masao\" <[email protected]>\n> Though I cannot show the detail for some reasons, as far as I measured\n> the performance overhead of sync rep by using pgbench, the overhead of\n> throughput was less than 10%. When measuring sync rep, I used two\n> set of physical machine and storage for the master and standby, and\n> used 1Gbps network between them.\n\nFujii-san, thanks a million. That's valuable information. The overhead less \nthan 10% under perhaps high concurrency and write heavy workload exceeds my \nexpectation. Great!\n\nThough I couldn't contact the testers today, I'll tell this to them next \nweek.\n\nRegards\nMauMau\n\n",
"msg_date": "Fri, 11 May 2012 22:19:21 +0900",
"msg_from": "\"MauMau\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Could synchronous streaming replication really degrade the\n\tperformance of the primary?"
}
] |
[
{
"msg_contents": "Hi All,\n\nIs there any max limit set on sequences that can be created on the database\n? Also would like to know if we create millions of sequences in a single db\nwhat is the downside of it.\n\nRegards\nVidhya\n\nHi All,Is there any max limit set on sequences that can be created on the database ? Also would like to know if we create millions of sequences in a single db what is the downside of it.RegardsVidhya",
"msg_date": "Fri, 11 May 2012 16:20:25 +0530",
"msg_from": "Vidhya Bondre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Maximum number of sequences that can be created"
},
{
"msg_contents": "On Fri, May 11, 2012 at 12:50 PM, Vidhya Bondre <[email protected]> wrote:\n> Is there any max limit set on sequences that can be created on the database\n> ? Also would like to know if we create millions of sequences in a single db\n> what is the downside of it.\n\nOn the contrary: what would be the /advantage/ of being able to create\nmillions of sequences? What's the use case?\n\nCheers\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Fri, 11 May 2012 12:56:30 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
{
"msg_contents": "2012/5/11 Robert Klemme <[email protected]>\n\n> On Fri, May 11, 2012 at 12:50 PM, Vidhya Bondre <[email protected]>\n> wrote:\n> > Is there any max limit set on sequences that can be created on the\n> database\n> > ? Also would like to know if we create millions of sequences in a single\n> db\n> > what is the downside of it.\n>\n\nThe sequences AFAIK are accounted as relations. Large list of relations may\nslowdown different system utilities like vacuuming (or may not, depends on\nqueries and indexes on pg_class).\n\n\n>\n> On the contrary: what would be the /advantage/ of being able to create\n> millions of sequences? What's the use case?\n>\n>\nWe are using sequences as statistics counters - they produce almost no\nperformance impact and we can tolerate it's non-transactional nature. I can\nimaging someone who wants to have a sequence per user or other relation\nrow.\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2012/5/11 Robert Klemme <[email protected]>\nOn Fri, May 11, 2012 at 12:50 PM, Vidhya Bondre <[email protected]> wrote:\n> Is there any max limit set on sequences that can be created on the database\n> ? Also would like to know if we create millions of sequences in a single db\n> what is the downside of it.The sequences AFAIK are accounted as relations. Large list of relations may slowdown different system utilities like vacuuming (or may not, depends on queries and indexes on pg_class).\n \n\nOn the contrary: what would be the /advantage/ of being able to create\nmillions of sequences? What's the use case?We are using sequences as statistics counters - they produce almost no performance impact and we can tolerate it's non-transactional nature. I can imaging someone who wants to have a sequence per user or other relation row. \n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Sun, 13 May 2012 11:12:58 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
{
"msg_contents": "On Sun, May 13, 2012 at 10:12 AM, Віталій Тимчишин <[email protected]> wrote:\n> 2012/5/11 Robert Klemme <[email protected]>\n\n>> On the contrary: what would be the /advantage/ of being able to create\n>> millions of sequences? What's the use case?\n>\n> We are using sequences as statistics counters - they produce almost no\n> performance impact and we can tolerate it's non-transactional nature. I can\n> imaging someone who wants to have a sequence per user or other relation\n> row.\n\nI can almost see the point. But my natural choice in that case would\nbe a table with two columns. Would that actually be so much less\nefficient? Of course you'd have fully transactional behavior and thus\nlocking.\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Sun, 13 May 2012 12:56:20 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
{
"msg_contents": "On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <[email protected]> wrote:\n\n>\n>\n> 2012/5/11 Robert Klemme <[email protected]>\n>\n>> On Fri, May 11, 2012 at 12:50 PM, Vidhya Bondre <[email protected]>\n>> wrote:\n>> > Is there any max limit set on sequences that can be created on the\n>> database\n>> > ? Also would like to know if we create millions of sequences in a\n>> single db\n>> > what is the downside of it.\n>>\n>\n> The sequences AFAIK are accounted as relations. Large list of relations\n> may slowdown different system utilities like vacuuming (or may not, depends\n> on queries and indexes on pg_class).\n>\n\nNot \"may slow down.\" Change that to \"will slow down and possibly corrupt\"\nyour system.\n\nIn my experience (PG 8.4.x), the system can handle in the neighborhood of\n100,000 relations pretty well. Somewhere over 1,000,000 relations, the\nsystem becomes unusable. It's not that it stops working -- day-to-day\noperations such as querying your tables and running your applications\ncontinue to work. But system operations that have to scan for table\ninformation seem to freeze (maybe they run out of memory, or are\nencountering an O(N^2) operation and simply cease to complete).\n\nFor example, pg_dump fails altogether. After 24 hours, it won't even start\nwriting to its output file. The auto-completion in psql of table and\ncolumn names freezes the system. It takes minutes to drop one table.\nStuff like that. You'll have a system that works, but can't be backed up,\ndumped, repaired or managed.\n\nAs I said, this was 8.4.x. Things may have changed in 9.x.\n\nCraig\n\nOn Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <[email protected]> wrote:\n2012/5/11 Robert Klemme <[email protected]>\nOn Fri, May 11, 2012 at 12:50 PM, Vidhya Bondre <[email protected]> wrote:\n> Is there any max limit set on sequences that can be created on the database\n> ? Also would like to know if we create millions of sequences in a single db\n> what is the downside of it.The sequences AFAIK are accounted as relations. Large list of relations may slowdown different system utilities like vacuuming (or may not, depends on queries and indexes on pg_class).\nNot \"may slow down.\" Change that to \"will slow down and possibly corrupt\" your system.In my experience (PG 8.4.x), the system can handle in the neighborhood of 100,000 relations pretty well. Somewhere over 1,000,000 relations, the system becomes unusable. It's not that it stops working -- day-to-day operations such as querying your tables and running your applications continue to work. But system operations that have to scan for table information seem to freeze (maybe they run out of memory, or are encountering an O(N^2) operation and simply cease to complete).\nFor example, pg_dump fails altogether. After 24 hours, it won't even start writing to its output file. The auto-completion in psql of table and column names freezes the system. It takes minutes to drop one table. Stuff like that. You'll have a system that works, but can't be backed up, dumped, repaired or managed.\nAs I said, this was 8.4.x. Things may have changed in 9.x.Craig",
"msg_date": "Sun, 13 May 2012 09:01:06 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
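For readers wondering where they stand relative to the relation counts mentioned above, a plain catalog query shows how many relations of each kind (tables, indexes, sequences, TOAST tables, and so on) a database currently holds:

    SELECT relkind, count(*)
      FROM pg_class
     GROUP BY relkind
     ORDER BY count(*) DESC;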
{
"msg_contents": "On Sun, May 13, 2012 at 9:01 AM, Craig James <[email protected]> wrote:\n>\n> In my experience (PG 8.4.x), the system can handle in the neighborhood of\n> 100,000 relations pretty well. Somewhere over 1,000,000 relations, the\n> system becomes unusable. It's not that it stops working -- day-to-day\n> operations such as querying your tables and running your applications\n> continue to work. But system operations that have to scan for table\n> information seem to freeze (maybe they run out of memory, or are\n> encountering an O(N^2) operation and simply cease to complete).\n>\n> For example, pg_dump fails altogether. After 24 hours, it won't even start\n> writing to its output file. The auto-completion in psql of table and column\n> names freezes the system. It takes minutes to drop one table. Stuff like\n> that. You'll have a system that works, but can't be backed up, dumped,\n> repaired or managed.\n>\n> As I said, this was 8.4.x. Things may have changed in 9.x.\n\nI think some of those things might have improved, but enough of them\nhave not improved, or not by enough.\n\nSo I agree with your assessment, under 9.2 having millions of\nsequences might technically work, but would render the database\nvirtually unmanageable.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 14 May 2012 15:50:05 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
{
"msg_contents": "2012/5/13 Robert Klemme <[email protected]>\n\n> On Sun, May 13, 2012 at 10:12 AM, Віталій Тимчишин <[email protected]>\n> wrote:\n> > 2012/5/11 Robert Klemme <[email protected]>\n>\n> >> On the contrary: what would be the /advantage/ of being able to create\n> >> millions of sequences? What's the use case?\n> >\n> > We are using sequences as statistics counters - they produce almost no\n> > performance impact and we can tolerate it's non-transactional nature. I\n> can\n> > imaging someone who wants to have a sequence per user or other relation\n> > row.\n>\n> I can almost see the point. But my natural choice in that case would\n> be a table with two columns. Would that actually be so much less\n> efficient? Of course you'd have fully transactional behavior and thus\n> locking.\n>\n\nWe've had concurrency problems with table solution (a counter that is\nupdated by many concurrent queries), so we traded transactionality for\nspeed. We are actually using this data to graph pretty graphs in nagios, so\nit's quite OK. But we have only ~10 sequences, not millions :)\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2012/5/13 Robert Klemme <[email protected]>\nOn Sun, May 13, 2012 at 10:12 AM, Віталій Тимчишин <[email protected]> wrote:\n> 2012/5/11 Robert Klemme <[email protected]>\n\n>> On the contrary: what would be the /advantage/ of being able to create\n>> millions of sequences? What's the use case?\n>\n> We are using sequences as statistics counters - they produce almost no\n> performance impact and we can tolerate it's non-transactional nature. I can\n> imaging someone who wants to have a sequence per user or other relation\n> row.\n\nI can almost see the point. But my natural choice in that case would\nbe a table with two columns. Would that actually be so much less\nefficient? Of course you'd have fully transactional behavior and thus\nlocking.We've had concurrency problems with table solution (a counter that is updated by many concurrent queries), so we traded transactionality for speed. We are actually using this data to graph pretty graphs in nagios, so it's quite OK. But we have only ~10 sequences, not millions :)\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Tue, 15 May 2012 09:29:11 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
{
"msg_contents": "On Tuesday, May 15, 2012 08:29:11 AM Віталій Тимчишин wrote:\n> 2012/5/13 Robert Klemme <[email protected]>\n> \n> > On Sun, May 13, 2012 at 10:12 AM, Віталій Тимчишин <[email protected]>\n> > \n> > wrote:\n> > > 2012/5/11 Robert Klemme <[email protected]>\n> > > \n> > >> On the contrary: what would be the /advantage/ of being able to create\n> > >> millions of sequences? What's the use case?\n> > > \n> > > We are using sequences as statistics counters - they produce almost no\n> > > performance impact and we can tolerate it's non-transactional nature. I\n> > \n> > can\n> > \n> > > imaging someone who wants to have a sequence per user or other\n> > > relation row.\n> > \n> > I can almost see the point. But my natural choice in that case would\n> > be a table with two columns. Would that actually be so much less\n> > efficient? Of course you'd have fully transactional behavior and thus\n> > locking.\n> \n> We've had concurrency problems with table solution (a counter that is\n> updated by many concurrent queries), so we traded transactionality for\n> speed. We are actually using this data to graph pretty graphs in nagios, so\n> it's quite OK. But we have only ~10 sequences, not millions :)\nI would rather suggest going with a suming table if you need to do something \nlike that:\n\nsequence_id | value\n1 | 3434334\n1 | 1\n1 | -1\n1 | 1\n1 | 1\n...\n\nYou then can get the current value with SELECT SUM(value) WHERE sequence_id = \n1. For garbage collection you can delete those values and insert the newly \nsummed up value again.\nThat solution won't ever block if done right.\n\nAndres\n",
"msg_date": "Tue, 15 May 2012 12:57:53 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
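A rough SQL sketch of the summing-table counter described above; the table name and the fixed counter id are illustrative, and the writable-CTE form of the garbage-collection step assumes PostgreSQL 9.1 or later:

    CREATE TABLE counter_deltas (
        sequence_id integer NOT NULL,
        value       bigint  NOT NULL
    );
    CREATE INDEX counter_deltas_seq_idx ON counter_deltas (sequence_id);

    -- Writers only ever insert a delta, so they never block each other:
    INSERT INTO counter_deltas (sequence_id, value) VALUES (1, 1);

    -- Current value:
    SELECT COALESCE(SUM(value), 0) FROM counter_deltas WHERE sequence_id = 1;

    -- Periodic garbage collection: fold the visible deltas back into one row.
    WITH folded AS (
        DELETE FROM counter_deltas WHERE sequence_id = 1 RETURNING value
    )
    INSERT INTO counter_deltas (sequence_id, value)
    SELECT 1, COALESCE(SUM(value), 0) FROM folded;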
{
"msg_contents": "Hi,\n\nOn Tue, May 15, 2012 at 12:57 PM, Andres Freund <[email protected]> wrote:\n\n> I would rather suggest going with a suming table if you need to do something\n> like that:\n>\n> sequence_id | value\n> 1 | 3434334\n> 1 | 1\n> 1 | -1\n> 1 | 1\n> 1 | 1\n> ...\n>\n> You then can get the current value with SELECT SUM(value) WHERE sequence_id =\n> 1. For garbage collection you can delete those values and insert the newly\n> summed up value again.\n> That solution won't ever block if done right.\n\nI was going to suggest another variant which would not need GC but\nwould also increase concurrency:\n\nsequence_id | hash | value\n1 | 0 | 3\n1 | 1 | 9\n1 | 2 | 0\n1 | 3 | 2\n...\n\nwith PK = (sequence_id, hash) and hash in a fixed range (say 0..15).\n\nValue would be obtained the same way, i.e. via\nSELECT SUM(value) FROM T WHERE sequence_id = 1\n\nThe hash value would have to be calculated\n\n - at session start time (cheap but might reduce concurrency due to\nsmall number of changes) or\n - at TX start time (more expensive but probably better concurrency\ndue to higher change rate)\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Tue, 15 May 2012 13:45:59 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
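And a hypothetical sketch of the bucketed variant with a fixed range of 16 buckets per counter; deriving the bucket from pg_backend_pid() is just one cheap session-level choice and is an assumption, not something specified in the message:

    CREATE TABLE bucketed_counters (
        sequence_id integer NOT NULL,
        hash        integer NOT NULL CHECK (hash BETWEEN 0 AND 15),
        value       bigint  NOT NULL DEFAULT 0,
        PRIMARY KEY (sequence_id, hash)
    );

    -- Pre-create the 16 buckets for counter 1:
    INSERT INTO bucketed_counters (sequence_id, hash)
    SELECT 1, g FROM generate_series(0, 15) AS g;

    -- Each session increments "its" bucket, so concurrent updaters rarely
    -- contend for the same row:
    UPDATE bucketed_counters
       SET value = value + 1
     WHERE sequence_id = 1
       AND hash = pg_backend_pid() % 16;

    -- Current value, as in the message above:
    SELECT SUM(value) FROM bucketed_counters WHERE sequence_id = 1;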
{
"msg_contents": "On Sun, May 13, 2012 at 10:01 AM, Craig James <[email protected]> wrote:\n\n>\n> On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <[email protected]>wrote:\n>\n>>\n>> The sequences AFAIK are accounted as relations. Large list of relations\n>> may slowdown different system utilities like vacuuming (or may not, depends\n>> on queries and indexes on pg_class).\n>>\n>\n> Not \"may slow down.\" Change that to \"will slow down and possibly corrupt\"\n> your system.\n>\n> In my experience (PG 8.4.x), the system can handle in the neighborhood of\n> 100,000 relations pretty well. Somewhere over 1,000,000 relations, the\n> system becomes unusable. It's not that it stops working -- day-to-day\n> operations such as querying your tables and running your applications\n> continue to work. But system operations that have to scan for table\n> information seem to freeze (maybe they run out of memory, or are\n> encountering an O(N^2) operation and simply cease to complete).\n>\n\nGlad I found this thread.\n\nIs this 1M relation mark for the whole database cluster or just for a\nsingle database within the cluster?\n\nThanks,\n-Greg\n\nOn Sun, May 13, 2012 at 10:01 AM, Craig James <[email protected]> wrote:\nOn Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <[email protected]> wrote:\nThe sequences AFAIK are accounted as relations. Large list of relations may slowdown different system utilities like vacuuming (or may not, depends on queries and indexes on pg_class).\nNot \"may slow down.\" Change that to \"will slow down and possibly corrupt\" your system.In my experience (PG 8.4.x), the system can handle in the neighborhood of 100,000 relations pretty well. Somewhere over 1,000,000 relations, the system becomes unusable. It's not that it stops working -- day-to-day operations such as querying your tables and running your applications continue to work. But system operations that have to scan for table information seem to freeze (maybe they run out of memory, or are encountering an O(N^2) operation and simply cease to complete).\nGlad I found this thread.Is this 1M relation mark for the whole database cluster or just for a single database within the cluster?\nThanks,-Greg",
"msg_date": "Fri, 25 May 2012 05:58:38 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
},
{
"msg_contents": "On Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg <[email protected]>wrote:\n\n> On Sun, May 13, 2012 at 10:01 AM, Craig James <[email protected]>wrote:\n>\n>>\n>> On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <[email protected]>wrote:\n>>\n>>>\n>>> The sequences AFAIK are accounted as relations. Large list of relations\n>>> may slowdown different system utilities like vacuuming (or may not, depends\n>>> on queries and indexes on pg_class).\n>>>\n>>\n>> Not \"may slow down.\" Change that to \"will slow down and possibly\n>> corrupt\" your system.\n>>\n>> In my experience (PG 8.4.x), the system can handle in the neighborhood of\n>> 100,000 relations pretty well. Somewhere over 1,000,000 relations, the\n>> system becomes unusable. It's not that it stops working -- day-to-day\n>> operations such as querying your tables and running your applications\n>> continue to work. But system operations that have to scan for table\n>> information seem to freeze (maybe they run out of memory, or are\n>> encountering an O(N^2) operation and simply cease to complete).\n>>\n>\n> Glad I found this thread.\n>\n> Is this 1M relation mark for the whole database cluster or just for a\n> single database within the cluster?\n>\n\nI don't know. When I discovered this, our system only had a few dozen\ndatabases, and I never conducted any experiments. We had to write our own\nversion of pg_dump to get the data out of the damaged system, and then\nreload from scratch. And it's not a \"hard\" number. Even at a million\nrelation things work ... they just bog down dramatically. By the time I\ngot to 5 million relations (a rogue script was creating 50,000 tables per\nday and not cleaning up), the system was effectively unusable.\n\nCraig\n\n\n\n> Thanks,\n> -Greg\n>\n>\n\nOn Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg <[email protected]> wrote:\nOn Sun, May 13, 2012 at 10:01 AM, Craig James <[email protected]> wrote:\nOn Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <[email protected]> wrote:\nThe sequences AFAIK are accounted as relations. Large list of relations may slowdown different system utilities like vacuuming (or may not, depends on queries and indexes on pg_class).\nNot \"may slow down.\" Change that to \"will slow down and possibly corrupt\" your system.In my experience (PG 8.4.x), the system can handle in the neighborhood of 100,000 relations pretty well. Somewhere over 1,000,000 relations, the system becomes unusable. It's not that it stops working -- day-to-day operations such as querying your tables and running your applications continue to work. But system operations that have to scan for table information seem to freeze (maybe they run out of memory, or are encountering an O(N^2) operation and simply cease to complete).\nGlad I found this thread.Is this 1M relation mark for the whole database cluster or just for a single database within the cluster?\nI don't know. When I discovered this, our system only had a few dozen databases, and I never conducted any experiments. We had to write our own version of pg_dump to get the data out of the damaged system, and then reload from scratch. And it's not a \"hard\" number. Even at a million relation things work ... they just bog down dramatically. By the time I got to 5 million relations (a rogue script was creating 50,000 tables per day and not cleaning up), the system was effectively unusable.\nCraig\nThanks,-Greg",
"msg_date": "Fri, 25 May 2012 08:04:18 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Maximum number of sequences that can be created"
}
] |
[
{
"msg_contents": "Hello, all.\n\nWe've reached to the point when we would like to try SSDs. We've got a\ncentral DB currently 414 GB in size and increasing. Working set does not\nfit into our 96GB RAM server anymore.\nSo, the main question is what to take. Here what we've got:\n1) Intel 320. Good, but slower then current generation sandforce drives\n2) Intel 330. Looks like cheap 520 without capacitor\n3) Intel 520. faster then 320 No capacitor.\n4) OCZ Vertex 3 Pro - No available. Even on OCZ site\n5) OCZ Deneva - can't find in my country :)\nWe are using Areca controller with BBU. So as for me, question is: Can 520\nseries be set up to handle fsyncs correctly? We've got the Areca to handle\nbuffering.\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nHello, all.We've reached to the point when we would like to try SSDs. We've got a central DB currently 414 GB in size and increasing. Working set does not fit into our 96GB RAM server anymore.\nSo, the main question is what to take. Here what we've got:1) Intel 320. Good, but slower then current generation sandforce drives2) Intel 330. Looks like cheap 520 without capacitor\n3) Intel 520. faster then 320 No capacitor.4) OCZ Vertex 3 Pro - No available. Even on OCZ site5) OCZ Deneva - can't find in my country :) We are using Areca controller with BBU. So as for me, question is: Can 520 series be set up to handle fsyncs correctly? We've got the Areca to handle buffering.\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Tue, 15 May 2012 18:21:09 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSD selection"
},
{
"msg_contents": "On 5/15/2012 9:21 AM, Віталій Тимчишин wrote:\n>\n>\n> We've reached to the point when we would like to try SSDs. We've got a \n> central DB currently 414 GB in size and increasing. Working set does \n> not fit into our 96GB RAM server anymore.\n> So, the main question is what to take. Here what we've got:\n> 1) Intel 320. Good, but slower then current generation sandforce drives\n> 2) Intel 330. Looks like cheap 520 without capacitor\n> 3) Intel 520. faster then 320 No capacitor.\n> 4) OCZ Vertex 3 Pro - No available. Even on OCZ site\n> 5) OCZ Deneva - can't find in my country :)\n>\n\nIs the 710 series too costly for your deployment ?\nI ask because that would be the obvious choice for a database (much \nbetter write endurance than any of the drives above, and less likely to \nsuffer from firmware bugs or unpleasant GC behavior).\nWe've been running them in production for a few months with zero \nproblems and great performance.\nThe price on the 710's tends to vary on whether they're in stock : \nNewEgg is currently showing $1100 for the 300G drive, but no stock.\n\n\n",
"msg_date": "Tue, 15 May 2012 11:08:06 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "On Tue, May 15, 2012 at 12:08 PM, David Boreham <[email protected]> wrote:\n>> We've reached to the point when we would like to try SSDs. We've got a\n>> central DB currently 414 GB in size and increasing. Working set does not fit\n>> into our 96GB RAM server anymore.\n>> So, the main question is what to take. Here what we've got:\n>> 1) Intel 320. Good, but slower then current generation sandforce drives\n>> 2) Intel 330. Looks like cheap 520 without capacitor\n>> 3) Intel 520. faster then 320 No capacitor.\n>> 4) OCZ Vertex 3 Pro - No available. Even on OCZ site\n>> 5) OCZ Deneva - can't find in my country :)\n>>\n>\n> Is the 710 series too costly for your deployment ?\n> I ask because that would be the obvious choice for a database (much better\n> write endurance than any of the drives above, and less likely to suffer from\n> firmware bugs or unpleasant GC behavior).\n> We've been running them in production for a few months with zero problems\n> and great performance.\n> The price on the 710's tends to vary on whether they're in stock : NewEgg is\n> currently showing $1100 for the 300G drive, but no stock.\n\nthis. I think you have two choices today -- intel 320 and intel 710\ndepending on how much writing you plan to do. ocz vertex 3 might be a\n3rd choice, but it's vaporware atm.\n\nmerlin\n",
"msg_date": "Tue, 15 May 2012 12:22:20 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "On Tue, May 15, 2012 at 8:21 AM, Віталій Тимчишин <[email protected]> wrote:\n> We are using Areca controller with BBU. So as for me, question is: Can 520\n> series be set up to handle fsyncs correctly?\n\nNo.\n\nThe cause for capacitors on SSD logic boards is that fsyncs aren't\nflushed to NAND media, and hence persisted, immediately. SSDs are\ndivided into \"pages\", called \"erase blocks\" (usually much larger than\nthe filesystem-level block size; I don't know offhand what the block\nsize is on the 710, but on the older X-25 drives, it was 128K). All\nwrites are accumulated in the on-board cache into erase block sized\nchunks, and *then* flushed to the NAND media. In a power-loss\nsituation, the contents of that cache won't be preserved unless you\nhave a capacitor. In some drives, you can disable the on-board cache,\nbut that does absolutely atrocious things both to your drive's\nperformance, and its longevity.\n\nAs the other posters in this thread have said, your best bet is\nprobably the Intel 710 series drives, though I'd still expect some\n320-series drives in a RAID configuration to still be pretty\nstupendously fast.\n\nrls\n\n-- \n:wq\n",
"msg_date": "Tue, 15 May 2012 11:16:25 -0700",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "On 5/15/2012 12:16 PM, Rosser Schwarz wrote:\n> As the other posters in this thread have said, your best bet is\n> probably the Intel 710 series drives, though I'd still expect some\n> 320-series drives in a RAID configuration to still be pretty\n> stupendously fast.\nOne thing to mention is that the 710 are not faster than 320 series \n(unless in your definition of fast you count potential GC pauses of course).\nThe 710's primary claim to fame is that it has endurance and GC \ncharacteristics designed for server and database use (constant load, \nheavy write load).\n\nSo 320 drives will be just as fast, if not faster, but they will wear \nout much more quickly (possibly not a concern for the OP in his \ndeployment) and may suffer from unwelcome GC pauses.\n\n\n\n",
"msg_date": "Tue, 15 May 2012 14:00:08 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "On Tue, May 15, 2012 at 3:00 PM, David Boreham <[email protected]> wrote:\n> On 5/15/2012 12:16 PM, Rosser Schwarz wrote:\n>>\n>> As the other posters in this thread have said, your best bet is\n>> probably the Intel 710 series drives, though I'd still expect some\n>> 320-series drives in a RAID configuration to still be pretty\n>> stupendously fast.\n>\n> One thing to mention is that the 710 are not faster than 320 series (unless\n> in your definition of fast you count potential GC pauses of course).\n> The 710's primary claim to fame is that it has endurance and GC\n> characteristics designed for server and database use (constant load, heavy\n> write load).\n>\n> So 320 drives will be just as fast, if not faster, but they will wear out\n> much more quickly (possibly not a concern for the OP in his deployment) and\n> may suffer from unwelcome GC pauses.\n\nAlthough your assertion 100% supported by intel's marketing numbers,\nthere are some contradicting numbers out there that show the drives\noffering pretty similar performance. For example, look here:\nhttp://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4 and\nyou can see that 4k aligned writes are giving quite similar results\n(14k iops) even though the 710 is only rated for 2700 iops while the\n320 is rated for 21000 IOPS. Other benchmarks also show similar\nresults.\n\n???\n\nI have a theory that Intel rates their drives for IOPS based on the\nresults of 'Wheel of Fortune'. This will be confirmed when you start\nseeing drives with ratings of 'Bankrupt', 'Trip to Las Vegas', etc.\nThese must be the same guys that came up with the technical\nexplanation for the write through caching for the X25-M.\n\nmerlin\n",
"msg_date": "Wed, 16 May 2012 12:01:23 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "On 5/16/2012 11:01 AM, Merlin Moncure wrote:\n> Although your assertion 100% supported by intel's marketing numbers,\n> there are some contradicting numbers out there that show the drives\n> offering pretty similar performance. For example, look here:\n> http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4 and\n> you can see that 4k aligned writes are giving quite similar results\n> (14k iops) even though the 710 is only rated for 2700 iops while the\n> 320 is rated for 21000 IOPS. Other benchmarks also show similar\n> results.\nActually I said the same thing you're saying : that the two series will \ndeliver similar performance.\n\nThe spec numbers however would be for worst case conditions (in the case \nof the 710).\nI'm not convinced that those tests were exercising the worst case part \nof the envelope.\n\n\n",
"msg_date": "Wed, 16 May 2012 11:45:39 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "On Wed, May 16, 2012 at 12:45 PM, David Boreham <[email protected]> wrote:\n> On 5/16/2012 11:01 AM, Merlin Moncure wrote:\n>>\n>> Although your assertion 100% supported by intel's marketing numbers,\n>> there are some contradicting numbers out there that show the drives\n>> offering pretty similar performance. For example, look here:\n>> http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4 and\n>> you can see that 4k aligned writes are giving quite similar results\n>> (14k iops) even though the 710 is only rated for 2700 iops while the\n>> 320 is rated for 21000 IOPS. Other benchmarks also show similar\n>> results.\n>\n> Actually I said the same thing you're saying : that the two series will\n> deliver similar performance.\n>\n> The spec numbers however would be for worst case conditions (in the case of\n> the 710).\n> I'm not convinced that those tests were exercising the worst case part of\n> the envelope.\n\nYeah -- you might be right -- their numbers are based on iometer which\nlooks like it runs lower than other tests (see here:\nhttp://www.storagereview.com/intel_ssd_710_series_review_200gb). I\nstill find it interesting the 320 is spec'd so much higher though. I\nguess I spoke to soon -- it looks it has to do with the life extending\nattributes of the drive. Benchmarks are all over the place though.\n\nmerlin\n",
"msg_date": "Wed, 16 May 2012 13:53:28 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "¿Wizard Merlin?\n\n\n\n\n>________________________________\n> De: Merlin Moncure <[email protected]>\n>Para: David Boreham <[email protected]> \n>CC: [email protected] \n>Enviado: Miércoles 16 de Mayo de 2012 13:53\n>Asunto: Re: [PERFORM] SSD selection\n> \n>On Wed, May 16, 2012 at 12:45 PM, David Boreham <[email protected]> wrote:\n>> On 5/16/2012 11:01 AM, Merlin Moncure wrote:\n>>>\n>>> Although your assertion 100% supported by intel's marketing numbers,\n>>> there are some contradicting numbers out there that show the drives\n>>> offering pretty similar performance. For example, look here:\n>>> http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4 and\n>>> you can see that 4k aligned writes are giving quite similar results\n>>> (14k iops) even though the 710 is only rated for 2700 iops while the\n>>> 320 is rated for 21000 IOPS. Other benchmarks also show similar\n>>> results.\n>>\n>> Actually I said the same thing you're saying : that the two series will\n>> deliver similar performance.\n>>\n>> The spec numbers however would be for worst case conditions (in the case of\n>> the 710).\n>> I'm not convinced that those tests were exercising the worst case part of\n>> the envelope.\n>\n>Yeah -- you might be right -- their numbers are based on iometer which\n>looks like it runs lower than other tests (see here:\n>http://www.storagereview.com/intel_ssd_710_series_review_200gb). I\n>still find it interesting the 320 is spec'd so much higher though. I\n>guess I spoke to soon -- it looks it has to do with the life extending\n>attributes of the drive. Benchmarks are all over the place though.\n>\n>merlin\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n¿Wizard Merlin? De: Merlin Moncure <[email protected]> Para: David Boreham <[email protected]> CC: [email protected] Enviado: Miércoles 16 de Mayo de 2012 13:53 Asunto: Re:\n [PERFORM] SSD selection On Wed, May 16, 2012 at 12:45 PM, David Boreham <[email protected]> wrote:> On 5/16/2012 11:01 AM, Merlin Moncure wrote:>>>> Although your assertion 100% supported by intel's marketing numbers,>> there are some contradicting numbers out there that show the drives>> offering pretty similar performance. For example, look here:>> http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4 and>> you can see that 4k aligned writes are giving quite similar results>> (14k iops) even though the 710 is only rated for 2700 iops while the>> 320 is rated for 21000 IOPS. Other benchmarks also show similar>>\n results.>> Actually I said the same thing you're saying : that the two series will> deliver similar performance.>> The spec numbers however would be for worst case conditions (in the case of> the 710).> I'm not convinced that those tests were exercising the worst case part of> the envelope.Yeah -- you might be right -- their numbers are based on iometer whichlooks like it runs lower than other tests (see here:http://www.storagereview.com/intel_ssd_710_series_review_200gb). Istill find it interesting the 320 is spec'd so much higher though. Iguess I spoke to soon -- it looks it has to do with the life extendingattributes of the drive. Benchmarks are all over the place though.merlin-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 16 May 2012 20:01:02 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
},
{
"msg_contents": "On 05/16/2012 01:01 PM, Merlin Moncure wrote:\n> Although your assertion 100% supported by intel's marketing numbers,\n> there are some contradicting numbers out there that show the drives\n> offering pretty similar performance. For example, look here:\n> http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4 and\n> you can see that 4k aligned writes are giving quite similar results\n> (14k iops) even though the 710 is only rated for 2700 iops while the\n> 320 is rated for 21000 IOPS. Other benchmarks also show similar\n> results.\n\nI wrote something talking about all the ways the two drives differ at \nhttp://blog.2ndquadrant.com/intel_ssds_lifetime_and_the_32/\n\nWhat the 710 numbers are saying is that you can't push lots of tiny \nwrites out at a high IOPS without busting the drive's lifetime \nestimates. You can either get a really high IOPS of small writes (320) \nor a smaller IOPS of writes that are done more efficiently in terms of \nflash longevity (710). You can't get both at the same time. The 710 \nmay ultimately throttle its speed back to meet lifetime specifications \nas the drive fills, it's really hard to benchmark the differences \nbetween the two series.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Mon, 28 May 2012 08:14:05 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD selection"
}
] |
[
{
"msg_contents": ">We've reached to the point when we would like to try SSDs. We've got a\n>central DB currently 414 GB in size and increasing. Working set does not\n>fit into our 96GB RAM server anymore.\n>So, the main question is what to take. Here what we've got:\n>1) Intel 320. Good, but slower then current generation sandforce drives\n>2) Intel 330. Looks like cheap 520 without capacitor\n>3) Intel 520. faster then 320 No capacitor.\n>4) OCZ Vertex 3 Pro - No available. Even on OCZ site\n>5) OCZ Deneva - can't find in my country :)\n>We are using Areca controller with BBU. So as for me, question is: Can 520\n>series be set up to handle fsyncs correctly? We've got the Areca to handle\n>buffering.\n\nI was thinking the same thing, setting up a new server with ssds instead of hdds as I currently have\nand was wondering what the thoughts are in regards to using raid (it would probably be a dell R710 card -\nany comments on these as they are new). I was planning on using intel 320s as the 710s are a little too\npricey at the moment and we aren't massively heavy on the writes (but can revisit this in the next\n6months/year if required). I was thinking raid 10 as I've done with the hdds, but not sure if this is the\nbest choice for ssds, given wear level and firmware is likely to be the same, I'd expect concurrent\nfailure on a stipe. Therefore I'd imagine using an alternate mfr/drive for the mirrors is a better bet?\n\nWhat are peoples thoughts on using a non enterprise drive for this - the choice of enterprise drives is limited :(\n\nI was thinking if we have sudden power failure then mark the consumer drive as bad and rebuild it from the other one,\nor is this highly risky?\n\nDoes any one do anything different?\n\nThanks\n\nJohn\n\n",
"msg_date": "Tue, 15 May 2012 22:09:48 +0100",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [pgsql-performance] Daily digest v1.3606 (10 messages)"
},
{
"msg_contents": "On Tue, May 15, 2012 at 4:09 PM, John Lister <[email protected]> wrote:\n>> We've reached to the point when we would like to try SSDs. We've got a\n>> central DB currently 414 GB in size and increasing. Working set does not\n>> fit into our 96GB RAM server anymore.\n>> So, the main question is what to take. Here what we've got:\n>> 1) Intel 320. Good, but slower then current generation sandforce drives\n>> 2) Intel 330. Looks like cheap 520 without capacitor\n>> 3) Intel 520. faster then 320 No capacitor.\n>> 4) OCZ Vertex 3 Pro - No available. Even on OCZ site\n>> 5) OCZ Deneva - can't find in my country :)\n>> We are using Areca controller with BBU. So as for me, question is: Can 520\n>> series be set up to handle fsyncs correctly? We've got the Areca to handle\n>> buffering.\n>\n>\n> I was thinking the same thing, setting up a new server with ssds instead of\n> hdds as I currently have\n> and was wondering what the thoughts are in regards to using raid (it would\n> probably be a dell R710 card -\n> any comments on these as they are new). I was planning on using intel 320s\n> as the 710s are a little too\n> pricey at the moment and we aren't massively heavy on the writes (but can\n> revisit this in the next\n> 6months/year if required). I was thinking raid 10 as I've done with the\n> hdds, but not sure if this is the\n> best choice for ssds, given wear level and firmware is likely to be the\n> same, I'd expect concurrent\n> failure on a stipe. Therefore I'd imagine using an alternate mfr/drive for\n> the mirrors is a better bet?\n>\n> What are peoples thoughts on using a non enterprise drive for this - the\n> choice of enterprise drives is limited :(\n>\n> I was thinking if we have sudden power failure then mark the consumer drive\n> as bad and rebuild it from the other one,\n> or is this highly risky?\n\nI think the multiple vendor strategy is dicey as the only player in\nthe game that seems to have a reasonable enterprise product offering\nis intel. The devices should work within spec and should be phased\nout when they are approaching EOL. SMART gives good info regarding\nssd wear and should be checked at regular intervals.\n\nRegarding RAID, a good theoretical case could be made for RAID 5 on\nSSD since the 'write hole' penalty is going to be far less. I'd still\nbe sticking with raid 10 however if it was my stuff. Aside: I would\nalso be using software raid. I'm a big believer in mdadm on linux\nespecially when using SSD and it looks like support for trim here:\n\nhttp://serverfault.com/questions/227918/possible-to-get-ssd-trim-discard-working-on-ext4-lvm-software-raid-in-linu\n\nmerlin\n",
"msg_date": "Tue, 15 May 2012 16:52:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [pgsql-performance] Daily digest v1.3606 (10 messages)"
}
] |
[
{
"msg_contents": "This is outside of PG performance proper, but while testing pg_dump and\npg_restore performance on local-storage versus SAN storage I noticed a big\nperformance drops on in the SAN configuration. I'm told that both local\nstorage and SAN storage are 15k drives with local-storage running dual\nRAID1 configuration while HP StorageWorks SB40c SAN is 10x15k RAID1+0.\nRunning a hdparm -tT tests with different read-ahead, I see the following\ndifferences on /dev/sda (local-storage) and /dev/sdc (SAN storage). I'm\nshocked at the drop in buffered disk read performance 150MB/sec versus\n80MB/sec and surprised at the SAN variability at 1MB/sec versus 10MB/sec,\nlocal-storage and SAN storage respectively.\n\nFor those who, unlike me, have experience looking at SAN storage\nperformance, is the drop in buffered disk reads and large variability the\nexpected cost of centralized remote storage in SANs with fiber-channel\ncommunication, SAN fail-over, etc.\n\nIf you have any ideas or insights and/or if you know of a better suited\nforum for this question I'd sure appreciate the feedback.\n\n\nCheers,\n\nJan",
"msg_date": "Mon, 21 May 2012 08:17:36 -0600",
"msg_from": "Jan Nielsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "local-storage versus SAN sequential read performance comparison"
},
{
"msg_contents": "How fast is the link to your SAN? If it's Gigabit, then 80MB/s would\nbe pretty reasonable.\n\nAlso SANs are normally known for very good random access performance\nand not necessarily for fast sequential performance.\n\nOn Mon, May 21, 2012 at 8:17 AM, Jan Nielsen\n<[email protected]> wrote:\n> This is outside of PG performance proper, but while testing pg_dump and\n> pg_restore performance on local-storage versus SAN storage I noticed a big\n> performance drops on in the SAN configuration. I'm told that both local\n> storage and SAN storage are 15k drives with local-storage running dual RAID1\n> configuration while HP StorageWorks SB40c SAN is 10x15k RAID1+0. Running a\n> hdparm -tT tests with different read-ahead, I see the following differences\n> on /dev/sda (local-storage) and /dev/sdc (SAN storage). I'm shocked at the\n> drop in buffered disk read performance 150MB/sec versus 80MB/sec and\n> surprised at the SAN variability at 1MB/sec versus 10MB/sec, local-storage\n> and SAN storage respectively.\n>\n> For those who, unlike me, have experience looking at SAN storage\n> performance, is the drop in buffered disk reads and large variability the\n> expected cost of centralized remote storage in SANs with fiber-channel\n> communication, SAN fail-over, etc.\n>\n> If you have any ideas or insights and/or if you know of a better suited\n> forum for this question I'd sure appreciate the feedback.\n>\n>\n> Cheers,\n>\n> Jan\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 21 May 2012 09:41:12 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: local-storage versus SAN sequential read performance comparison"
},
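For reference, a rough back-of-the-envelope check of the Gigabit point above, assuming the SAN is reached over a single 1 Gbit/s link (the thread never states the actual link speed):

    1 Gbit/s = 1,000,000,000 bits/s / 8  ≈ 125 MB/s theoretical peak
    minus protocol and framing overhead  ≈ 100-115 MB/s in practice

so a sustained ~80 MB/s buffered read over a shared link is within the expected range, whereas the ~150 MB/s seen from local storage could never fit through it.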
{
"msg_contents": "On 05/21/2012 10:41 AM, Scott Marlowe wrote:\n\n> How fast is the link to your SAN? If it's Gigabit, then 80MB/s would\n> be pretty reasonable.\n\nWow, I didn't even think of that. I was looking up specs on the SAN. :)\n\nThose stats are pretty horrible for sequential reads. The SAN should \nimprove random reads, not mercilessly ruin sequential.\n\nThat does sound like the problem, though.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 21 May 2012 10:50:04 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: local-storage versus SAN sequential read performance\n comparison"
}
] |
[
{
"msg_contents": "\nDear List ,\n\nWe are having scalability issues with a high end hardware\n\nThe hardware is\nCPU = 4 * opteron 6272 with 16 cores ie Total = 64 cores. \nRAM = 128 GB DDR3\nDisk = High performance RAID10 with lots of 15K spindles and a working BBU Cache.\n\nnormally the 1 min load average of the system remains between 0.5 to 1.0 .\n\nThe problem is that sometimes there are spikes of load avg which \njumps to > 50 very rapidly ( ie from 0.5 to 50 within 10 secs) and \nit remains there for sometime and slowly reduces to normal value.\n\nDuring such times of high load average we observe that there is no IO wait \nin system and even CPU is 50% idle. In any case the IO Wait always remains < 1.0 % and \nis mostly 0. Hence the load is not due to high I/O wait which was generally\nthe case with our previous hardware.\n \nWe are puzzled why the CPU and DISK I/O system are not being utilized \nfully and would seek lists' wisdom on that.\n\nWe have setup sar to poll the system parameters every minute and \nthe data of which is graphed with cacti. If required any of the \nsystem parameters or postgresql parameter can easily be put under \ncacti monitoring and can be graphed.\n\nThe query load is mostly read only.\n \nIt is also possible to replicate the problem with pg_bench to some\nextent . I choose -s = 100 and -t=10000 , the load does shoot but not\nthat spectacularly as achieved by the real world usage.\n\nany help shall be greatly appreciated.\n\njust a thought, will it be a good idea to partition the host hardware \nto 4 equal virtual environments , ie 1 for master (r/w) and 3 slaves r/o\nand distribute the r/o load on the 3 slaves ?\n\n\nregds\nmallah\n",
"msg_date": "Thu, 24 May 2012 09:09:09 +0530 (IST)",
"msg_from": "\"Rajesh Kumar. Mallah\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "High load average in 64-core server , no I/O wait and CPU is idle"
},
{
"msg_contents": "On Thu, May 24, 2012 at 12:39 AM, Rajesh Kumar. Mallah\n<[email protected]> wrote:\n> The problem is that sometimes there are spikes of load avg which\n> jumps to > 50 very rapidly ( ie from 0.5 to 50 within 10 secs) and\n> it remains there for sometime and slowly reduces to normal value.\n>\n> During such times of high load average we observe that there is no IO wait\n> in system and even CPU is 50% idle. In any case the IO Wait always remains < 1.0 % and\n> is mostly 0. Hence the load is not due to high I/O wait which was generally\n> the case with our previous hardware.\n\nDo you experience decreased query performance?\n\nLoad can easily get to 64 (1 per core) without reaching its capacity.\nSo, unless you're experiencing decreased performance I wouldn't think\nmuch of it.\n\nDo you have mcelog running? as a cron or a daemon?\nSometimes, mcelog tends to crash in that way. We had to disable it in\nour servers because it misbehaved like that. It only makes load avg\nmeaningless, no performance impact, but being unable to accurately\nmeasure load is bad enough.\n",
"msg_date": "Thu, 24 May 2012 00:53:43 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
},
{
"msg_contents": "\n----- \"Claudio Freire\" <[email protected]> wrote:\n\n| From: \"Claudio Freire\" <[email protected]>\n| To: \"Rajesh Kumar. Mallah\" <[email protected]>\n| Cc: [email protected]\n| Sent: Thursday, May 24, 2012 9:23:43 AM\n| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and CPU is idle\n|\n| On Thu, May 24, 2012 at 12:39 AM, Rajesh Kumar. Mallah\n| <[email protected]> wrote:\n| > The problem is that sometimes there are spikes of load avg which\n| > jumps to > 50 very rapidly ( ie from 0.5 to 50 within 10 secs) and\n| > it remains there for sometime and slowly reduces to normal value.\n| >\n| > During such times of high load average we observe that there is no\n| IO wait\n| > in system and even CPU is 50% idle. In any case the IO Wait always\n| remains < 1.0 % and\n| > is mostly 0. Hence the load is not due to high I/O wait which was\n| generally\n| > the case with our previous hardware.\n| \n| Do you experience decreased query performance?\n\n\nYes we do experience substantial application performance degradations.\n\n\n| \n| Load can easily get to 64 (1 per core) without reaching its capacity.\n| So, unless you're experiencing decreased performance I wouldn't think\n| much of it.\n\nI far as i understand ,\nLoad Avg is the average number of processes waiting to be run in past 1 , \n5 or 15 mins. A number > 1 would mean that countable number of processes\nwere waiting to be run. how can load of more than 1 and upto 64 be OK\nfor a 64 core machine ?\n\n\n\n| \n| Do you have mcelog running? as a cron or a daemon?\n\n\nNo we do not have mcelog.\n\nBTW the Postgresql version is : 9.1.3 which i forgot to mention \nin my last email.\n\n\nregds\nmallah.\n\n| Sometimes, mcelog tends to crash in that way. We had to disable it in\n| our servers because it misbehaved like that. It only makes load avg\n| meaningless, no performance impact, but being unable to accurately\n| measure load is bad enough.\n",
"msg_date": "Thu, 24 May 2012 10:56:47 +0530 (IST)",
"msg_from": "\"Rajesh Kumar. Mallah\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
},
{
"msg_contents": "On Thu, May 24, 2012 at 2:26 AM, Rajesh Kumar. Mallah\n<[email protected]> wrote:\n> |\n> | Load can easily get to 64 (1 per core) without reaching its capacity.\n> | So, unless you're experiencing decreased performance I wouldn't think\n> | much of it.\n>\n> I far as i understand ,\n> Load Avg is the average number of processes waiting to be run in past 1 ,\n> 5 or 15 mins. A number > 1 would mean that countable number of processes\n> were waiting to be run. how can load of more than 1 and upto 64 be OK\n> for a 64 core machine ?\n\nLoad avg is the number of processes in the running queue, which can be\neither waiting to be run or actually running.\n\nSo if you had 100% CPU usage, then you'd most definitely have a load\navg of 64, which is neither good or bad. It may simply mean that\nyou're using your hardware's full potential.\n\nIf your processes are waiting but not using CPU or I/O time... all I\ncan think of is mcelog (it's the only application I've ever witnessed\ndoing that). Do check ps/top and try to find out which processes are\nin a waiting state to have a little more insight.\n",
"msg_date": "Thu, 24 May 2012 02:44:32 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
},
{
"msg_contents": "On 05/24/2012 12:26 AM, Rajesh Kumar. Mallah wrote:\n>\n> ----- \"Claudio Freire\"<[email protected]> wrote:\n>\n> | From: \"Claudio Freire\"<[email protected]>\n> | To: \"Rajesh Kumar. Mallah\"<[email protected]>\n> | Cc: [email protected]\n> | Sent: Thursday, May 24, 2012 9:23:43 AM\n> | Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and CPU is idle\n> |\n> | On Thu, May 24, 2012 at 12:39 AM, Rajesh Kumar. Mallah\n> |<[email protected]> wrote:\n> |> The problem is that sometimes there are spikes of load avg which\n> |> jumps to> 50 very rapidly ( ie from 0.5 to 50 within 10 secs) and\n> |> it remains there for sometime and slowly reduces to normal value.\n> |>\n> |> During such times of high load average we observe that there is no\n> | IO wait\n> |> in system and even CPU is 50% idle. In any case the IO Wait always\n> | remains< 1.0 % and\n> |> is mostly 0. Hence the load is not due to high I/O wait which was\n> | generally\n> |> the case with our previous hardware.\n> |\n> | Do you experience decreased query performance?\n>\n>\n> Yes we do experience substantial application performance degradations.\n>\n>\n\nMaybe you are hitting some locks? If its not IO and not CPU then maybe something is getting locked and queries are piling up.\n\n-Andy\n",
"msg_date": "Thu, 24 May 2012 07:48:14 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
},
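One way to test the locking theory from inside PostgreSQL during a load spike (column names as of the 9.1.x series mentioned earlier in the thread; later releases renamed some of them):

    -- How many backends are blocked waiting on a lock right now:
    SELECT waiting, count(*)
      FROM pg_stat_activity
     GROUP BY waiting;

    -- Which ungranted locks they are waiting for:
    SELECT locktype, relation::regclass, mode, count(*)
      FROM pg_locks
     WHERE NOT granted
     GROUP BY locktype, relation, mode
     ORDER BY count(*) DESC;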
{
"msg_contents": "Rajesh,\n\n* Rajesh Kumar. Mallah ([email protected]) wrote:\n> We are puzzled why the CPU and DISK I/O system are not being utilized \n> fully and would seek lists' wisdom on that.\n\nWhat OS is this? What kernel version?\n\n> just a thought, will it be a good idea to partition the host hardware \n> to 4 equal virtual environments , ie 1 for master (r/w) and 3 slaves r/o\n> and distribute the r/o load on the 3 slaves ?\n\nActually, it might help with 9.1, if you're really running into some\nscalability issues in our locking area.. You might review this:\n\nhttp://rhaas.blogspot.com/2012/04/did-i-say-32-cores-how-about-64.html\n\nThat's a pretty contrived test case, but I suppose it's possible your\ncase is actually close enough to be getting affected also..\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 24 May 2012 11:57:37 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average in 64-core server ,\n no I/O wait and CPU is idle"
},
{
"msg_contents": "----- \"Stephen Frost\" <[email protected]> wrote:\n\n| From: \"Stephen Frost\" <[email protected]>\n| To: \"Rajesh Kumar. Mallah\" <[email protected]>\n| Cc: [email protected]\n| Sent: Thursday, May 24, 2012 9:27:37 PM\n| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and CPU is idle\n|\n| Rajesh,\n| \n| * Rajesh Kumar. Mallah ([email protected]) wrote:\n| > We are puzzled why the CPU and DISK I/O system are not being\n| utilized \n| > fully and would seek lists' wisdom on that.\n| \n| What OS is this? What kernel version?\n\nDear Frost ,\n\nWe are running linux with kernel 3.2.X \n(which has the lseek improvements)\n\n| \n| > just a thought, will it be a good idea to partition the host\n| hardware \n| > to 4 equal virtual environments , ie 1 for master (r/w) and 3\n| slaves r/o\n| > and distribute the r/o load on the 3 slaves ?\n| \n| Actually, it might help with 9.1, if you're really running into some\n| scalability issues in our locking area.. You might review this:\n| \n| http://rhaas.blogspot.com/2012/04/did-i-say-32-cores-how-about-64.html\n| \n| That's a pretty contrived test case, but I suppose it's possible your\n| case is actually close enough to be getting affected also..\n\nThanks for the reference , even i thought so (LockManager) ,\nbut we are actually also running out db max connections (also) \n( which is currently at 600) , when that happens something at \nthe beginning of the application stack also gets dysfunctional and it \nchanges the very input to the system. ( think of negative feedback systems ) \n\nIt is sort of complicated but i will definitely update list , \nwhen i get to the point of putting the blame on DB :-) .\n\nRegds\nMallah.\n\n| \n| \tThanks,\n| \n| \t\tStephen\n",
"msg_date": "Thu, 24 May 2012 22:29:55 +0530 (IST)",
"msg_from": "\"Rajesh Kumar. Mallah\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High load average in 64-core server ,\n no I/O wait and CPU is idle"
},
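Since the application is also hitting the 600-connection ceiling, a trivial query over standard views makes it easy to graph connection usage against max_connections in cacti:

    SELECT (SELECT count(*) FROM pg_stat_activity) AS current_connections,
           current_setting('max_connections')::int AS max_connections;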
{
"msg_contents": "* Rajesh Kumar. Mallah ([email protected]) wrote:\n> We are running linux with kernel 3.2.X \n> (which has the lseek improvements)\n\nAh, good.\n\n> Thanks for the reference , even i thought so (LockManager) ,\n> but we are actually also running out db max connections (also) \n> ( which is currently at 600) , when that happens something at \n> the beginning of the application stack also gets dysfunctional and it \n> changes the very input to the system. ( think of negative feedback systems ) \n\nOh. Yeah, have you considered pgbouncer?\n\n> It is sort of complicated but i will definitely update list , \n> when i get to the point of putting the blame on DB :-) .\n\nOk. :)\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 24 May 2012 13:09:12 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average in 64-core server ,\n no I/O wait and CPU is idle"
},
{
"msg_contents": "On Thu, May 24, 2012 at 2:09 PM, Stephen Frost <[email protected]> wrote:\n> * Rajesh Kumar. Mallah ([email protected]) wrote:\n>> We are running linux with kernel 3.2.X\n>> (which has the lseek improvements)\n>\n> Ah, good.\n>\n>> Thanks for the reference , even i thought so (LockManager) ,\n>> but we are actually also running out db max connections (also)\n>> ( which is currently at 600) , when that happens something at\n>> the beginning of the application stack also gets dysfunctional and it\n>> changes the very input to the system. ( think of negative feedback systems )\n>\n> Oh. Yeah, have you considered pgbouncer?\n\nOr pooling at the application level. Many ORMs support connection\npooling and limiting out-of-the-box.\n\nIn essence, postgres should never bounce connections, it should all be\nhandled by the application or a previous pgbouncer, both of which\nwould do it more efficient and effectively.\n",
"msg_date": "Thu, 24 May 2012 18:30:19 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
}
] |
[
{
"msg_contents": "| \n| Load avg is the number of processes in the running queue, which can\n| be either waiting to be run or actually running.\n| \n| So if you had 100% CPU usage, then you'd most definitely have a load\n| avg of 64, which is neither good or bad. It may simply mean that\n| you're using your hardware's full potential.\n\n\nDear Claudio ,\n\nThanks for the reply and clarifying on the \"actually running\" part.\n\nbelow is a snapshot of the top output while the system was loaded.\n\ntop - 12:15:13 up 101 days, 19:01, 1 user, load average: 23.50, 18.89, 21.74\nTasks: 650 total, 11 running, 639 sleeping, 0 stopped, 0 zombie\nCpu(s): 26.5%us, 5.7%sy, 0.0%ni, 67.2%id, 0.0%wa, 0.0%hi, 0.6%si, 0.0%st\nMem: 131971752k total, 122933996k used, 9037756k free, 251544k buffers\nSwap: 33559780k total, 251916k used, 33307864k free, 116356252k cached\n\nOur applications does slowdown when loads are at that level. Can you please\ntell what else can be metered?\n\n\n| \n| If your processes are waiting but not using CPU or I/O time... all I\n| can think of is mcelog (it's the only application I've ever witnessed\n| doing that). Do check ps/top and try to find out which processes are\n| in a waiting state to have a little more insight.\n\n\nI will read more on the processes status and try to keep a close\neye over it. I shall be responding after a few hours on it.\n\nregds\nmallah.\n\n| \n| -- \n| Sent via pgsql-performance mailing list\n| ([email protected])\n| To make changes to your subscription:\n| http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 May 2012 12:24:17 +0530 (IST)",
"msg_from": "\"Rajesh Kumar. Mallah\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
}
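As a rough sketch of the "check ps/top for waiting processes" advice above; nothing here is specific to the setup in this thread.

# Count processes by state: R is runnable and D is uninterruptible sleep;
# on Linux both contribute to the load average.
ps -eo state | sort | uniq -c

# List the runnable/uninterruptible processes themselves, with the kernel
# wait channel, to see what they are stuck on.
ps -eo state,pid,wchan,cmd | awk '$1 ~ /^[RD]/'

# Inside the database, count backends waiting on locks (pg_stat_activity.waiting
# is the pre-9.6 column, which matches the 9.x servers discussed here).
psql -d postgres -c "SELECT waiting, count(*) FROM pg_stat_activity GROUP BY waiting"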
] |
[
{
"msg_contents": "Hi everyone,\n\nWe have a production database (postgresql 9.0) with more than 20,000 schemas\nand 40Gb size. In the past we had all that information in just one schema\nand pg_dump used to work just fine (2-3 hours to dump everything). Then we\ndecided to split the database into schemas, which makes a lot of sense for\nthe kind of information we store and the plans we have for the future. The\nproblem now is that pg_dump takes forever to finish (more than 24 hours) and\nwe just can't have consistent daily backups like we had in the past. When I\ntry to dump just one schema with almost nothing in it, it takes 12 minutes.\nWhen I try to dump a big schema with lots of information, it takes 14\nminutes. So pg_dump is clearly lost in the middle of so many schemas. The\nload on the machine is low (it is a hot standby replica db) and we have good\nconfigurations for memory, cache, shared_buffers and everything else. The\nperformance of the database itself is good, it is only pg_dump that is\ninefficient for the task. I have found an old discussion back in 2007 that\nseems to be quite related to this problem:\n\nhttp://postgresql.1045698.n5.nabble.com/5-minutes-to-pg-dump-nothing-tp1888814.html\n\nIt seems that pg_dump hasn't been tested with a huge number of schemas like\nthat. Does anyone have a solution or suggestions? Do you know if there are\npatches specific for this case?\n\nThanks in advance,\nHugo\n\n-----\nOfficial Nabble Administrator - we never ask for passwords.\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 24 May 2012 00:06:49 -0700 (PDT)",
"msg_from": "\"Hugo <Nabble>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 24, 2012 at 12:06 AM, Hugo <Nabble> <[email protected]> wrote:\n\n> Hi everyone,\n>\n> We have a production database (postgresql 9.0) with more than 20,000\n> schemas\n> and 40Gb size. In the past we had all that information in just one schema\n> and pg_dump used to work just fine (2-3 hours to dump everything). Then we\n> decided to split the database into schemas, which makes a lot of sense for\n> the kind of information we store and the plans we have for the future. The\n> problem now is that pg_dump takes forever to finish (more than 24 hours)\n> and\n> we just can't have consistent daily backups like we had in the past. When I\n> try to dump just one schema with almost nothing in it, it takes 12 minutes.\n> When I try to dump a big schema with lots of information, it takes 14\n> minutes. So pg_dump is clearly lost in the middle of so many schemas. The\n> load on the machine is low (it is a hot standby replica db) and we have\n> good\n> configurations for memory, cache, shared_buffers and everything else. The\n> performance of the database itself is good, it is only pg_dump that is\n> inefficient for the task. I have found an old discussion back in 2007 that\n> seems to be quite related to this problem:\n>\n>\n> http://postgresql.1045698.n5.nabble.com/5-minutes-to-pg-dump-nothing-tp1888814.html\n>\n> It seems that pg_dump hasn't been tested with a huge number of schemas like\n> that. Does anyone have a solution or suggestions? Do you know if there are\n> patches specific for this case?\n>\n\nHow many total relations do you have? I don't know if there is a limit to\nthe number of schemas, but I suspect when you went from one schema to\n20,000 schemas, you also went from N relations to 20000*N relations.\n\nSomewhere between 100,000 and 1 million total relations, Postgres starts to\nhave trouble. See this thread:\n\n http://permalink.gmane.org/gmane.comp.db.postgresql.performance/33254\n\n(Why is it that Google can't find these archives on postgresql.org?)\n\nCraig\n\n\n> Thanks in advance,\n> Hugo\n>\n> -----\n> Official Nabble Administrator - we never ask for passwords.\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Thu, May 24, 2012 at 12:06 AM, Hugo <Nabble> <[email protected]> wrote:\nHi everyone,\n\nWe have a production database (postgresql 9.0) with more than 20,000 schemas\nand 40Gb size. In the past we had all that information in just one schema\nand pg_dump used to work just fine (2-3 hours to dump everything). Then we\ndecided to split the database into schemas, which makes a lot of sense for\nthe kind of information we store and the plans we have for the future. The\nproblem now is that pg_dump takes forever to finish (more than 24 hours) and\nwe just can't have consistent daily backups like we had in the past. When I\ntry to dump just one schema with almost nothing in it, it takes 12 minutes.\nWhen I try to dump a big schema with lots of information, it takes 14\nminutes. So pg_dump is clearly lost in the middle of so many schemas. The\nload on the machine is low (it is a hot standby replica db) and we have good\nconfigurations for memory, cache, shared_buffers and everything else. 
The\nperformance of the database itself is good, it is only pg_dump that is\ninefficient for the task. I have found an old discussion back in 2007 that\nseems to be quite related to this problem:\n\nhttp://postgresql.1045698.n5.nabble.com/5-minutes-to-pg-dump-nothing-tp1888814.html\n\nIt seems that pg_dump hasn't been tested with a huge number of schemas like\nthat. Does anyone have a solution or suggestions? Do you know if there are\npatches specific for this case?How many total relations do you have? I don't know if there is a limit to the number of schemas, but I suspect when you went from one schema to 20,000 schemas, you also went from N relations to 20000*N relations.\nSomewhere between 100,000 and 1 million total relations, Postgres starts to have trouble. See this thread: http://permalink.gmane.org/gmane.comp.db.postgresql.performance/33254\n(Why is it that Google can't find these archives on postgresql.org?)Craig\n\nThanks in advance,\nHugo\n\n-----\nOfficial Nabble Administrator - we never ask for passwords.\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766.html\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 24 May 2012 08:21:36 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 24, 2012 at 8:21 AM, Craig James <[email protected]> wrote:\n>\n>\n> On Thu, May 24, 2012 at 12:06 AM, Hugo <Nabble> <[email protected]> wrote:\n>>\n>> Hi everyone,\n>>\n>> We have a production database (postgresql 9.0) with more than 20,000\n>> schemas\n>> and 40Gb size. In the past we had all that information in just one schema\n>> and pg_dump used to work just fine (2-3 hours to dump everything). Then we\n>> decided to split the database into schemas, which makes a lot of sense for\n>> the kind of information we store and the plans we have for the future. The\n>> problem now is that pg_dump takes forever to finish (more than 24 hours)\n>> and\n>> we just can't have consistent daily backups like we had in the past. When\n>> I\n>> try to dump just one schema with almost nothing in it, it takes 12\n>> minutes.\n\nSorry, your original did not show up here, so I'm piggy-backing on\nCraig's reply.\n\nIs dumping just one schema out of thousands an actual use case, or is\nit just an attempt to find a faster way to dump all the schemata\nthrough a back door?\n\npg_dump itself seems to have a lot of quadratic portions (plus another\none on the server which it hits pretty heavily), and it hard to know\nwhere to start addressing them. It seems like addressing the overall\nquadratic nature might be a globally better option, but addressing\njust the problem with dumping one schema might be easier to kluge\ntogether.\n\n>> When I try to dump a big schema with lots of information, it takes 14\n>> minutes. So pg_dump is clearly lost in the middle of so many schemas. The\n>> load on the machine is low (it is a hot standby replica db) and we have\n>> good\n>> configurations for memory, cache, shared_buffers and everything else. The\n>> performance of the database itself is good, it is only pg_dump that is\n>> inefficient for the task. I have found an old discussion back in 2007 that\n>> seems to be quite related to this problem:\n>>\n>>\n>> http://postgresql.1045698.n5.nabble.com/5-minutes-to-pg-dump-nothing-tp1888814.html\n>>\n>> It seems that pg_dump hasn't been tested with a huge number of schemas\n>> like\n>> that. Does anyone have a solution or suggestions? Do you know if there are\n>> patches specific for this case?\n>\n>\n> How many total relations do you have? I don't know if there is a limit to\n> the number of schemas, but I suspect when you went from one schema to 20,000\n> schemas, you also went from N relations to 20000*N relations.\n\nYes, that might be important to know--whether the total number of\nrelations changed, or just their distribution amongst the schemata.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 24 May 2012 20:20:34 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 24, 2012 at 08:20:34PM -0700, Jeff Janes wrote:\n> On Thu, May 24, 2012 at 8:21 AM, Craig James <[email protected]> wrote:\n> >\n> >\n> > On Thu, May 24, 2012 at 12:06 AM, Hugo <Nabble> <[email protected]> wrote:\n> >>\n> >> Hi everyone,\n> >>\n> >> We have a production database (postgresql 9.0) with more than 20,000\n> >> schemas\n> >> and 40Gb size. In the past we had all that information in just one schema\n> >> and pg_dump used to work just fine (2-3 hours to dump everything). Then we\n> >> decided to split the database into schemas, which makes a lot of sense for\n> >> the kind of information we store and the plans we have for the future. The\n> >> problem now is that pg_dump takes forever to finish (more than 24 hours)\n> >> and\n> >> we just can't have consistent daily backups like we had in the past. When\n> >> I\n> >> try to dump just one schema with almost nothing in it, it takes 12\n> >> minutes.\n> \n> Sorry, your original did not show up here, so I'm piggy-backing on\n> Craig's reply.\n> \n> Is dumping just one schema out of thousands an actual use case, or is\n> it just an attempt to find a faster way to dump all the schemata\n> through a back door?\n> \n> pg_dump itself seems to have a lot of quadratic portions (plus another\n> one on the server which it hits pretty heavily), and it hard to know\n> where to start addressing them. It seems like addressing the overall\n> quadratic nature might be a globally better option, but addressing\n> just the problem with dumping one schema might be easier to kluge\n> together.\n\nPostgres 9.2 will have some speedups for pg_dump scanning large\ndatabases --- that might help.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 24 May 2012 23:54:55 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Thanks for the replies. The number of relations in the database is really\nhigh (~500,000) and I don't think we can shrink that. The truth is that\nschemas bring a lot of advantages to our system and postgresql doesn't show\nsigns of stress with them. So I believe it should also be possible for\npg_dump to handle them with the same elegance.\n\nDumping just one schema out of thousands was indeed an attempt to find a\nfaster way to backup the database. I don't mind creating a shell script or\nprogram that dumps every schema individually as long as each dump is fast\nenough to keep the total time within a few hours. But since each dump\ncurrently takes at least 12 minutes, that just doesn't work. I have been\nlooking at the source of pg_dump in order to find possible improvements, but\nthis will certainly take days or even weeks. We will probably have to use\n'tar' to compress the postgresql folder as the backup solution for now until\nwe can fix pg_dump or wait for postgresql 9.2 to become the official version\n(as long as I don't need a dump and restore to upgrade the db).\n\nIf anyone has more suggestions, I would like to hear them. Thank you!\n\nRegards,\nHugo\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5709975.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 24 May 2012 21:54:05 -0700 (PDT)",
"msg_from": "\"Hugo <Nabble>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
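A sketch of the "dump every schema individually" script mentioned above, in case it is useful as a starting point; the database name and output directory are made up, and separate per-schema dumps are no longer a single consistent snapshot the way one pg_dump run is.

#!/bin/sh
# Dump each user schema into its own custom-format archive (rough sketch).
DB=mydb
OUT=/backups/$(date +%Y%m%d)
mkdir -p "$OUT"

psql -At -d "$DB" -c "SELECT nspname FROM pg_namespace WHERE nspname !~ '^pg_' AND nspname <> 'information_schema'" |
while read schema
do
    pg_dump -Fc -n "$schema" -f "$OUT/$schema.dump" "$DB" || echo "dump failed: $schema" >&2
done

As the timings in this thread show, each pg_dump invocation still pays the full catalog-reading overhead, so this only becomes practical once that per-run cost is fixed.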
{
"msg_contents": "Hi,\n\nOn 25 May 2012 14:54, Hugo <Nabble> <[email protected]> wrote:\n> Thanks for the replies. The number of relations in the database is really\n> high (~500,000) and I don't think we can shrink that. The truth is that\n> schemas bring a lot of advantages to our system and postgresql doesn't show\n> signs of stress with them. So I believe it should also be possible for\n> pg_dump to handle them with the same elegance.\n>\n> If anyone has more suggestions, I would like to hear them. Thank you!\n\nMaybe filesystem level backup could solve this issue:\nhttp://www.postgresql.org/docs/9.1/static/continuous-archiving.html#BACKUP-BASE-BACKUP\n\nbut keep in mind that:\n- it preserves bloat in your database thus backup might need more space\n- you can't restore to different PG version\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Fri, 25 May 2012 15:12:43 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
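For completeness, the filesystem-level base backup Ondrej refers to boils down to something like the following on a 9.0/9.1 server; the paths are placeholders, and archive_mode/archive_command must already be configured as described in the linked documentation.

# Take a base backup of the data directory while the server keeps running.
psql -d postgres -c "SELECT pg_start_backup('nightly', true)"
rsync -a --delete --exclude=pg_xlog /var/lib/postgresql/9.0/main/ /backups/base/
psql -d postgres -c "SELECT pg_stop_backup()"

Restoring means copying the base directory back and adding a recovery.conf that points at the WAL archive, and, as noted above, it can only be restored onto the same major version.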
{
"msg_contents": "\"Hugo <Nabble>\" <[email protected]> writes:\n> If anyone has more suggestions, I would like to hear them. Thank you!\n\nProvide a test case?\n\nWe recently fixed a couple of O(N^2) loops in pg_dump, but those covered\nextremely specific cases that might or might not have anything to do\nwith what you're seeing. The complainant was extremely helpful about\ntracking down the problems:\nhttp://archives.postgresql.org/pgsql-general/2012-03/msg00957.php\nhttp://archives.postgresql.org/pgsql-committers/2012-03/msg00225.php\nhttp://archives.postgresql.org/pgsql-committers/2012-03/msg00230.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2012 10:41:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "On Fri, May 25, 2012 at 10:41:23AM -0400, Tom Lane wrote:\n> \"Hugo <Nabble>\" <[email protected]> writes:\n> > If anyone has more suggestions, I would like to hear them. Thank you!\n> \n> Provide a test case?\n> \n> We recently fixed a couple of O(N^2) loops in pg_dump, but those covered\n> extremely specific cases that might or might not have anything to do\n> with what you're seeing. The complainant was extremely helpful about\n> tracking down the problems:\n> http://archives.postgresql.org/pgsql-general/2012-03/msg00957.php\n> http://archives.postgresql.org/pgsql-committers/2012-03/msg00225.php\n> http://archives.postgresql.org/pgsql-committers/2012-03/msg00230.php\n\nYes, please help us improve this! At this point pg_upgrade is limited\nby the time to dump/restore the database schema, but I can't get users\nto give me any way to debug the speed problems.\n\nSomeone reported pg_upgrade took 45 minutes because of pg_dumpall\n--schema, which is quite long.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Fri, 25 May 2012 11:18:30 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 24, 2012 at 8:54 PM, Bruce Momjian <[email protected]> wrote:\n> On Thu, May 24, 2012 at 08:20:34PM -0700, Jeff Janes wrote:\n\n>> pg_dump itself seems to have a lot of quadratic portions (plus another\n>> one on the server which it hits pretty heavily), and it hard to know\n>> where to start addressing them. It seems like addressing the overall\n>> quadratic nature might be a globally better option, but addressing\n>> just the problem with dumping one schema might be easier to kluge\n>> together.\n>\n> Postgres 9.2 will have some speedups for pg_dump scanning large\n> databases --- that might help.\n\nThose speed ups don't seem to apply here, though. I get the same\nperformance in 9.0.7 as 9.2.beta1.\n\nThere is an operation in pg_dump which is O(#_of_schemata_in_db *\n#_of_table_in_db), or something like that.\n\nThe attached very crude patch reduces that to\nO(log_of_#_of_schemata_in_db * #_of_table_in_db)\n\nI was hoping this would be a general improvement. It doesn't seem be.\n But it is a very substantial improvement in the specific case of\ndumping one small schema out of a very large database.\n\nIt seems like dumping one schema would be better optimized by not\nloading up the entire database catalog, but rather by restricting to\njust that schema at the catalog stage. But I haven't dug into those\ndetails.\n\nFor dumping entire databases, It looks like the biggest problem is\ngoing to be LockReassignCurrentOwner in the server. And that doesn't\nseem to be easy to fix, as any change to it to improve pg_dump will\nrisk degrading normal use cases.\n\nIf we want to be able to efficiently dump entire databases in a\nscalable way, it seems like there should be some way to obtain a\ndata-base-wide AccessShare lock, which blocks AccessExclusive locks on\nany object in the database, and turns ordinary object-level\nAccessShare lock requests into no-ops. I don't think you can get\nhundreds of thousands of simultaneously held and individually recorded\nAccessShare locks without causing bad things to happen.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 25 May 2012 08:40:04 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Fri, May 25, 2012 at 8:18 AM, Bruce Momjian <[email protected]> wrote:\n> On Fri, May 25, 2012 at 10:41:23AM -0400, Tom Lane wrote:\n>> \"Hugo <Nabble>\" <[email protected]> writes:\n>> > If anyone has more suggestions, I would like to hear them. Thank you!\n>>\n>> Provide a test case?\n>>\n>> We recently fixed a couple of O(N^2) loops in pg_dump, but those covered\n>> extremely specific cases that might or might not have anything to do\n>> with what you're seeing. The complainant was extremely helpful about\n>> tracking down the problems:\n>> http://archives.postgresql.org/pgsql-general/2012-03/msg00957.php\n>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00225.php\n>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00230.php\n>\n> Yes, please help us improve this! At this point pg_upgrade is limited\n> by the time to dump/restore the database schema, but I can't get users\n> to give me any way to debug the speed problems.\n\nFor dumping one small schema from a large database, look at the time\nprogression of this:\n\ndropdb foo; createdb foo;\n\nfor f in `seq 0 10000 1000000`; do\n perl -le 'print \"create schema foo$_; create table foo$_.foo (k\ninteger, v integer);\"\n foreach $ARGV[0]..$ARGV[0]+9999' $f | psql -d foo > /dev/null ;\n time pg_dump foo -Fc -n foo1 | wc -c;\ndone >& dump_one_schema_timing\n\nTo show the overall dump speed problem, drop the \"-n foo1\", and change\nthe step size from 10000/9999 down to 1000/999\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 25 May 2012 08:53:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> There is an operation in pg_dump which is O(#_of_schemata_in_db *\n> #_of_table_in_db), or something like that.\n> The attached very crude patch reduces that to\n> O(log_of_#_of_schemata_in_db * #_of_table_in_db)\n\n> I was hoping this would be a general improvement. It doesn't seem be.\n> But it is a very substantial improvement in the specific case of\n> dumping one small schema out of a very large database.\n\nYour test case in\n<CAMkU=1zedM4VyLVyLuVmoekUnUXkXfnGPer+3bvPm-A_9CNYSA@mail.gmail.com>\nshows pretty conclusively that findNamespace is a time sink for large\nnumbers of schemas, so that seems worth fixing. I don't like this\npatch though: we already have infrastructure for this in pg_dump,\nnamely buildIndexArray/findObjectByOid, so what we should do is use\nthat not invent something new. I will go see about doing that.\n\n> It seems like dumping one schema would be better optimized by not\n> loading up the entire database catalog, but rather by restricting to\n> just that schema at the catalog stage.\n\nThe reason pg_dump is not built that way is that considerations like\ndump order dependencies are not going to work at all if it only looks\nat a subset of the database. Of course, dependency chains involving\nobjects not dumped might be problematic anyway, but I'd still want it\nto do the best it could.\n\n> For dumping entire databases, It looks like the biggest problem is\n> going to be LockReassignCurrentOwner in the server. And that doesn't\n> seem to be easy to fix, as any change to it to improve pg_dump will\n> risk degrading normal use cases.\n\nI didn't try profiling the server side, but pg_dump doesn't use\nsubtransactions so it's not clear to me why LockReassignCurrentOwner\nwould get called at all ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2012 12:56:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "On Fri, May 25, 2012 at 9:56 AM, Tom Lane <[email protected]> wrote:\n> Jeff Janes <[email protected]> writes:\n>\n>> For dumping entire databases, It looks like the biggest problem is\n>> going to be LockReassignCurrentOwner in the server. And that doesn't\n>> seem to be easy to fix, as any change to it to improve pg_dump will\n>> risk degrading normal use cases.\n>\n> I didn't try profiling the server side, but pg_dump doesn't use\n> subtransactions so it's not clear to me why LockReassignCurrentOwner\n> would get called at all ...\n\nI thought that every select statement in a repeatable read transaction\nran in a separate \"portal\", and that a portal is a flavor of\nsubtransaction. Anyway, it does show up at the top of a profile of\nthe server, so it is getting called somehow.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 25 May 2012 10:41:19 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> For dumping entire databases, It looks like the biggest problem is\n> going to be LockReassignCurrentOwner in the server. And that doesn't\n> seem to be easy to fix, as any change to it to improve pg_dump will\n> risk degrading normal use cases.\n\n> If we want to be able to efficiently dump entire databases in a\n> scalable way, it seems like there should be some way to obtain a\n> data-base-wide AccessShare lock, which blocks AccessExclusive locks on\n> any object in the database, and turns ordinary object-level\n> AccessShare lock requests into no-ops.\n\nI thought a little bit about that, but it seems fairly unworkable.\nIn the first place, pg_dump doesn't necessarily want lock on every table\nin the database. In the second, such a lock mechanism would have\nlogical difficulties, notably whether it would be considered to apply to\ntables created after the lock request occurs. If it does, then it would\neffectively block all such creations (since creation takes exclusive\nlocks that ought to conflict). If it doesn't, how would you implement\nthat? In any case, we'd be adding significant cost and complexity to\nlock acquisition operations, for something that only whole-database\npg_dump operations could conceivably make use of.\n\nAs far as the specific problem at hand goes, I think there might be a\nless invasive solution. I poked into the behavior with gdb (and you're\nright, LockReassignCurrentOwner does get called during portal drop)\nand noted that although pg_dump is indeed holding thousands of locks,\nany given statement that it issues touches only a few of them. So the\nloop in LockReassignCurrentOwner iterates over the whole lock table but\ndoes something useful at only a few entries.\n\nWe could fix things for this usage pattern with what seems to me to\nbe a pretty low-overhead method: add a fixed-size array to\nResourceOwners in which we can remember up to N LOCALLOCKs, for N around\n10 or so. Add a LOCALLOCK to that array when we add the ResourceOwner to\nthat LOCALLOCK, so long as the array hasn't overflowed. (If the array\ndoes overflow, we mark it as overflowed and stop adding entries.) Then,\nin LockReassignCurrentOwner, we only iterate over the whole hash table\nif the ResourceOwner's array has overflowed. If it hasn't, use the\narray to visit just the LOCALLOCKs that need work.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2012 16:02:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "On Fri, May 25, 2012 at 1:02 PM, Tom Lane <[email protected]> wrote:\n> Jeff Janes <[email protected]> writes:\n>> For dumping entire databases, It looks like the biggest problem is\n>> going to be LockReassignCurrentOwner in the server. And that doesn't\n>> seem to be easy to fix, as any change to it to improve pg_dump will\n>> risk degrading normal use cases.\n>\n>> If we want to be able to efficiently dump entire databases in a\n>> scalable way, it seems like there should be some way to obtain a\n>> data-base-wide AccessShare lock, which blocks AccessExclusive locks on\n>> any object in the database, and turns ordinary object-level\n>> AccessShare lock requests into no-ops.\n>\n> I thought a little bit about that, but it seems fairly unworkable.\n> In the first place, pg_dump doesn't necessarily want lock on every table\n> in the database.\n\nThe database-wide method could be invoked only when there are no\noptions given to pg_dump that limit to a subset. Or does that not\nresolve the objection?\n\n> In the second, such a lock mechanism would have\n> logical difficulties, notably whether it would be considered to apply to\n> tables created after the lock request occurs. If it does, then it would\n> effectively block all such creations (since creation takes exclusive\n> locks that ought to conflict).\n\nThat seems acceptable to me. With unrestricted dump, almost all other\nDDL is locked out already, I don't know that locking out one more\nthing is that big a deal. Especially if there is some way to\ncircumvent the use of that feature.\n\n> If it doesn't, how would you implement\n> that? In any case, we'd be adding significant cost and complexity to\n> lock acquisition operations, for something that only whole-database\n> pg_dump operations could conceivably make use of.\n\nBefore Robert's fast-path locks were developed, I wanted a way to put\nthe server into 'stable schema' mode where AccessExclusive locks were\nforbidden and AccessShared were no-ops, just for performance reasons.\nNow with fast-path, that might no longer be a meaningful feature.\n\nIf databases scale out a lot, won't max_locks_per_transaction, and the\namount of shared memory it would require to keep increasing it, become\na substantial problem?\n\n> As far as the specific problem at hand goes, I think there might be a\n> less invasive solution. I poked into the behavior with gdb (and you're\n> right, LockReassignCurrentOwner does get called during portal drop)\n> and noted that although pg_dump is indeed holding thousands of locks,\n> any given statement that it issues touches only a few of them. So the\n> loop in LockReassignCurrentOwner iterates over the whole lock table but\n> does something useful at only a few entries.\n>\n> We could fix things for this usage pattern with what seems to me to\n> be a pretty low-overhead method: add a fixed-size array to\n> ResourceOwners in which we can remember up to N LOCALLOCKs, for N around\n> 10 or so.\n\nI had thought along these terms too. I think 10 would capture most of\nthe gain. with pg_dump, so far I see a huge number of resource owners\nwith maximum number of locks being 0, 2 or 4, and only a handful with\nmore than 4. Of course I haven't looked at all use cases.\n\nThe reason we want to limit at all is not memory, but rather so that\nexplicitly removing locks doesn't have to dig through a large list to\nfind the specific one to remove, therefore become quadratic in the\ncase that many locks are explicitly removed, right? 
Does anyone ever\nadd a bunch of locks, and then afterward go through and explicitly\nremove them all in FIFO order? I think most users would either remove\nthem LIFO, or drop them in bulk. But better safe than sorry.\n\n> Add a LOCALLOCK to that array when we add the ResourceOwner to\n> that LOCALLOCK, so long as the array hasn't overflowed. (If the array\n> does overflow, we mark it as overflowed and stop adding entries.) Then,\n> in LockReassignCurrentOwner, we only iterate over the whole hash table\n> if the ResourceOwner's array has overflowed. If it hasn't, use the\n> array to visit just the LOCALLOCKs that need work.\n>\n> Comments?\n\nI have some basic parts of this already coded up. I can try to finish\ncoding this up for CF next or next+1. I'm not yet sure how to avoid\nweakening the boundary between resowner.c and lock.c, my original code\nwas pretty ugly there, as it was just a proof of concept.\n\nWhat would be a situation that might be adversely affected by the\noverhead of such a change? I think pgbench -S except implemented in a\nplpgsql loop would probably do it.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 25 May 2012 14:28:02 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "> \"Hugo <Nabble>\" <[email protected]> writes:\n>> If anyone has more suggestions, I would like to hear them. Thank you!\n> \n> Provide a test case?\n> \n> We recently fixed a couple of O(N^2) loops in pg_dump, but those covered\n> extremely specific cases that might or might not have anything to do\n> with what you're seeing. The complainant was extremely helpful about\n> tracking down the problems:\n> http://archives.postgresql.org/pgsql-general/2012-03/msg00957.php\n> http://archives.postgresql.org/pgsql-committers/2012-03/msg00225.php\n> http://archives.postgresql.org/pgsql-committers/2012-03/msg00230.php\n\nI'm wondering if these fixes (or today's commit) include the case for\na database has ~100 thounsands of tables, indexes. One of my customers\nhas had troubles with pg_dump for the database, it takes over 10\nhours.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Sat, 26 May 2012 10:18:40 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "Here is a sample dump that takes a long time to be written by pg_dump:\nhttp://postgresql.1045698.n5.nabble.com/file/n5710183/test.dump.tar.gz\ntest.dump.tar.gz \n(the file above has 2.4Mb, the dump itself has 66Mb)\n\nThis database has 2,311 schemas similar to those in my production database.\nAll schemas are empty, but pg_dump still takes 3 hours to finish it on my\ncomputer. So now you can imagine my production database with more than\n20,000 schemas like that. Can you guys take a look and see if the code has\nroom for improvements? I generated this dump with postgresql 9.1 (which is\nwhat I have on my local computer), but my production database uses\npostgresql 9.0. So it would be great if improvements could be delivered to\nversion 9.0 as well.\n\nThanks a lot for all the help!\n\nHugo\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5710183.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sat, 26 May 2012 21:12:13 -0700 (PDT)",
"msg_from": "\"Hugo <Nabble>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Sat, May 26, 2012 at 9:12 PM, Hugo <Nabble> <[email protected]> wrote:\n> Here is a sample dump that takes a long time to be written by pg_dump:\n> http://postgresql.1045698.n5.nabble.com/file/n5710183/test.dump.tar.gz\n> test.dump.tar.gz\n> (the file above has 2.4Mb, the dump itself has 66Mb)\n>\n> This database has 2,311 schemas similar to those in my production database.\n> All schemas are empty,\n\nThis dump does not reload cleanly. It uses many roles which it\ndoesn't create. Also, the schemata are not empty, they have about 20\ntables apiece.\n\nI created the missing roles with all default options.\n\nDoing a default pg_dump took 66 minutes.\n\n> but pg_dump still takes 3 hours to finish it on my\n> computer. So now you can imagine my production database with more than\n> 20,000 schemas like that. Can you guys take a look and see if the code has\n> room for improvements?\n\nThere is a quadratic behavior in pg_dump's \"mark_create_done\". This\nshould probably be fixed, but in the mean time it can be circumvented\nby using -Fc rather than -Fp for the dump format. Doing that removed\n17 minutes from the run time.\n\nI'm working on a patch to reduce the LockReassignCurrentOwner problem\nin the server when using pg_dump with lots of objects. Using a\npreliminary version for this, in conjunction with -Fc, reduced the\ndump time to 3.5 minutes.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 28 May 2012 14:24:26 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> There is a quadratic behavior in pg_dump's \"mark_create_done\". This\n> should probably be fixed, but in the mean time it can be circumvented\n> by using -Fc rather than -Fp for the dump format. Doing that removed\n> 17 minutes from the run time.\n\nHmm, that would just amount to postponing the work from pg_dump to\npg_restore --- although I suppose it could be a win if the dump is for\nbackup purposes and you probably won't ever have to restore it.\ninhibit_data_for_failed_table() has the same issue, though perhaps it's\nless likely to be exercised; and there is a previously noted O(N^2)\nbehavior for the loop around repoint_table_dependencies.\n\nWe could fix these things by setting up index arrays that map dump ID\nto TocEntry pointer and dump ID of a table to dump ID of its TABLE DATA\nTocEntry. The first of these already exists (tocsByDumpId) but is\ncurrently built only if doing parallel restore. We'd have to build it\nall the time to use it for fixing mark_create_done. Still, the extra\nspace is small compared to the size of the TocEntry data structures,\nso I don't see that that's a serious objection.\n\nI have nothing else to do right now so am a bit tempted to go fix this.\n\n> I'm working on a patch to reduce the LockReassignCurrentOwner problem\n> in the server when using pg_dump with lots of objects.\n\nCool.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 May 2012 18:26:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "Thanks again for the hard work, guys.\n\nWhen I said that the schemas were empty, I was talking about data, not\ntables. So you are right that each schema has ~20 tables (plus indices,\nsequences, etc.), but pretty much no data (maybe one or two rows at most).\nData doesn't seem to be so important in this case (I may be wrong though),\nso the sample database should be enough to find the weak spots that need\nattention.\n\n> but in the mean time it can be circumvented \n> by using -Fc rather than -Fp for the dump format.\n> Doing that removed 17 minutes from the run time. \n\nWe do use -Fc in our production server, but it doesn't help much (dump time\nstill > 24 hours). Actually, I tried several different dump options without\nsuccess. It seems that you guys are very close to great improvements here.\nThanks for everything!\n\nBest,\nHugo\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5710341.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 28 May 2012 22:21:03 -0700 (PDT)",
"msg_from": "\"Hugo <Nabble>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": ">> We recently fixed a couple of O(N^2) loops in pg_dump, but those covered\n>> extremely specific cases that might or might not have anything to do\n>> with what you're seeing. The complainant was extremely helpful about\n>> tracking down the problems:\n>> http://archives.postgresql.org/pgsql-general/2012-03/msg00957.php\n>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00225.php\n>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00230.php\n> \n> I'm wondering if these fixes (or today's commit) include the case for\n> a database has ~100 thounsands of tables, indexes. One of my customers\n> has had troubles with pg_dump for the database, it takes over 10\n> hours.\n\nSo I did qucik test with old PostgreSQL 9.0.2 and current (as of\ncommit 2755abf386e6572bad15cb6a032e504ad32308cc). In a fresh initdb-ed\ndatabase I created 100,000 tables, and each has two integer\nattributes, one of them is a primary key. Creating tables were\nresonably fast as expected (18-20 minutes). This created a 1.4GB\ndatabase cluster.\n\npg_dump dbname >/dev/null took 188 minutes on 9.0.2, which was pretty\nlong time as the customer complained. Now what was current? Well it\ntook 125 minutes. Ps showed that most of time was spent in backend.\n\nBelow is the script to create tables.\n\ncnt=100000\nwhile [ $cnt -gt 0 ]\ndo\npsql -e -p 5432 -c \"create table t$cnt(i int primary key, j int);\" test\ncnt=`expr $cnt - 1`\ndone\n\np.s. You need to increate max_locks_per_transaction before running\npg_dump (I raised to 640 in my case).\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Tue, 29 May 2012 18:51:49 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> So I did qucik test with old PostgreSQL 9.0.2 and current (as of\n> commit 2755abf386e6572bad15cb6a032e504ad32308cc). In a fresh initdb-ed\n> database I created 100,000 tables, and each has two integer\n> attributes, one of them is a primary key. Creating tables were\n> resonably fast as expected (18-20 minutes). This created a 1.4GB\n> database cluster.\n\n> pg_dump dbname >/dev/null took 188 minutes on 9.0.2, which was pretty\n> long time as the customer complained. Now what was current? Well it\n> took 125 minutes. Ps showed that most of time was spent in backend.\n\nYeah, Jeff's experiments indicated that the remaining bottleneck is lock\nmanagement in the server. What I fixed so far on the pg_dump side\nshould be enough to let partial dumps run at reasonable speed even if\nthe whole database contains many tables. But if psql is taking\nAccessShareLock on lots of tables, there's still a problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 May 2012 11:52:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n> management in the server. What I fixed so far on the pg_dump side\n> should be enough to let partial dumps run at reasonable speed even if\n> the whole database contains many tables. But if psql is taking\n> AccessShareLock on lots of tables, there's still a problem.\n\nYes, I saw this kind of lines:\n\n29260 2012-05-30 09:39:19 JST LOG: statement: LOCK TABLE public.t10 IN ACCESS SHARE MODE\n\nIt seems this is not very efficient query since LOCK TABLE can take\nmultiple tables as an argument and we could pass as many tables as\npossible to one LOCK TABLE query. This way we could reduce the\ncommunication between pg_dump and backend.\n\nAlso I noticed lots of queries like these:\n\n29260 2012-05-30 09:39:19 JST LOG: statement: SELECT attname, attacl FROM pg_catalog.pg_attribute WHERE attrelid = '516391' AND NOT attisdropped AND attacl IS NOT NULL ORDER BY attnum\n\nI guess this is for each table and if there are tones of tables these\nqueries are major bottle neck as well as LOCK. I think we could\noptimize somewhat this in that we issue queries to extract info of\nmultiple tables rather than extracting only one table inof as current\nimplementation does.\n\nOr even better we could create a temp table which contains target\ntable oids to join the query above.\n\nIn my opinion, particular use case such as multi tenancy would create\ntons of objects in a database cluster and the performance of pg_dump\nmight be highlighted more in the future.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Wed, 30 May 2012 09:58:16 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n> management in the server. What I fixed so far on the pg_dump side\n> should be enough to let partial dumps run at reasonable speed even if\n> the whole database contains many tables. But if psql is taking\n> AccessShareLock on lots of tables, there's still a problem.\n\nOk, I modified the part of pg_dump where tremendous number of LOCK\nTABLE are issued. I replace them with single LOCK TABLE with multiple\ntables. With 100k tables LOCK statements took 13 minutes in total, now\nit only takes 3 seconds. Comments?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp",
"msg_date": "Wed, 30 May 2012 18:06:20 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": ">> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n>> management in the server. What I fixed so far on the pg_dump side\n>> should be enough to let partial dumps run at reasonable speed even if\n>> the whole database contains many tables. But if psql is taking\n>> AccessShareLock on lots of tables, there's still a problem.\n> \n> Ok, I modified the part of pg_dump where tremendous number of LOCK\n> TABLE are issued. I replace them with single LOCK TABLE with multiple\n> tables. With 100k tables LOCK statements took 13 minutes in total, now\n> it only takes 3 seconds. Comments?\n\nShall I commit to master and all supported branches?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Thu, 31 May 2012 09:20:43 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>> it only takes 3 seconds. Comments?\n\n> Shall I commit to master and all supported branches?\n\nI'm not excited by this patch. It dodges the O(N^2) lock behavior for\nthe initial phase of acquiring the locks, but it does nothing for the\nlock-related slowdown occurring in all pg_dump's subsequent commands.\nI think we really need to get in the server-side fix that Jeff Janes is\nworking on, and then re-measure to see if something like this is still\nworth the trouble. I am also a tad concerned about whether we might not\nhave problems with parsing memory usage, or some such, with thousands of\ntables being listed in a single command.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 May 2012 23:54:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "* Tom Lane ([email protected]) wrote:\n> Tatsuo Ishii <[email protected]> writes:\n> > Shall I commit to master and all supported branches?\n> \n> I'm not excited by this patch. It dodges the O(N^2) lock behavior for\n> the initial phase of acquiring the locks, but it does nothing for the\n> lock-related slowdown occurring in all pg_dump's subsequent commands.\n> I think we really need to get in the server-side fix that Jeff Janes is\n> working on, and then re-measure to see if something like this is still\n> worth the trouble. I am also a tad concerned about whether we might not\n> have problems with parsing memory usage, or some such, with thousands of\n> tables being listed in a single command.\n\nI can't imagine a case where it's actually better to incur the latency\npenalty (which is apparently on the order of *minutes* of additional\ntime here..) than to worry about the potential memory usage of having to\nparse such a command.\n\nIf that's really a concern, where is that threshold, and could we simply\ncap pg_dump's operations based on it? Is 1000 alright? Doing a 'lock'\nw/ 1000 tables at a time is still going to be hugely better than doing\nthem individually and the amount of gain between every-1000 and\nall-at-once is likely to be pretty minimal anyway...\n\nThe current situation where the client-to-server latency accounts for\nmultiple minutes of time is just ridiculous, however, so I feel we need\nsome form of this patch, even if the server side is magically made much\nfaster. The constant back-and-forth isn't cheap.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 31 May 2012 00:01:49 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> The current situation where the client-to-server latency accounts for\n> multiple minutes of time is just ridiculous, however, so I feel we need\n> some form of this patch, even if the server side is magically made much\n> faster. The constant back-and-forth isn't cheap.\n\nNo, you're missing my point. I don't believe that client-to-server\nlatency, or any other O(N) cost, has anything to do with the problem\nhere. The problem, as Jeff has demonstrated, is the O(N^2) costs\nassociated with management of the local lock table. It is utterly\npointless to worry about O(N) costs until that's fixed; and it's just\nwrong to claim that you've created a significant speedup by eliminating\na constant factor when all you've done is staved off occurrences of the\nO(N^2) problem.\n\nOnce we've gotten rid of the local lock table problem, we can re-measure\nand see what the true benefit of this patch is. I'm of the opinion\nthat it will be in the noise compared to the overall runtime of pg_dump.\nI could be wrong, but you won't convince me of that with measurements\ntaken while the local lock table problem is still there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2012 00:18:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "> I'm not excited by this patch. It dodges the O(N^2) lock behavior for\n> the initial phase of acquiring the locks, but it does nothing for the\n> lock-related slowdown occurring in all pg_dump's subsequent commands.\n> I think we really need to get in the server-side fix that Jeff Janes is\n> working on, and then re-measure to see if something like this is still\n> worth the trouble.\n\nWell, even with current backend, locking 100,000 tables has been done\nin 3 seconds in my test. So even if Jeff Janes's fix is succeeded, I\nguess it will just save 3 seconds in my case. and if number of tables\nis smaller, the saving will smaller. This suggests that most of time\nfor processing LOCK has been spent in communication between pg_dump\nand backend. Of course this is just my guess, though.\n\n> I am also a tad concerned about whether we might not\n> have problems with parsing memory usage, or some such, with thousands of\n> tables being listed in a single command.\n\nThat's easy to fix. Just divide each LOCK statements into multiple\nLOCK statements.\n\nMy big concern is, even if the locking part is fixed (either by Jeff\nJane's fix or by me) still much time in pg_dump is spent for SELECTs\nagainst system catalogs. The fix will be turn many SELECTs into single\nSELECT, probably using big IN clause for tables oids.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Thu, 31 May 2012 14:29:01 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": ">>> We recently fixed a couple of O(N^2) loops in pg_dump, but those covered\n>>> extremely specific cases that might or might not have anything to do\n>>> with what you're seeing. The complainant was extremely helpful about\n>>> tracking down the problems:\n>>> http://archives.postgresql.org/pgsql-general/2012-03/msg00957.php\n>>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00225.php\n>>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00230.php\n>> \n>> I'm wondering if these fixes (or today's commit) include the case for\n>> a database has ~100 thounsands of tables, indexes. One of my customers\n>> has had troubles with pg_dump for the database, it takes over 10\n>> hours.\n> \n> So I did qucik test with old PostgreSQL 9.0.2 and current (as of\n> commit 2755abf386e6572bad15cb6a032e504ad32308cc). In a fresh initdb-ed\n> database I created 100,000 tables, and each has two integer\n> attributes, one of them is a primary key. Creating tables were\n> resonably fast as expected (18-20 minutes). This created a 1.4GB\n> database cluster.\n> \n> pg_dump dbname >/dev/null took 188 minutes on 9.0.2, which was pretty\n> long time as the customer complained. Now what was current? Well it\n> took 125 minutes. Ps showed that most of time was spent in backend.\n> \n> Below is the script to create tables.\n> \n> cnt=100000\n> while [ $cnt -gt 0 ]\n> do\n> psql -e -p 5432 -c \"create table t$cnt(i int primary key, j int);\" test\n> cnt=`expr $cnt - 1`\n> done\n> \n> p.s. You need to increate max_locks_per_transaction before running\n> pg_dump (I raised to 640 in my case).\n\nJust for record, I rerun the test again with my single-LOCK patch, and\nnow total runtime of pg_dump is 113 minutes.\n188 minutes(9.0)->125 minutes(git master)->113 minutes(with my patch).\n\nSo far, I'm glad to see 40% time savings at this point.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Thu, 31 May 2012 17:45:26 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas "
},
{
"msg_contents": "On Thu, May 31, 2012 at 10:45 AM, Tatsuo Ishii <[email protected]> wrote:\n> Just for record, I rerun the test again with my single-LOCK patch, and\n> now total runtime of pg_dump is 113 minutes.\n> 188 minutes(9.0)->125 minutes(git master)->113 minutes(with my patch).\n>\n> So far, I'm glad to see 40% time savings at this point.\n\nI see only 9.6% savings (100 * (113/125 - 1)). What am I missing?\n\nCheers\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 31 May 2012 15:38:35 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "> On Thu, May 31, 2012 at 10:45 AM, Tatsuo Ishii <[email protected]> wrote:\n>> Just for record, I rerun the test again with my single-LOCK patch, and\n>> now total runtime of pg_dump is 113 minutes.\n>> 188 minutes(9.0)->125 minutes(git master)->113 minutes(with my patch).\n>>\n>> So far, I'm glad to see 40% time savings at this point.\n> \n> I see only 9.6% savings (100 * (113/125 - 1)). What am I missing?\n\nWhat I meant was (100 * (113/188 - 1)).\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Thu, 31 May 2012 23:07:57 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 31, 2012 at 4:07 PM, Tatsuo Ishii <[email protected]> wrote:\n>> On Thu, May 31, 2012 at 10:45 AM, Tatsuo Ishii <[email protected]> wrote:\n>>> Just for record, I rerun the test again with my single-LOCK patch, and\n>>> now total runtime of pg_dump is 113 minutes.\n>>> 188 minutes(9.0)->125 minutes(git master)->113 minutes(with my patch).\n>>>\n>>> So far, I'm glad to see 40% time savings at this point.\n>>\n>> I see only 9.6% savings (100 * (113/125 - 1)). What am I missing?\n>\n> What I meant was (100 * (113/188 - 1)).\n\nOK, my fault was to assume you wanted to measure only your part, while\napparently you meant overall savings. But Tom had asked for separate\nmeasurements if I understood him correctly. Also, that measurement of\nyour change would go after the O(N^2) fix. It could actually turn out\nto be much more than 9% because the overall time would be reduced even\nmore dramatic. So it might actually be good for your fix to wait a\nbit. ;-)\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 31 May 2012 16:17:11 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 31, 2012 at 11:17 AM, Robert Klemme\n<[email protected]> wrote:\n>\n> OK, my fault was to assume you wanted to measure only your part, while\n> apparently you meant overall savings. But Tom had asked for separate\n> measurements if I understood him correctly. Also, that measurement of\n> your change would go after the O(N^2) fix. It could actually turn out\n> to be much more than 9% because the overall time would be reduced even\n> more dramatic. So it might actually be good for your fix to wait a\n> bit. ;-)\n\nIt's not clear whether Tom is already working on that O(N^2) fix in locking.\n\nI'm asking because it doesn't seem like a complicated patch,\ncontributors may want to get working if not ;-)\n",
"msg_date": "Thu, 31 May 2012 11:22:08 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> It's not clear whether Tom is already working on that O(N^2) fix in locking.\n\nI'm not; Jeff Janes is. But you shouldn't be holding your breath\nanyway, since it's 9.3 material at this point.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2012 10:31:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas "
},
{
"msg_contents": "On Thu, May 31, 2012 at 10:31 AM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>> It's not clear whether Tom is already working on that O(N^2) fix in locking.\n>\n> I'm not; Jeff Janes is. But you shouldn't be holding your breath\n> anyway, since it's 9.3 material at this point.\n\nI agree we can't back-patch that change, but then I think we ought to\nconsider back-patching some variant of Tatsuo's patch. Maybe it's not\nreasonable to thunk an arbitrary number of relation names in there on\none line, but how about 1000 relations per LOCK statement or so? I\nguess we'd need to see how much that erodes the benefit, but we've\ncertainly done back-branch rearrangements in pg_dump in the past to\nfix various kinds of issues, and this is pretty non-invasive.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 31 May 2012 10:41:17 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, May 31, 2012 at 10:31 AM, Tom Lane <[email protected]> wrote:\n>> I'm not; Jeff Janes is. �But you shouldn't be holding your breath\n>> anyway, since it's 9.3 material at this point.\n\n> I agree we can't back-patch that change, but then I think we ought to\n> consider back-patching some variant of Tatsuo's patch. Maybe it's not\n> reasonable to thunk an arbitrary number of relation names in there on\n> one line, but how about 1000 relations per LOCK statement or so? I\n> guess we'd need to see how much that erodes the benefit, but we've\n> certainly done back-branch rearrangements in pg_dump in the past to\n> fix various kinds of issues, and this is pretty non-invasive.\n\nI am not convinced either that this patch will still be useful after\nJeff's fix goes in, or that it provides any meaningful savings when\nyou consider a complete pg_dump run. Yeah, it will make the lock\nacquisition phase faster, but that's not a big part of the runtime\nexcept in very limited scenarios (--schema-only, perhaps).\n\nThe performance patches we applied to pg_dump over the past couple weeks\nwere meant to relieve pain in situations where the big server-side\nlossage wasn't the dominant factor in runtime (ie, partial dumps).\nBut this one is targeting exactly that area, which is why it looks like\na band-aid and not a fix to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2012 10:50:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas "
},
{
"msg_contents": "On Thu, May 31, 2012 at 10:50:51AM -0400, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n> > On Thu, May 31, 2012 at 10:31 AM, Tom Lane <[email protected]> wrote:\n> >> I'm not; Jeff Janes is. �But you shouldn't be holding your breath\n> >> anyway, since it's 9.3 material at this point.\n> \n> > I agree we can't back-patch that change, but then I think we ought to\n> > consider back-patching some variant of Tatsuo's patch. Maybe it's not\n> > reasonable to thunk an arbitrary number of relation names in there on\n> > one line, but how about 1000 relations per LOCK statement or so? I\n> > guess we'd need to see how much that erodes the benefit, but we've\n> > certainly done back-branch rearrangements in pg_dump in the past to\n> > fix various kinds of issues, and this is pretty non-invasive.\n> \n> I am not convinced either that this patch will still be useful after\n> Jeff's fix goes in, or that it provides any meaningful savings when\n> you consider a complete pg_dump run. Yeah, it will make the lock\n> acquisition phase faster, but that's not a big part of the runtime\n> except in very limited scenarios (--schema-only, perhaps).\n\nFYI, that is the pg_upgrade use-case, and pg_dump/restore time is\nreportedly taking the majority of time in many cases.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 31 May 2012 11:00:54 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 31, 2012 at 11:50 AM, Tom Lane <[email protected]> wrote:\n> The performance patches we applied to pg_dump over the past couple weeks\n> were meant to relieve pain in situations where the big server-side\n> lossage wasn't the dominant factor in runtime (ie, partial dumps).\n> But this one is targeting exactly that area, which is why it looks like\n> a band-aid and not a fix to me.\n\nNo, Tatsuo's patch attacks a phase dominated by latency in some\nsetups. That it's also becoming slow currently because of the locking\ncost is irrelevant, with locking sped up, the patch should only\nimprove the phase even further. Imagine the current timeline:\n\n* = locking\n. = waiting\n\n*.*.**.**.***.***.****.****.*****.****\n\nTatsuo's patch converts it to:\n\n*.**************\n\nThe locking fix would turn the timeline into:\n\n*.*.*.*.*.*.*\n\nTatsuo's patch would turn that into:\n\n*******\n\nAnd, as noted before, pg_dump --schema-only is a key bottleneck in pg_upgrade.\n",
"msg_date": "Thu, 31 May 2012 12:01:42 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 31, 2012 at 10:50 AM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, May 31, 2012 at 10:31 AM, Tom Lane <[email protected]> wrote:\n>>> I'm not; Jeff Janes is. But you shouldn't be holding your breath\n>>> anyway, since it's 9.3 material at this point.\n>\n>> I agree we can't back-patch that change, but then I think we ought to\n>> consider back-patching some variant of Tatsuo's patch. Maybe it's not\n>> reasonable to thunk an arbitrary number of relation names in there on\n>> one line, but how about 1000 relations per LOCK statement or so? I\n>> guess we'd need to see how much that erodes the benefit, but we've\n>> certainly done back-branch rearrangements in pg_dump in the past to\n>> fix various kinds of issues, and this is pretty non-invasive.\n>\n> I am not convinced either that this patch will still be useful after\n> Jeff's fix goes in, ...\n\nBut people on older branches are not going to GET Jeff's fix.\n\n> or that it provides any meaningful savings when\n> you consider a complete pg_dump run. Yeah, it will make the lock\n> acquisition phase faster, but that's not a big part of the runtime\n> except in very limited scenarios (--schema-only, perhaps).\n\nThat is not a borderline scenario, as others have also pointed out.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 31 May 2012 11:04:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 31, 2012 at 11:04:12AM -0400, Robert Haas wrote:\n> On Thu, May 31, 2012 at 10:50 AM, Tom Lane <[email protected]> wrote:\n> > Robert Haas <[email protected]> writes:\n> >> On Thu, May 31, 2012 at 10:31 AM, Tom Lane <[email protected]> wrote:\n> >>> I'm not; Jeff Janes is. �But you shouldn't be holding your breath\n> >>> anyway, since it's 9.3 material at this point.\n> >\n> >> I agree we can't back-patch that change, but then I think we ought to\n> >> consider back-patching some variant of Tatsuo's patch. �Maybe it's not\n> >> reasonable to thunk an arbitrary number of relation names in there on\n> >> one line, but how about 1000 relations per LOCK statement or so? �I\n> >> guess we'd need to see how much that erodes the benefit, but we've\n> >> certainly done back-branch rearrangements in pg_dump in the past to\n> >> fix various kinds of issues, and this is pretty non-invasive.\n> >\n> > I am not convinced either that this patch will still be useful after\n> > Jeff's fix goes in, ...\n> \n> But people on older branches are not going to GET Jeff's fix.\n\nFYI, if it got into Postgres 9.2, everyone upgrading to Postgres 9.2\nwould benefit because pg_upgrade uses the new cluster's pg_dumpall.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 31 May 2012 11:06:50 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> On Thu, May 31, 2012 at 11:50 AM, Tom Lane <[email protected]> wrote:\n>> The performance patches we applied to pg_dump over the past couple weeks\n>> were meant to relieve pain in situations where the big server-side\n>> lossage wasn't the dominant factor in runtime (ie, partial dumps).\n>> But this one is targeting exactly that area, which is why it looks like\n>> a band-aid and not a fix to me.\n\n> No, Tatsuo's patch attacks a phase dominated by latency in some\n> setups.\n\nNo, it does not. The reason it's a win is that it avoids the O(N^2)\nbehavior in the server. Whether the bandwidth savings is worth worrying\nabout cannot be proven one way or the other as long as that elephant\nis in the room.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 31 May 2012 11:25:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas "
},
{
"msg_contents": "On Thu, May 31, 2012 at 12:25 PM, Tom Lane <[email protected]> wrote:\n>> No, Tatsuo's patch attacks a phase dominated by latency in some\n>> setups.\n>\n> No, it does not. The reason it's a win is that it avoids the O(N^2)\n> behavior in the server. Whether the bandwidth savings is worth worrying\n> about cannot be proven one way or the other as long as that elephant\n> is in the room.\n>\n> regards, tom lane\n\nI understand that, but if the locking is fixed and made to be O(N)\n(and hence each table locking O(1)), then latency suddenly becomes the\ndominating factor.\n\nI'm thinking, though, pg_upgrade runs locally, contrary to pg_dump\nbackups, so in that case latency would be negligible and Tatsuo's\npatch inconsequential.\n\nI'm also thinking, whether the ResourceOwner patch you've proposed\nwould get negated by Tatsuo's patch, because suddenly a \"portal\"\n(IIRC) has a lot more locks than ResourceOwner could accomodate,\nforcing a reversal to O(N²) behavior. In that case, that patch would\nin fact be detrimental... huh... way to go 180\n",
"msg_date": "Thu, 31 May 2012 12:33:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <[email protected]> wrote:\n>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n>> management in the server. What I fixed so far on the pg_dump side\n>> should be enough to let partial dumps run at reasonable speed even if\n>> the whole database contains many tables. But if psql is taking\n>> AccessShareLock on lots of tables, there's still a problem.\n>\n> Ok, I modified the part of pg_dump where tremendous number of LOCK\n> TABLE are issued. I replace them with single LOCK TABLE with multiple\n> tables. With 100k tables LOCK statements took 13 minutes in total, now\n> it only takes 3 seconds. Comments?\n\nCould you rebase this? I tried doing it myself, but must have messed\nit up because it got slower rather than faster.\n\nThanks,\n\nJeff\n",
"msg_date": "Sun, 10 Jun 2012 16:47:41 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes <[email protected]> wrote:\n> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <[email protected]> wrote:\n>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n>>> management in the server. What I fixed so far on the pg_dump side\n>>> should be enough to let partial dumps run at reasonable speed even if\n>>> the whole database contains many tables. But if psql is taking\n>>> AccessShareLock on lots of tables, there's still a problem.\n>>\n>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>> it only takes 3 seconds. Comments?\n>\n> Could you rebase this? I tried doing it myself, but must have messed\n> it up because it got slower rather than faster.\n\nOK, I found the problem. In fixing a merge conflict, I had it execute\nthe query every time it appended a table, rather than just at the end.\n\nWith my proposed patch in place, I find that for a full default dump\nyour patch is slightly faster with < 300,000 tables, and slightly\nslower with > 300,000. The differences are generally <2% in either\ndirection. When it comes to back-patching and partial dumps, I'm not\nreally sure what to test.\n\nFor the record, there is still a quadratic performance on the server,\nalbeit with a much smaller constant factor than the Reassign one. It\nis in get_tabstat_entry. I don't know if is worth working on that in\nisolation--if PG is going to try to accommodate 100s of thousands of\ntable, there probably needs to be a more general way to limit the\nmemory used by all aspects of the rel caches.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 11 Jun 2012 09:32:52 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes <[email protected]> wrote:\n>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <[email protected]> wrote:\n>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n>>>> management in the server. What I fixed so far on the pg_dump side\n>>>> should be enough to let partial dumps run at reasonable speed even if\n>>>> the whole database contains many tables. But if psql is taking\n>>>> AccessShareLock on lots of tables, there's still a problem.\n>>>\n>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>> it only takes 3 seconds. Comments?\n>>\n>> Could you rebase this? I tried doing it myself, but must have messed\n>> it up because it got slower rather than faster.\n> \n> OK, I found the problem. In fixing a merge conflict, I had it execute\n> the query every time it appended a table, rather than just at the end.\n> \n> With my proposed patch in place, I find that for a full default dump\n> your patch is slightly faster with < 300,000 tables, and slightly\n> slower with > 300,000. The differences are generally <2% in either\n> direction. When it comes to back-patching and partial dumps, I'm not\n> really sure what to test.\n> \n> For the record, there is still a quadratic performance on the server,\n> albeit with a much smaller constant factor than the Reassign one. It\n> is in get_tabstat_entry. I don't know if is worth working on that in\n> isolation--if PG is going to try to accommodate 100s of thousands of\n> table, there probably needs to be a more general way to limit the\n> memory used by all aspects of the rel caches.\n\nI would like to test your patch and w/without my patch. Could you\nplease give me the patches? Or do you have your own git repository?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Tue, 12 Jun 2012 17:54:25 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Tue, Jun 12, 2012 at 1:54 AM, Tatsuo Ishii <[email protected]> wrote:\n>> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes <[email protected]> wrote:\n>>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <[email protected]> wrote:\n>>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n>>>>> management in the server. What I fixed so far on the pg_dump side\n>>>>> should be enough to let partial dumps run at reasonable speed even if\n>>>>> the whole database contains many tables. But if psql is taking\n>>>>> AccessShareLock on lots of tables, there's still a problem.\n>>>>\n>>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>>> it only takes 3 seconds. Comments?\n>>>\n>>> Could you rebase this? I tried doing it myself, but must have messed\n>>> it up because it got slower rather than faster.\n>>\n>> OK, I found the problem. In fixing a merge conflict, I had it execute\n>> the query every time it appended a table, rather than just at the end.\n>>\n>> With my proposed patch in place, I find that for a full default dump\n>> your patch is slightly faster with < 300,000 tables, and slightly\n>> slower with > 300,000. The differences are generally <2% in either\n>> direction. When it comes to back-patching and partial dumps, I'm not\n>> really sure what to test.\n>>\n>> For the record, there is still a quadratic performance on the server,\n>> albeit with a much smaller constant factor than the Reassign one. It\n>> is in get_tabstat_entry. I don't know if is worth working on that in\n>> isolation--if PG is going to try to accommodate 100s of thousands of\n>> table, there probably needs to be a more general way to limit the\n>> memory used by all aspects of the rel caches.\n>\n> I would like to test your patch and w/without my patch. Could you\n> please give me the patches? Or do you have your own git repository?\n\nThe main patch is in the commit fest as \"Resource Owner reassign Locks\nfor the sake of pg_dump\"\n\nMy re-basing of your patch is attached.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 12 Jun 2012 08:33:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "> On Tue, Jun 12, 2012 at 1:54 AM, Tatsuo Ishii <[email protected]> wrote:\n>>> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes <[email protected]> wrote:\n>>>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <[email protected]> wrote:\n>>>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n>>>>>> management in the server. What I fixed so far on the pg_dump side\n>>>>>> should be enough to let partial dumps run at reasonable speed even if\n>>>>>> the whole database contains many tables. But if psql is taking\n>>>>>> AccessShareLock on lots of tables, there's still a problem.\n>>>>>\n>>>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>>>> it only takes 3 seconds. Comments?\n>>>>\n>>>> Could you rebase this? I tried doing it myself, but must have messed\n>>>> it up because it got slower rather than faster.\n>>>\n>>> OK, I found the problem. In fixing a merge conflict, I had it execute\n>>> the query every time it appended a table, rather than just at the end.\n>>>\n>>> With my proposed patch in place, I find that for a full default dump\n>>> your patch is slightly faster with < 300,000 tables, and slightly\n>>> slower with > 300,000. The differences are generally <2% in either\n>>> direction. When it comes to back-patching and partial dumps, I'm not\n>>> really sure what to test.\n>>>\n>>> For the record, there is still a quadratic performance on the server,\n>>> albeit with a much smaller constant factor than the Reassign one. It\n>>> is in get_tabstat_entry. I don't know if is worth working on that in\n>>> isolation--if PG is going to try to accommodate 100s of thousands of\n>>> table, there probably needs to be a more general way to limit the\n>>> memory used by all aspects of the rel caches.\n>>\n>> I would like to test your patch and w/without my patch. Could you\n>> please give me the patches? Or do you have your own git repository?\n> \n> The main patch is in the commit fest as \"Resource Owner reassign Locks\n> for the sake of pg_dump\"\n> \n> My re-basing of your patch is attached.\n\nI tested your patches with current master head. The result was pretty\ngood. Before it took 125 minutes (with 9.2 devel) to dump 100k empty\ntables and now it takes only less than 4 minutes!\n\n$ time pg_dump test >/dev/null\n\nreal\t3m56.412s\nuser\t0m12.059s\nsys\t0m3.571s\n\nGood job!\n\nNow I applied rebased pg_dump patch.\n\nreal\t4m1.779s\nuser\t0m11.621s\nsys\t0m3.052s\n\nUnfortunately I see no improvement. Probably my patch's value is for\ndumping against older backend.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Wed, 13 Jun 2012 10:45:25 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Hi guys,\n\nI just want to let you know that we have created our own solution to dump\nand restore our databases. The tool was written in java and the source is\nnow on Github (MIT license): https://github.com/tig100/JdbcPgBackup\n\nThe main approach was to cache all database objects - schemas, tables,\nindexes, etc., and instead of having postgres do joins between the\npg_catalog tables (which include thousands of schemas in pg_namespace and\nmillions of columns in pg_attribute), we do full table scans and then find\nwhich schema or table an object belongs to by looking it up in a hash map in\njava, based on schema and table oid's. The dump is not transactionally safe,\nso it should be performed on a replica db only (WAL importing disabled), not\non a live db. Some statistics:\n\nDump 11,000 schemas = 3 hours.\nDump 28,000 schemas = 8 hours.\n\nYou can read more about the tool on the github page.\n\nBest regards,\nHugo\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5718532.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 31 Jul 2012 22:33:04 -0700 (PDT)",
"msg_from": "\"Hugo <Nabble>\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, May 31, 2012 at 09:20:43AM +0900, Tatsuo Ishii wrote:\n> >> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock\n> >> management in the server. What I fixed so far on the pg_dump side\n> >> should be enough to let partial dumps run at reasonable speed even if\n> >> the whole database contains many tables. But if psql is taking\n> >> AccessShareLock on lots of tables, there's still a problem.\n> > \n> > Ok, I modified the part of pg_dump where tremendous number of LOCK\n> > TABLE are issued. I replace them with single LOCK TABLE with multiple\n> > tables. With 100k tables LOCK statements took 13 minutes in total, now\n> > it only takes 3 seconds. Comments?\n> \n> Shall I commit to master and all supported branches?\n\nWas this applied?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Thu, 30 Aug 2012 16:44:33 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Thu, May 31, 2012 at 09:20:43AM +0900, Tatsuo Ishii wrote:\n>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>> it only takes 3 seconds. Comments?\n\n>> Shall I commit to master and all supported branches?\n\n> Was this applied?\n\nNo, we fixed the server side instead.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 30 Aug 2012 16:51:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, Aug 30, 2012 at 04:51:56PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Thu, May 31, 2012 at 09:20:43AM +0900, Tatsuo Ishii wrote:\n> >>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n> >>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n> >>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n> >>> it only takes 3 seconds. Comments?\n> \n> >> Shall I commit to master and all supported branches?\n> \n> > Was this applied?\n> \n> No, we fixed the server side instead.\n\nAgain, thanks. I knew we fixed the server, but wasn't clear that made\nthe client changes unnecessary, but I think I now do remember discussion\nabout that.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Thu, 30 Aug 2012 16:53:39 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, Aug 30, 2012 at 4:51 PM, Tom Lane <[email protected]> wrote:\n> Bruce Momjian <[email protected]> writes:\n>> On Thu, May 31, 2012 at 09:20:43AM +0900, Tatsuo Ishii wrote:\n>>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>>> it only takes 3 seconds. Comments?\n>\n>>> Shall I commit to master and all supported branches?\n>\n>> Was this applied?\n>\n> No, we fixed the server side instead.\n\nBut only for 9.2, right? So people running back branches are still screwed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 30 Aug 2012 17:45:24 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Aug 30, 2012 at 4:51 PM, Tom Lane <[email protected]> wrote:\n>> Bruce Momjian <[email protected]> writes:\n>>> On Thu, May 31, 2012 at 09:20:43AM +0900, Tatsuo Ishii wrote:\n>>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>>> it only takes 3 seconds. Comments?\n\n>>> Was this applied?\n\n>> No, we fixed the server side instead.\n\n> But only for 9.2, right? So people running back branches are still screwed.\n\nYeah, but they're screwed anyway, because there are a bunch of O(N^2)\nbehaviors involved here, not all of which are masked by what Tatsuo-san\nsuggested.\n\nSix months or a year from now, we might have enough confidence in that\nbatch of 9.2 fixes to back-port them en masse. Don't want to do it\ntoday though.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 30 Aug 2012 23:17:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, Aug 30, 2012 at 8:17 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Aug 30, 2012 at 4:51 PM, Tom Lane <[email protected]> wrote:\n>>> Bruce Momjian <[email protected]> writes:\n>>>> On Thu, May 31, 2012 at 09:20:43AM +0900, Tatsuo Ishii wrote:\n>>>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>>>> it only takes 3 seconds. Comments?\n>\n>>>> Was this applied?\n>\n>>> No, we fixed the server side instead.\n>\n>> But only for 9.2, right? So people running back branches are still screwed.\n>\n> Yeah, but they're screwed anyway, because there are a bunch of O(N^2)\n> behaviors involved here, not all of which are masked by what Tatsuo-san\n> suggested.\n\nAll of the other ones that I know of were associated with pg_dump\nitself, and since it is recommended to run the newer version of\npg_dump against the older version of the server, no back patching\nwould be necessary to get the benefits of those particular fixes.\n\n> Six months or a year from now, we might have enough confidence in that\n> batch of 9.2 fixes to back-port them en masse. Don't want to do it\n> today though.\n\n\nWhat would be the recommendation for people trying to upgrade, but who\ncan't get their data out in a reasonable window?\n\nPutting Tatsuo-san's change into a future pg_dump might be more\nconservative than back-porting the server's Lock Table change to the\nserver version they are trying to get rid of.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Sun, 2 Sep 2012 14:39:27 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Sun, Sep 2, 2012 at 5:39 PM, Jeff Janes <[email protected]> wrote:\n> On Thu, Aug 30, 2012 at 8:17 PM, Tom Lane <[email protected]> wrote:\n>> Robert Haas <[email protected]> writes:\n>>> On Thu, Aug 30, 2012 at 4:51 PM, Tom Lane <[email protected]> wrote:\n>>>> Bruce Momjian <[email protected]> writes:\n>>>>> On Thu, May 31, 2012 at 09:20:43AM +0900, Tatsuo Ishii wrote:\n>>>>>> Ok, I modified the part of pg_dump where tremendous number of LOCK\n>>>>>> TABLE are issued. I replace them with single LOCK TABLE with multiple\n>>>>>> tables. With 100k tables LOCK statements took 13 minutes in total, now\n>>>>>> it only takes 3 seconds. Comments?\n>>\n>>>>> Was this applied?\n>>\n>>>> No, we fixed the server side instead.\n>>\n>>> But only for 9.2, right? So people running back branches are still screwed.\n>>\n>> Yeah, but they're screwed anyway, because there are a bunch of O(N^2)\n>> behaviors involved here, not all of which are masked by what Tatsuo-san\n>> suggested.\n>\n> All of the other ones that I know of were associated with pg_dump\n> itself, and since it is recommended to run the newer version of\n> pg_dump against the older version of the server, no back patching\n> would be necessary to get the benefits of those particular fixes.\n>\n>> Six months or a year from now, we might have enough confidence in that\n>> batch of 9.2 fixes to back-port them en masse. Don't want to do it\n>> today though.\n>\n>\n> What would be the recommendation for people trying to upgrade, but who\n> can't get their data out in a reasonable window?\n>\n> Putting Tatsuo-san's change into a future pg_dump might be more\n> conservative than back-porting the server's Lock Table change to the\n> server version they are trying to get rid of.\n\nWhat he said.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 3 Sep 2012 00:37:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] pg_dump and thousands of schemas"
},
{
"msg_contents": "I've read all the posts in thread, and as I understood in version 9.2 some\npatches were applied to improve pg_dump speed. I've just installed\nPostgreSQL 9.2.1 and I still have the same problem. I have a database with\n2600 schemas in it. I try to dump each schema individually, but it takes too\nmuch time for every schema (about 30-40 seconds per schema, no matter what\nthe data size is). Also for each schema dump I have a slow query log entry,\nhere is an example:\n\n>2012-11-06 13:15:32 GMTLOG: duration: 12029.334 ms statement: SELECT\nc.tableoid, c.oid, c.relname, c.relacl, c.relkind, c.relnamespace, (SELECT\nrolname FROM pg_catalog.pg_roles WHERE oid = c.relowner) AS rolname,\nc.relchecks, c.relhastriggers, c.relhasindex, c.relhasrules, c.relhasoids,\nc.relfrozenxid, tc.oid AS toid, tc.relfrozenxid AS tfrozenxid,\nc.relpersistence, CASE WHEN c.reloftype <> 0 THEN\nc.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, d.refobjid AS\nowning_tab, d.refobjsubid AS owning_col, (SELECT spcname FROM pg_tablespace\nt WHERE t.oid = c.reltablespace) AS reltablespace,\narray_to_string(c.reloptions, ', ') AS reloptions,\narray_to_string(array(SELECT 'toast.' || x FROM unnest(tc.reloptions) x), ',\n') AS toast_reloptions FROM pg_class c LEFT JOIN pg_depend d ON (c.relkind =\n'S' AND d.classid = c.tableoid AND d.objid = c.oid AND d.objsubid = 0 AND\nd.refclassid = c.tableoid AND d.deptype = 'a') LEFT JOIN pg_class tc ON\n(c.reltoastrelid = tc.oid) WHERE c.relkind in ('r', 'S', 'v', 'c', 'f')\nORDER BY c.oid\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5730864.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Tue, 6 Nov 2012 06:16:14 -0800 (PST)",
"msg_from": "Denis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Denis <[email protected]> writes:\n> I've read all the posts in thread, and as I understood in version 9.2 some\n> patches were applied to improve pg_dump speed. I've just installed\n> PostgreSQL 9.2.1 and I still have the same problem. I have a database with\n> 2600 schemas in it. I try to dump each schema individually, but it takes too\n> much time for every schema (about 30-40 seconds per schema, no matter what\n> the data size is).\n\nCould you provide a test case for that? Maybe the output of pg_dump -s,\nanonymized as you see fit?\n\n> Also for each schema dump I have a slow query log entry,\n\nCould you provide EXPLAIN ANALYZE output for that query?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 06 Nov 2012 10:07:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Tom Lane-2 wrote\n> Denis <\n\n> socsam@\n\n> > writes:\n>> I've read all the posts in thread, and as I understood in version 9.2\n>> some\n>> patches were applied to improve pg_dump speed. I've just installed\n>> PostgreSQL 9.2.1 and I still have the same problem. I have a database\n>> with\n>> 2600 schemas in it. I try to dump each schema individually, but it takes\n>> too\n>> much time for every schema (about 30-40 seconds per schema, no matter\n>> what\n>> the data size is).\n> \n> Could you provide a test case for that? Maybe the output of pg_dump -s,\n> anonymized as you see fit?\n> \n>> Also for each schema dump I have a slow query log entry,\n> \n> Could you provide EXPLAIN ANALYZE output for that query?\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list (\n\n> pgsql-performance@\n\n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nHere is the output of EXPLAIN ANALYZE. It took 5 seconds but usually it\ntakes from 10 to 15 seconds when I am doing backup.\n\nSort (cost=853562.04..854020.73 rows=183478 width=219) (actual\ntime=5340.477..5405.604 rows=183924 loops=1)\n Sort Key: c.oid\n Sort Method: external merge Disk: 33048kB\n -> Hash Left Join (cost=59259.80..798636.25 rows=183478 width=219)\n(actual time=839.297..4971.299 rows=183924 loops=1)\n Hash Cond: (c.reltoastrelid = tc.oid)\n -> Hash Right Join (cost=29530.77..146976.65 rows=183478\nwidth=183) (actual time=404.959..3261.462 rows=183924 loops=1\n)\n Hash Cond: ((d.classid = c.tableoid) AND (d.objid = c.oid)\nAND (d.refclassid = c.tableoid))\n Join Filter: (c.relkind = 'S'::\"char\")\n -> Seq Scan on pg_depend d (cost=0.00..71403.54 rows=995806\nwidth=20) (actual time=1.137..878.571 rows=998642 lo\nops=1)\n Filter: ((objsubid = 0) AND (deptype = 'a'::\"char\"))\n Rows Removed by Filter: 2196665\n -> Hash (cost=21839.91..21839.91 rows=183478 width=175)\n(actual time=402.947..402.947 rows=183924 loops=1)\n Buckets: 1024 Batches: 32 Memory Usage: 876kB\n -> Seq Scan on pg_class c (cost=0.00..21839.91\nrows=183478 width=175) (actual time=0.017..267.614 rows=183\n924 loops=1)\n Filter: (relkind = ANY ('{r,S,v,c,f}'::\"char\"[]))\n Rows Removed by Filter: 383565\n -> Hash (cost=18333.79..18333.79 rows=560979 width=40) (actual\ntime=434.258..434.258 rows=567489 loops=1)\n Buckets: 4096 Batches: 32 Memory Usage: 703kB\n -> Seq Scan on pg_class tc (cost=0.00..18333.79 rows=560979\nwidth=40) (actual time=0.003..273.418 rows=567489 lo\nops=1)\n SubPlan 1\n -> Seq Scan on pg_authid (cost=0.00..1.01 rows=1 width=68)\n(actual time=0.001..0.001 rows=1 loops=183924)\n Filter: (oid = c.relowner)\n Rows Removed by Filter: 2\n SubPlan 2\n -> Seq Scan on pg_tablespace t (cost=0.00..1.02 rows=1\nwidth=64) (actual time=0.001..0.001 rows=0 loops=183924)\n Filter: (oid = c.reltablespace)\n Rows Removed by Filter: 2\n SubPlan 3\n -> Function Scan on unnest x (cost=0.00..1.25 rows=100\nwidth=32) (actual time=0.001..0.001 rows=0 loops=183924)\n Total runtime: 5428.498 ms\n\nHere is the output of \"pg_dump -s\" test.dump\n<http://postgresql.1045698.n5.nabble.com/file/n5730877/test.dump> \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5730877.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Tue, 6 Nov 2012 07:40:21 -0800 (PST)",
"msg_from": "Denis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Denis <[email protected]> writes:\n> Here is the output of EXPLAIN ANALYZE. It took 5 seconds but usually it\n> takes from 10 to 15 seconds when I am doing backup.\n\n> Sort (cost=853562.04..854020.73 rows=183478 width=219) (actual\n> time=5340.477..5405.604 rows=183924 loops=1)\n\nHmmm ... so the problem here isn't that you've got 2600 schemas, it's\nthat you've got 183924 tables. That's going to take some time no matter\nwhat.\n\nIt does seem like we could make some small changes to optimize that\nquery a little bit, but they're not going to result in any amazing\nimprovement overall, because pg_dump still has to deal with all the\ntables it's getting back. Fundamentally, I would ask whether you really\nneed so many tables. It seems pretty likely that you have lots and lots\nof basically-identical tables. Usually it would be better to redesign\nsuch a structure into fewer tables with more index columns.\n\n> Here is the output of \"pg_dump -s\" test.dump\n> <http://postgresql.1045698.n5.nabble.com/file/n5730877/test.dump> \n\nThis dump contains only 1 schema and 43 tables, so I don't think it's\nfor the database you're having trouble with ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 06 Nov 2012 12:51:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Tom Lane-2 wrote\n> Denis <\n\n> socsam@\n\n> > writes:\n>> Here is the output of EXPLAIN ANALYZE. It took 5 seconds but usually it\n>> takes from 10 to 15 seconds when I am doing backup.\n> \n>> Sort (cost=853562.04..854020.73 rows=183478 width=219) (actual\n>> time=5340.477..5405.604 rows=183924 loops=1)\n> \n> Hmmm ... so the problem here isn't that you've got 2600 schemas, it's\n> that you've got 183924 tables. That's going to take some time no matter\n> what.\n> \n> It does seem like we could make some small changes to optimize that\n> query a little bit, but they're not going to result in any amazing\n> improvement overall, because pg_dump still has to deal with all the\n> tables it's getting back. Fundamentally, I would ask whether you really\n> need so many tables. It seems pretty likely that you have lots and lots\n> of basically-identical tables. Usually it would be better to redesign\n> such a structure into fewer tables with more index columns.\n> \n>> Here is the output of \"pg_dump -s\" test.dump\n>> <http://postgresql.1045698.n5.nabble.com/file/n5730877/test.dump> \n> \n> This dump contains only 1 schema and 43 tables, so I don't think it's\n> for the database you're having trouble with ...\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list (\n\n> pgsql-performance@\n\n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nI wonder why pg_dump has to have deal with all these 183924 tables, if I\nspecified to dump only one scheme: \"pg_dump -n schema_name\" or even like\nthis to dump just one table \"pg_dump -t 'schema_name.comments' \" ?\n\nWe have a web application where we create a schema with a number of tables\nin it for each customer. This architecture was chosen to ease the process of\nbackup/restoring data. Sometimes clients ask us to restore data for the last\nmonth or roll back to last week's state. This task is easy to accomplish\nthen the client's data is isolated in a schema/DB. If we put all the clients\ndata in one table - operations of this kind will be much harder to perform.\nWe will have to restore a huge DB with an enormously large tables in it to\nfind the requested data. \nDifferent clients have different activity rate and we can select different\nbackup strategies according to it. This would be impossible in case we keep\nall the clients data in one table. \nBesides all the above mentioned, the probability of massive data corruption\n(if an error in our web application occurs) is much higher.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5730998.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 7 Nov 2012 02:42:52 -0800 (PST)",
"msg_from": "Denis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Denis <[email protected]> writes:\n> Tom Lane-2 wrote\n>> Hmmm ... so the problem here isn't that you've got 2600 schemas, it's\n>> that you've got 183924 tables. That's going to take some time no matter\n>> what.\n\n> I wonder why pg_dump has to have deal with all these 183924 tables, if I\n> specified to dump only one scheme: \"pg_dump -n schema_name\" or even like\n> this to dump just one table \"pg_dump -t 'schema_name.comments' \" ?\n\nIt has to know about all the tables even if it's not going to dump them\nall, for purposes such as dependency analysis.\n\n> We have a web application where we create a schema with a number of tables\n> in it for each customer. This architecture was chosen to ease the process of\n> backup/restoring data.\n\nI find that argument fairly dubious, but in any case you should not\nimagine that hundreds of thousands of tables are going to be cost-free.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 07 Nov 2012 10:02:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Tom Lane-2 wrote\n> Denis <\n\n> socsam@\n\n> > writes:\n>> Tom Lane-2 wrote\n>>> Hmmm ... so the problem here isn't that you've got 2600 schemas, it's\n>>> that you've got 183924 tables. That's going to take some time no matter\n>>> what.\n> \n>> I wonder why pg_dump has to have deal with all these 183924 tables, if I\n>> specified to dump only one scheme: \"pg_dump -n schema_name\" or even like\n>> this to dump just one table \"pg_dump -t 'schema_name.comments' \" ?\n> \n> It has to know about all the tables even if it's not going to dump them\n> all, for purposes such as dependency analysis.\n> \n>> We have a web application where we create a schema with a number of\n>> tables\n>> in it for each customer. This architecture was chosen to ease the process\n>> of\n>> backup/restoring data.\n> \n> I find that argument fairly dubious, but in any case you should not\n> imagine that hundreds of thousands of tables are going to be cost-free.\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list (\n\n> pgsql-performance@\n\n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nStill I can't undesrtand why pg_dump has to know about all the tables? For\nexample I have such an easy table \nCREATE TABLE \"CLog\" (\n \"fromUser\" integer,\n \"toUser\" integer,\n message character varying(2048) NOT NULL,\n \"dateSend\" timestamp without time zone NOT NULL\n);\nno foreign keys, it doesn't use partitioning, it doesn't have any relations\nto any other table. Why pg_dump has to gother information about ALL the\ntables in the database just to dump one this table?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5731188.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Thu, 8 Nov 2012 01:04:34 -0800 (PST)",
"msg_from": "Denis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Thu, Nov 8, 2012 at 1:04 AM, Denis <[email protected]> wrote:\n>\n> Still I can't undesrtand why pg_dump has to know about all the tables?\n\nStrictly speaking it probably doesn't need to. But it is primarily\ndesigned for dumping entire databases, and the efficient way to do\nthat is to read it all into memory in a few queries and then sort out\nthe dependencies, rather than tracking down every dependency\nindividually with one or more trips back to the database. (Although\nit still does make plenty of trips back to the database per\ntable/sequence, for acls, defaults, attributes.\n\nIf you were to rewrite pg_dump from the ground up to achieve your\nspecific needs (dumping one schema, with no dependencies between to\nother schemata) you could probably make it much more efficient. But\nthen it wouldn't be pg_dump, it would be something else.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 9 Nov 2012 12:47:37 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "Jeff Janes wrote\n> On Thu, Nov 8, 2012 at 1:04 AM, Denis <\n\n> socsam@\n\n> > wrote:\n>>\n>> Still I can't undesrtand why pg_dump has to know about all the tables?\n> \n> Strictly speaking it probably doesn't need to. But it is primarily\n> designed for dumping entire databases, and the efficient way to do\n> that is to read it all into memory in a few queries and then sort out\n> the dependencies, rather than tracking down every dependency\n> individually with one or more trips back to the database. (Although\n> it still does make plenty of trips back to the database per\n> table/sequence, for acls, defaults, attributes.\n> \n> If you were to rewrite pg_dump from the ground up to achieve your\n> specific needs (dumping one schema, with no dependencies between to\n> other schemata) you could probably make it much more efficient. But\n> then it wouldn't be pg_dump, it would be something else.\n> \n> Cheers,\n> \n> Jeff\n> \n> \n> -- \n> Sent via pgsql-performance mailing list (\n\n> pgsql-performance@\n\n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nPlease don't think that I'm trying to nitpick here, but pg_dump has options\nfor dumping separate tables and that's not really consistent with the idea\nthat \"pg_dump is primarily designed for dumping entire databases\".\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5731900.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Tue, 13 Nov 2012 19:12:39 -0800 (PST)",
"msg_from": "Denis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "On Tue, Nov 13, 2012 at 7:12 PM, Denis <[email protected]> wrote:\n> Jeff Janes wrote\n>> On Thu, Nov 8, 2012 at 1:04 AM, Denis <\n>\n>> socsam@\n>\n>> > wrote:\n>>>\n>>> Still I can't undesrtand why pg_dump has to know about all the tables?\n>>\n>> Strictly speaking it probably doesn't need to. But it is primarily\n>> designed for dumping entire databases, and the efficient way to do\n>> that is to read it all into memory in a few queries and then sort out\n>> the dependencies, rather than tracking down every dependency\n>> individually with one or more trips back to the database. (Although\n>> it still does make plenty of trips back to the database per\n>> table/sequence, for acls, defaults, attributes.\n>>\n>> If you were to rewrite pg_dump from the ground up to achieve your\n>> specific needs (dumping one schema, with no dependencies between to\n>> other schemata) you could probably make it much more efficient. But\n>> then it wouldn't be pg_dump, it would be something else.\n>>\n>\n> Please don't think that I'm trying to nitpick here, but pg_dump has options\n> for dumping separate tables and that's not really consistent with the idea\n> that \"pg_dump is primarily designed for dumping entire databases\".\n\nI think it is compatible. From my reading of pg_dump, those other\noptions seem to have been bolted on as an afterthought, not as part of\nits primary design.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Tue, 13 Nov 2012 19:40:57 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
},
{
"msg_contents": "\nOn 11/13/2012 10:12 PM, Denis wrote:\n> Please don't think that I'm trying to nitpick here, but pg_dump has options\n> for dumping separate tables and that's not really consistent with the idea\n> that \"pg_dump is primarily designed for dumping entire databases\".\n>\n>\n\n\nSure it is. The word \"primarily\" is not just a noise word here.\n\nThe fact that we have options to do other things doesn't mean that its \nprimary design goal has changed.\n\n\ncheers\n\nandrew\n\n",
"msg_date": "Tue, 13 Nov 2012 22:56:16 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] pg_dump and thousands of schemas"
}
] |
[
{
"msg_contents": "\nDear Andy ,\n\nFollowing the discussion on load average we are now investigating on some \nother parts of the stack (other than db). \n\nEssentially we are bumping up the limits (on appserver) so that more requests \ngoes to the DB server.\n\n\n| \n| Maybe you are hitting some locks? If its not IO and not CPU then\n| maybe something is getting locked and queries are piling up.\n\n\n\n\n| \n| -Andy\n",
"msg_date": "Thu, 24 May 2012 18:28:07 +0530 (IST)",
"msg_from": "\"Rajesh Kumar. Mallah\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
},
{
"msg_contents": "On 05/24/2012 05:58 AM, Rajesh Kumar. Mallah wrote:\n> Dear Andy ,\n>\n> Following the discussion on load average we are now investigating on some\n> other parts of the stack (other than db).\n>\n> Essentially we are bumping up the limits (on appserver) so that more requests\n> goes to the DB server.\nWhich leads to the question: what, other than the db, runs on this machine?\n\nCheers,\nSteve\n",
"msg_date": "Thu, 24 May 2012 08:53:47 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
},
{
"msg_contents": " \n| From: \"Steve Crawford\" <[email protected]>\n| To: \"Rajesh Kumar. Mallah\" <[email protected]>\n| Cc: \"Andy Colson\" <[email protected]>, \"Claudio Freire\" <[email protected]>, [email protected]\n| Sent: Thursday, May 24, 2012 9:23:47 PM\n| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and CPU is idle\n|\n| On 05/24/2012 05:58 AM, Rajesh Kumar. Mallah wrote:\n| > Dear Andy ,\n| >\n| > Following the discussion on load average we are now investigating\n| on some\n| > other parts of the stack (other than db).\n| >\n| > Essentially we are bumping up the limits (on appserver) so that more\n| requests\n| > goes to the DB server.\n| Which leads to the question: what, other than the db, runs on this\n| machine?\n\nNo nothing else runs on *this* machine. \nWe are lucky to have such a beefy hardware dedicated to postgres :)\nWe have a separate machine for application server that has 2 tiers.\nI am trying to reach to the point to max out the db machine , for that\nto happen we need to work on the other parts.\n\nregds\nmallah.\n\n\n| \n| Cheers,\n| Steve\n",
"msg_date": "Thu, 24 May 2012 22:10:26 +0530 (IST)",
"msg_from": "\"Rajesh Kumar. Mallah\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
}
] |
[
{
"msg_contents": "On Fri, May 25, 2012 at 9:04 AM, Craig James <[email protected]> wrote:\n\n> On Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg <[email protected]>\n> wrote:\n>\n>> On Sun, May 13, 2012 at 10:01 AM, Craig James <[email protected]>\n>> wrote:\n>>\n>>>\n>>> On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин <[email protected]>\n>>> wrote:\n>>>\n>>>>\n>>>> The sequences AFAIK are accounted as relations. Large list of relations\n>>>> may slowdown different system utilities like vacuuming (or may not, depends\n>>>> on queries and indexes on pg_class).\n>>>>\n>>>\n>>> Not \"may slow down.\" Change that to \"will slow down and possibly\n>>> corrupt\" your system.\n>>>\n>>> In my experience (PG 8.4.x), the system can handle in the neighborhood\n>>> of 100,000 relations pretty well. Somewhere over 1,000,000 relations, the\n>>> system becomes unusable. It's not that it stops working -- day-to-day\n>>> operations such as querying your tables and running your applications\n>>> continue to work. But system operations that have to scan for table\n>>> information seem to freeze (maybe they run out of memory, or are\n>>> encountering an O(N^2) operation and simply cease to complete).\n>>>\n>>\n>> Glad I found this thread.\n>>\n>> Is this 1M relation mark for the whole database cluster or just for a\n>> single database within the cluster?\n>>\n>\n> I don't know. When I discovered this, our system only had a few dozen\n> databases, and I never conducted any experiments. We had to write our own\n> version of pg_dump to get the data out of the damaged system, and then\n> reload from scratch. And it's not a \"hard\" number. Even at a million\n> relation things work ... they just bog down dramatically. By the time I\n> got to 5 million relations (a rogue script was creating 50,000 tables per\n> day and not cleaning up), the system was effectively unusable.\n>\n\nThis is somewhat disturbing. I need to throw out a question I hope for an\nanswer.\nHas anyone ever witnessed similar behavior with a large number of\nrelations? Slow? Corruption?\n\n-Greg",
"msg_date": "Fri, 25 May 2012 09:52:22 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Millions of relations (from Maximum number of sequences that can be\n\tcreated)"
}
] |
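The thread above puts the trouble zone somewhere past a few hundred thousand pg_class entries. Below is a minimal SQL sketch for gauging where a single database stands; it is purely illustrative, uses only the system catalogs, and is not taken from the thread itself.

    -- How many relations of each kind does this database hold?
    -- (e.g. 'r' = table, 'i' = index, 'S' = sequence, 't' = TOAST table)
    SELECT relkind, count(*) AS relations
      FROM pg_class
     GROUP BY relkind
     ORDER BY count(*) DESC;

    -- Per-schema breakdown, handy for spotting a runaway table-creating script:
    SELECT n.nspname AS schema_name, count(*) AS relations
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     GROUP BY n.nspname
     ORDER BY relations DESC;

Both queries count only the current database; the cluster-wide total is the sum over all databases.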
[
{
"msg_contents": "Hello,\n\nI'm wondering if there is ia document describing which guarantees (if\nany) PostgreSQL makes about concurrency for various operations? Speaking\nin general (i.e. IO can handle it, number of CPU cores and client\nthreads is optimal), are fully concurrent operations (independant and\nnon-blocking) possible for:\n\n1) An unindexed table?\n\n2) A table with 1+ ordinary (default btree) indexes? (are there\nconstraints on the structure and number of indexes?)\n\n3) A table with 1+ unique indexes?\n\n4) A table with other objects on it (foreign keys, check constraints, etc.)?\n\n\n",
"msg_date": "Sat, 26 May 2012 00:04:58 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel (concurrent) inserts?"
},
{
"msg_contents": "\n> I'm wondering if there is ia document describing which guarantees (if\n> any) PostgreSQL makes about concurrency for various operations? Speaking\n> in general (i.e. IO can handle it, number of CPU cores and client\n> threads is optimal), are fully concurrent operations (independant and\n> non-blocking) possible for:\n\nYes, there's quite a bit of documentation. Start here:\nhttp://www.postgresql.org/docs/9.1/static/mvcc.html\n\n> 1) An unindexed table?\n\nYes.\n\n> 2) A table with 1+ ordinary (default btree) indexes? (are there\n> constraints on the structure and number of indexes?)\n\nYes.\n\n> 3) A table with 1+ unique indexes?\n\nYes. Note that locking on the unique index may degrade concurrent\nthroughput of the input stream though, since we have to make sure you're\nnot inserting two different rows with the same unique indexed value\nsimulatenously. It's work as you expect, though.\n\n> 4) A table with other objects on it (foreign keys, check constraints, etc.)?\n\nYes. Also concurrent autonumber (sequence) allocations work fine.\n\nIn fact, PostgreSQL has no non-concurrent mode, unless you count temp\ntables.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 25 May 2012 16:13:54 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel (concurrent) inserts?"
},
{
"msg_contents": "On Fri, May 25, 2012 at 3:04 PM, Ivan Voras <[email protected]> wrote:\n> Hello,\n>\n> I'm wondering if there is ia document describing which guarantees (if\n> any) PostgreSQL makes about concurrency for various operations? Speaking\n> in general (i.e. IO can handle it, number of CPU cores and client\n> threads is optimal), are fully concurrent operations (independant and\n> non-blocking) possible for:\n\nBy \"fully concurrent\" do you mean that there is no detectable\nsub-linear scaling at all? I'm pretty sure that no such guarantees\ncan be made.\n\n> 1) An unindexed table?\n\nFor concurrent bulk loads, there was severe contention on generating\nWAL between concurrent bulk loaders. That is greatly improved in the\nupcoming 9.2 release.\n\n> 2) A table with 1+ ordinary (default btree) indexes? (are there\n> constraints on the structure and number of indexes?)\n\nThis will be limited by contention for generating WAL. (Unless\nsomething else limits it first)\n\n> 3) A table with 1+ unique indexes?\n\nIf more than one transaction attempts to insert the same value, one\nhas to block until the other either commits or rollbacks.\n\n> 4) A table with other objects on it (foreign keys, check constraints, etc.)?\n\nSame as above. If they try to do conflicting things, they don't\ncontinue operating concurrently.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 25 May 2012 16:36:48 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel (concurrent) inserts?"
},
{
"msg_contents": "On 26 May 2012 01:36, Jeff Janes <[email protected]> wrote:\n> On Fri, May 25, 2012 at 3:04 PM, Ivan Voras <[email protected]> wrote:\n>> Hello,\n>>\n>> I'm wondering if there is ia document describing which guarantees (if\n>> any) PostgreSQL makes about concurrency for various operations? Speaking\n>> in general (i.e. IO can handle it, number of CPU cores and client\n>> threads is optimal), are fully concurrent operations (independant and\n>> non-blocking) possible for:\n>\n> By \"fully concurrent\" do you mean that there is no detectable\n> sub-linear scaling at all? I'm pretty sure that no such guarantees\n> can be made.\n\n> For concurrent bulk loads, there was severe contention on generating\n> WAL between concurrent bulk loaders. That is greatly improved in the\n> upcoming 9.2 release.\n\nI was thinking about major exclusive locks in the code paths which\nwould block multiple clients operating on unrelated data (the same\nquestions go for update operations). For example: if the free space\nmap or whole index trees are exclusively locked, things like that. The\nWAL issue you mention seems exactly like what I was asking about, but\nare there any others?\n",
"msg_date": "Sat, 26 May 2012 01:56:57 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel (concurrent) inserts?"
}
] |
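Jeff Janes's point about unique indexes is easy to reproduce by hand. Below is a sketch, using a hypothetical table and two psql sessions, of the blocking he describes: the second inserter of a duplicate key waits until the first transaction finishes.

    CREATE TABLE uniq_demo (id integer PRIMARY KEY);

    -- session 1
    BEGIN;
    INSERT INTO uniq_demo VALUES (1);   -- holds the key until commit or rollback

    -- session 2
    BEGIN;
    INSERT INTO uniq_demo VALUES (2);   -- different key: proceeds immediately
    INSERT INTO uniq_demo VALUES (1);   -- same key: blocks here

    -- session 1
    COMMIT;   -- session 2 now fails with a unique_violation;
              -- a ROLLBACK in session 1 would have let its insert succeed instead

Inserts on distinct keys do not block one another; as noted above, they mostly contend for shared resources such as WAL generation.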
[
{
"msg_contents": "Hello,\n\nI have a SQL function (which I've pasted below) and while testing its\ncode directly (outside a function), this is the \"normal\", default plan:\n\nhttp://explain.depesz.com/s/vfP (67 ms)\n\nand this is the plain with enable_seqscan turned off:\n\nhttp://explain.depesz.com/s/EFP (27 ms)\n\nDisabling seqscan results in almost 2.5x faster execution.\n\nHowever, when this code is wrapped in a function, the execution time is\ncloser to the second case (which is great, I'm not complaining):\n\nedem=> explain analyze select * from document_content_top_voted(36);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Function Scan on document_content_top_voted (cost=0.25..10.25\nrows=1000 width=188) (actual time=20.644..20.821 rows=167 loops=1)\n Total runtime: 21.236 ms\n(2 rows)\n\nI assume that the difference between the function execution time and the\ndirect plan with seqscan disabled is due to SQL parsing and planning.\n\nSince the plan is compiled-in for stored procedures, is the planner in\nthat case already running under the assumption that seqscans must be\ndisabled (or something to that effect)?\n\nWould tweaking enable_seqscan and other planner functions during the\nCREATE FUNCTION have an effect on the stored plan?\n\nDo the functions need to be re-created when the database is fully\npopulated, to adjust their stored plans with regards to new selectivity\nsituation on the indexes?\n\n----\n\nThe SQL function is:\n\n-- Retrieves document chunks of a specified document which have the most\nvotes\n\nDROP FUNCTION IF EXISTS document_content_top_voted(INTEGER);\nCREATE OR REPLACE FUNCTION document_content_top_voted(document_id INTEGER)\n RETURNS TABLE\n (chunk_id INTEGER, seq INTEGER, content TEXT, ctime INTEGER, log\nTEXT,\n nr_chunk_upvotes INTEGER, nr_chunk_downvotes INTEGER,\nnr_seq_changes INTEGER, nr_seq_comments INTEGER,\n user_login VARCHAR, user_public_name VARCHAR, user_email VARCHAR)\nAS $$\n WITH documents_top_chunks AS (\n SELECT\n (SELECT\n chunk_id\n FROM\n documents_chunks_votes_total\n WHERE\n documents_id=$1 AND\ndocuments_chunks_votes_total.seq=documents_seqs.seq AND votes=\n (SELECT\n max(votes)\n FROM\n documents_chunks_votes_total\n WHERE\n documents_id=$1 AND\ndocuments_chunks_votes_total.seq=documents_seqs.seq)\n ORDER BY\n chunk_id DESC\n LIMIT 1) AS chunk_id, seq AS doc_seq\n FROM\n documents_seqs\n WHERE\n documents_id = $1\n ORDER BY seq\n ) SELECT\n chunk_id, doc_seq, content, documents_chunks.ctime,\ndocuments_chunks.log,\n COALESCE((SELECT SUM(vote) FROM documents_chunks_votes WHERE\ndocuments_chunks_id=chunk_id AND vote=1)::integer, 0) AS nr_chunk_upvotes,\n COALESCE((SELECT SUM(vote) FROM documents_chunks_votes WHERE\ndocuments_chunks_id=chunk_id AND vote=-1)::integer, 0) AS\nnr_chunk_downvotes,\n (SELECT COUNT(*) FROM documents_chunks WHERE documents_id=$1 AND\nseq=doc_seq)::integer AS nr_seq_changes,\n (SELECT COUNT(*) FROM documents_seq_comments WHERE\ndocuments_seq_comments.documents_id=$1 AND seq=doc_seq)::integer AS\nnr_seq_comments,\n users.login, users.public_name, users.email\n FROM\n documents_chunks\n JOIN documents_top_chunks ON documents_chunks.id =\ndocuments_top_chunks.chunk_id\n JOIN users ON users.id=creator_uid\n ORDER BY doc_seq\n$$ LANGUAGE SQL;\n\n(comments on improving the efficiency of the SQL code are also appreciated)\n\n\n",
"msg_date": "Sat, 26 May 2012 23:38:50 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seqscan slowness and stored procedures"
},
{
"msg_contents": "Hello\n\n2012/5/26 Ivan Voras <[email protected]>:\n> Hello,\n>\n> I have a SQL function (which I've pasted below) and while testing its\n> code directly (outside a function), this is the \"normal\", default plan:\n>\n> http://explain.depesz.com/s/vfP (67 ms)\n>\n> and this is the plain with enable_seqscan turned off:\n>\n> http://explain.depesz.com/s/EFP (27 ms)\n>\n> Disabling seqscan results in almost 2.5x faster execution.\n>\n> However, when this code is wrapped in a function, the execution time is\n> closer to the second case (which is great, I'm not complaining):\n>\n\nsee http://archives.postgresql.org/pgsql-general/2009-12/msg01189.php\n\nRegards\n\nPavel\n\n> edem=> explain analyze select * from document_content_top_voted(36);\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Function Scan on document_content_top_voted (cost=0.25..10.25\n> rows=1000 width=188) (actual time=20.644..20.821 rows=167 loops=1)\n> Total runtime: 21.236 ms\n> (2 rows)\n>\n> I assume that the difference between the function execution time and the\n> direct plan with seqscan disabled is due to SQL parsing and planning.\n>\n> Since the plan is compiled-in for stored procedures, is the planner in\n> that case already running under the assumption that seqscans must be\n> disabled (or something to that effect)?\n>\n> Would tweaking enable_seqscan and other planner functions during the\n> CREATE FUNCTION have an effect on the stored plan?\n>\n> Do the functions need to be re-created when the database is fully\n> populated, to adjust their stored plans with regards to new selectivity\n> situation on the indexes?\n>\n> ----\n>\n> The SQL function is:\n>\n> -- Retrieves document chunks of a specified document which have the most\n> votes\n>\n> DROP FUNCTION IF EXISTS document_content_top_voted(INTEGER);\n> CREATE OR REPLACE FUNCTION document_content_top_voted(document_id INTEGER)\n> RETURNS TABLE\n> (chunk_id INTEGER, seq INTEGER, content TEXT, ctime INTEGER, log\n> TEXT,\n> nr_chunk_upvotes INTEGER, nr_chunk_downvotes INTEGER,\n> nr_seq_changes INTEGER, nr_seq_comments INTEGER,\n> user_login VARCHAR, user_public_name VARCHAR, user_email VARCHAR)\n> AS $$\n> WITH documents_top_chunks AS (\n> SELECT\n> (SELECT\n> chunk_id\n> FROM\n> documents_chunks_votes_total\n> WHERE\n> documents_id=$1 AND\n> documents_chunks_votes_total.seq=documents_seqs.seq AND votes=\n> (SELECT\n> max(votes)\n> FROM\n> documents_chunks_votes_total\n> WHERE\n> documents_id=$1 AND\n> documents_chunks_votes_total.seq=documents_seqs.seq)\n> ORDER BY\n> chunk_id DESC\n> LIMIT 1) AS chunk_id, seq AS doc_seq\n> FROM\n> documents_seqs\n> WHERE\n> documents_id = $1\n> ORDER BY seq\n> ) SELECT\n> chunk_id, doc_seq, content, documents_chunks.ctime,\n> documents_chunks.log,\n> COALESCE((SELECT SUM(vote) FROM documents_chunks_votes WHERE\n> documents_chunks_id=chunk_id AND vote=1)::integer, 0) AS nr_chunk_upvotes,\n> COALESCE((SELECT SUM(vote) FROM documents_chunks_votes WHERE\n> documents_chunks_id=chunk_id AND vote=-1)::integer, 0) AS\n> nr_chunk_downvotes,\n> (SELECT COUNT(*) FROM documents_chunks WHERE documents_id=$1 AND\n> seq=doc_seq)::integer AS nr_seq_changes,\n> (SELECT COUNT(*) FROM documents_seq_comments WHERE\n> documents_seq_comments.documents_id=$1 AND seq=doc_seq)::integer AS\n> nr_seq_comments,\n> users.login, users.public_name, users.email\n> FROM\n> documents_chunks\n> JOIN documents_top_chunks ON documents_chunks.id 
=\n> documents_top_chunks.chunk_id\n> JOIN users ON users.id=creator_uid\n> ORDER BY doc_seq\n> $$ LANGUAGE SQL;\n>\n> (comments on improving the efficiency of the SQL code are also appreciated)\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 27 May 2012 05:28:32 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan slowness and stored procedures"
},
{
"msg_contents": "On 27 May 2012 05:28, Pavel Stehule <[email protected]> wrote:\n> Hello\n>\n> 2012/5/26 Ivan Voras <[email protected]>:\n>> Hello,\n>>\n>> I have a SQL function (which I've pasted below) and while testing its\n>> code directly (outside a function), this is the \"normal\", default plan:\n>>\n>> http://explain.depesz.com/s/vfP (67 ms)\n>>\n>> and this is the plain with enable_seqscan turned off:\n>>\n>> http://explain.depesz.com/s/EFP (27 ms)\n>>\n>> Disabling seqscan results in almost 2.5x faster execution.\n>>\n>> However, when this code is wrapped in a function, the execution time is\n>> closer to the second case (which is great, I'm not complaining):\n>>\n>\n> see http://archives.postgresql.org/pgsql-general/2009-12/msg01189.php\n\nHi,\n\nThank you for your answer, but if you read my post, you'll hopefully\nrealize my questions are different from that in the linked post, and\nare not answered by the post.\n",
"msg_date": "Sun, 27 May 2012 17:57:55 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seqscan slowness and stored procedures"
},
{
"msg_contents": "2012/5/27 Ivan Voras <[email protected]>:\n> On 27 May 2012 05:28, Pavel Stehule <[email protected]> wrote:\n>> Hello\n>>\n>> 2012/5/26 Ivan Voras <[email protected]>:\n>>> Hello,\n>>>\n>>> I have a SQL function (which I've pasted below) and while testing its\n>>> code directly (outside a function), this is the \"normal\", default plan:\n>>>\n>>> http://explain.depesz.com/s/vfP (67 ms)\n>>>\n>>> and this is the plain with enable_seqscan turned off:\n>>>\n>>> http://explain.depesz.com/s/EFP (27 ms)\n>>>\n>>> Disabling seqscan results in almost 2.5x faster execution.\n>>>\n>>> However, when this code is wrapped in a function, the execution time is\n>>> closer to the second case (which is great, I'm not complaining):\n>>>\n>>\n>> see http://archives.postgresql.org/pgsql-general/2009-12/msg01189.php\n>\n> Hi,\n>\n> Thank you for your answer, but if you read my post, you'll hopefully\n> realize my questions are different from that in the linked post, and\n> are not answered by the post.\n\nyes, sorry,\n\nPavel\n",
"msg_date": "Sun, 27 May 2012 18:07:29 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan slowness and stored procedures"
},
{
"msg_contents": "Ivan Voras wrote:\n> I have a SQL function (which I've pasted below) and while testing its\n> code directly (outside a function), this is the \"normal\", default\nplan:\n> \n> http://explain.depesz.com/s/vfP (67 ms)\n> \n> and this is the plain with enable_seqscan turned off:\n> \n> http://explain.depesz.com/s/EFP (27 ms)\n> \n> Disabling seqscan results in almost 2.5x faster execution.\n> \n> However, when this code is wrapped in a function, the execution time\nis\n> closer to the second case (which is great, I'm not complaining):\n> \n> edem=> explain analyze select * from document_content_top_voted(36);\n> QUERY PLAN\n>\n------------------------------------------------------------------------\n------------------------------\n> -----------------------------\n> Function Scan on document_content_top_voted (cost=0.25..10.25\n> rows=1000 width=188) (actual time=20.644..20.821 rows=167 loops=1)\n> Total runtime: 21.236 ms\n> (2 rows)\n> \n> I assume that the difference between the function execution time and\nthe\n> direct plan with seqscan disabled is due to SQL parsing and planning.\n\nThat cannot be, because SQL functions do not cache execution plans.\n\nDid you take caching of table data in the buffer cache or the filesystem\ncache into account? Did you run your tests several times in a row and\nwere the actual execution times consistent?\n\n> Since the plan is compiled-in for stored procedures, is the planner in\n> that case already running under the assumption that seqscans must be\n> disabled (or something to that effect)?\n> \n> Would tweaking enable_seqscan and other planner functions during the\n> CREATE FUNCTION have an effect on the stored plan?\n\nNo, but you can use the SET clause of CREATE FUNCTION to change\nenable_seqscan for this function if you know that this is the right\nthing.\nBut be aware that things might be different for other function arguments\nor when the table data change, so this is normally considered a bad\nidea.\n\n> Do the functions need to be re-created when the database is fully\n> populated, to adjust their stored plans with regards to new\nselectivity\n> situation on the indexes?\n\nNo. Even in PL/pgSQL, where plans are cached, this is only for the\nlifetime of the database session. The plan is generated when the\nfunction is called for the first time in a database session.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Fri, 8 Jun 2012 11:58:30 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan slowness and stored procedures"
},
{
"msg_contents": "On 8 June 2012 11:58, Albe Laurenz <[email protected]> wrote:\n\n> Did you take caching of table data in the buffer cache or the filesystem\n> cache into account? Did you run your tests several times in a row and\n> were the actual execution times consistent?\n\nYes, and yes.\n\n>> Would tweaking enable_seqscan and other planner functions during the\n>> CREATE FUNCTION have an effect on the stored plan?\n>\n> No, but you can use the SET clause of CREATE FUNCTION to change\n> enable_seqscan for this function if you know that this is the right\n> thing.\n> But be aware that things might be different for other function arguments\n> or when the table data change, so this is normally considered a bad\n> idea.\n\nOk.\n\n>> Do the functions need to be re-created when the database is fully\n>> populated, to adjust their stored plans with regards to new\n> selectivity\n>> situation on the indexes?\n>\n> No. Even in PL/pgSQL, where plans are cached, this is only for the\n> lifetime of the database session. The plan is generated when the\n> function is called for the first time in a database session.\n\nThanks for clearing this up for me! I thought SQL functions are also\npre-planned and that the plans are static.\n",
"msg_date": "Fri, 8 Jun 2012 13:00:11 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seqscan slowness and stored procedures"
}
] |
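For reference, here is a sketch of the per-function SET clause Albe Laurenz mentions, with a hypothetical function and table rather than the one posted in the thread; the setting applies only while the function runs and is restored on exit.

    CREATE OR REPLACE FUNCTION example_report(integer)
        RETURNS SETOF integer
        LANGUAGE sql
        SET enable_seqscan = off          -- in effect only inside this function
        AS $$ SELECT some_id FROM some_table WHERE group_id = $1 $$;

    -- The same knob can be attached to an existing function:
    ALTER FUNCTION example_report(integer) SET enable_seqscan = off;

As the thread warns, pinning planner settings this way can backfire once the data distribution or the arguments change, so it is usually a last resort.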
[
{
"msg_contents": "Hi,\n\n¿How I can recover a row delete of a table that wasn't vacuummed?\nI have PostgreSQL 9.1 in Windows XP/7.\n\n\nThanks",
"msg_date": "Mon, 28 May 2012 19:24:13 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recover rows deleted"
},
{
"msg_contents": "On Mon, 2012-05-28 at 19:24 +0100, Alejandro Carrillo wrote:\n> Hi,\n> \n> \n> ¿How I can recover a row delete of a table that wasn't vacuummed?\n> I have PostgreSQL 9.1 in Windows XP/7.\n\nThe first thing to do is shut down postgresql and take a full backup of\nthe data directory, including any archived WAL you might have (files in\npg_xlog). Make sure this is done first.\n\nNext, do you have any backups? If you have a base backup from before the\ndelete, and all the WAL files from the time of the base backup until\nnow, then you can try point-in-time recovery to the point right before\nthe data loss:\n\nhttp://www.postgresql.org/docs/9.1/static/continuous-archiving.html\n\nIf not, are we talking about a single row, or many rows? If it's a\nsingle row you might be able to do some manual steps, like examining the\npages to recover the data.\n\nAnother option is to try pg_resetxlog (make sure you have a safe backup\nfirst!):\n\nhttp://www.postgresql.org/docs/9.1/static/app-pgresetxlog.html\n\nAnd try setting the current transaction ID to just before the delete\nran. Then you may be able to use pg_dump or otherwise export the deleted\nrows.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 29 May 2012 13:53:20 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "On Mon, May 28, 2012 at 07:24:13PM +0100, Alejandro Carrillo wrote:\n> ¿How I can recover a row delete of a table that wasn't vacuummed?\n> I have PostgreSQL 9.1 in Windows XP/7.\n\nhttp://www.depesz.com/2012/04/04/lets-talk-dirty/\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n",
"msg_date": "Tue, 29 May 2012 23:33:20 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "Hi friend,\n\nYour function doesn't compile in Windows.\nPlease change it.\n\nThanks\n\n\n\n\n>________________________________\n> De: hubert depesz lubaczewski <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>CC: \"[email protected]\" <[email protected]> \n>Enviado: Martes 29 de Mayo de 2012 16:33\n>Asunto: Re: [PERFORM] Recover rows deleted\n> \n>On Mon, May 28, 2012 at 07:24:13PM +0100, Alejandro Carrillo wrote:\n>> ¿How I can recover a row delete of a table that wasn't vacuummed?\n>> I have PostgreSQL 9.1 in Windows XP/7.\n>\n>http://www.depesz.com/2012/04/04/lets-talk-dirty/\n>\n>Best regards,\n>\n>depesz\n>\n>-- \n>The best thing about modern society is how easy it is to avoid contact with it.\n> http://depesz.com/\n>\n>\n>\n",
"msg_date": "Tue, 29 May 2012 23:16:56 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "Hi,\n\nExcuse me if I was a bit rude, was not my intention. What happens is that I think the code should be someone to do cross-platform. So many could use it.\nDo you know another implementation that be cross-platform?\n\n\nThanks.\n\n\n\n\n>________________________________\n> De: hubert depesz lubaczewski <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>Enviado: Martes 29 de Mayo de 2012 17:21\n>Asunto: Re: [PERFORM] Recover rows deleted\n> \n>On Tue, May 29, 2012 at 11:16:56PM +0100, Alejandro Carrillo wrote:\n>> Hi friend,\n>> \n>> Your function doesn't compile in\n Windows.\n>> Please change it.\n>\n>My function? I just wrote about a module written by someone else - this\n>is clearly stated in the first line of the blogpost.\n>\n>And I doubt he will be interested in changing it so that it will work on\n>windows - we're not using it for anything, sorry.\n>\n>Best regards,\n>\n>depesz\n>\n>-- \n>The best thing about modern society is how easy it is to avoid contact with it.\n> http://depesz.com/\n>\n>\n>\n",
"msg_date": "Tue, 29 May 2012 23:34:26 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "Anybody knows a function that let's recover a record (row) deleted in Windows?\n\nThanks\n\n\n\n\n>________________________________\n> De: Jeff Davis <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>CC: \"[email protected]\" <[email protected]> \n>Enviado: Martes 29 de Mayo de 2012 15:53\n>Asunto: Re: [PERFORM] Recover rows deleted\n> \n>On Mon, 2012-05-28 at 19:24 +0100, Alejandro Carrillo wrote:\n>> Hi,\n>> \n>> \n>> ¿How I can recover a row delete of a table that wasn't vacuummed?\n>> I have PostgreSQL 9.1 in Windows XP/7.\n>\n>The first thing to do is shut down postgresql and take a full backup of\n>the data directory, including any archived WAL you might have (files in\n>pg_xlog). Make sure this is done first.\n>\n>Next, do you have any backups? If you have a base backup from before the\n>delete, and all the WAL files from the time of the base backup until\n>now, then you can try point-in-time recovery to the point right before\n>the data loss:\n>\n>http://www.postgresql.org/docs/9.1/static/continuous-archiving.html\n>\n>If not, are we talking about a single row, or many rows? If it's a\n>single row you might be able to do some manual steps, like examining the\n>pages to recover the data.\n>\n>Another option is to try pg_resetxlog (make sure you have a safe backup\n>first!):\n>\n>http://www.postgresql.org/docs/9.1/static/app-pgresetxlog.html\n>\n>And try setting the current transaction ID to just before the delete\n>ran. Then you may be able to use pg_dump or otherwise export the deleted\n>rows.\n>\n>Regards,\n> Jeff Davis\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n",
"msg_date": "Mon, 4 Jun 2012 17:14:11 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "On 06/04/2012 11:14 AM, Alejandro Carrillo wrote:\n\n> Anybody knows a function that let's recover a record (row) deleted in\n> Windows?\n\nSorry Alejandro, I'm pretty sure no database anywhere has a function \nlike that. If there were, I'd certainly like to see it! Generally you \navoid situations like this by using transactions. If you do accidentally \ndelete a row, that's what backups are for.\n\nAgain, sorry to be the bearer of bad news.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 4 Jun 2012 11:43:16 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "In linux exists (https://github.com/omniti-labs/pgtreats/blob/master/contrib/pg_dirtyread/pg_dirtyread.c)\nBut I can't compile in Windows :(\nAnybody could compile in Windows?\n\n\n\n\n>________________________________\n> De: Shaun Thomas <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>CC: \"[email protected]\" <[email protected]> \n>Enviado: Lunes 4 de junio de 2012 11:43\n>Asunto: Re: [PERFORM] Recover rows deleted\n> \n>On 06/04/2012 11:14 AM, Alejandro Carrillo wrote:\n>\n>> Anybody knows a function that let's recover a record (row) deleted in\n>> Windows?\n>\n>Sorry Alejandro, I'm pretty sure no database anywhere has a function like that. If there were, I'd certainly like to see it! Generally you avoid situations like this by using transactions. If you do accidentally delete a row, that's what backups are for.\n>\n>Again, sorry to be the bearer of bad news.\n>\n>-- Shaun Thomas\n>OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n>312-444-8534\n>[email protected]\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n>\n>-- Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n",
"msg_date": "Mon, 4 Jun 2012 17:47:10 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "On Mon, Jun 4, 2012 at 11:47 AM, Alejandro Carrillo <[email protected]> wrote:\n> In linux exists\n> (https://github.com/omniti-labs/pgtreats/blob/master/contrib/pg_dirtyread/pg_dirtyread.c)\n> But I can't compile in Windows :(\n> Anybody could compile in Windows?\n\n\nThere are no linux specific calls in there that I can see-- it should\nbe a matter of compiling and installing it. Do you have a compiler?\nWhat issues are you having with compiling? It might just be a matter\nof setting up postgres build environment.\n\nmerlin\n",
"msg_date": "Mon, 4 Jun 2012 12:04:08 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "How I can compile in Windows? I tried to compile using Dev-C++ 4.9 and show a warning:\nCompilador: Default compiler\nBuilding Makefile: \"C:\\Documents and Settings\\Administrador\\Escritorio\\pg_dirtyread\\Makefile.win\"\nEjecutando make clean\nrm -f pg_dirtyread.o pg_dirtyread.a\n\ngcc.exe -c pg_dirtyread.c -o pg_dirtyread.o -I\"C:/Dev-Cpp/include\" -I\"C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server\" -I\"C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/libpq\" -I\"C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include\" -I\"C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/port/win32\" -DBUILDING_DLL=1 -DHAVE_LONG_INT_64=1\n\nIn file included from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/c.h:851,\n from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/postgres.h:47,\n from pg_dirtyread.c:34:\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/port.h:191: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/port.h:195: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/port.h:200: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/port.h:204: warning: `gnu_printf' is an unrecognized format function type\n\nIn file included from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/postgres.h:48,\n from pg_dirtyread.c:34:\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:127: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:133: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:141: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:141: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:147: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:153: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:159: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:167: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:167: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:173: warning: `gnu_printf' is an unrecognized format function type\n\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:179: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:206: warning: `gnu_printf' is an unrecognized format function type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:216: warning: `gnu_printf' is an unrecognized format function 
type\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/utils/elog.h:375: warning: `gnu_printf' is an unrecognized format function type\n\nIn file included from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/access/xlog.h:16,\n from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/access/heapam.h:20,\n from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/nodes/execnodes.h:18,\n from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/executor/execdesc.h:18,\n from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/executor/executor.h:17,\n from C:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/funcapi.h:21,\n\n from pg_dirtyread.c:35:\nC:/postgresql-9.1.0-1-windows-binaries listo/pgsql/include/server/lib/stringinfo.h:98: warning: `gnu_printf' is an unrecognized format function type\n\nar r pg_dirtyread.a pg_dirtyread.o \n\nar: creating pg_dirtyread.a\n\nranlib pg_dirtyread.a\n\nEjecución Terminada\n\nI use the sources of binary downloaded of http://www.enterprisedb.com/products-services-training/pgbindownload.\n\nWhat I doing bad?\n\nThanks\n\n\n\n\n\n>________________________________\n> De: Merlin Moncure <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>CC: \"[email protected]\" <[email protected]>; \"[email protected]\" <[email protected]> \n>Enviado: Lunes 4 de junio de 2012 12:04\n>Asunto: Re: [PERFORM] Recover rows deleted\n> \n>On Mon, Jun 4, 2012 at 11:47 AM, Alejandro Carrillo <[email protected]> wrote:\n>> In linux exists\n>> (https://github.com/omniti-labs/pgtreats/blob/master/contrib/pg_dirtyread/pg_dirtyread.c)\n>> But I can't compile in Windows :(\n>> Anybody could compile in Windows?\n>\n>\n>There are no linux specific calls in there that I can see-- it should\n>be a matter of compiling and installing it. Do you have a compiler?\n>What issues are you having with compiling? It might just be a matter\n>of setting up postgres build environment.\n>\n>merlin\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n",
"msg_date": "Mon, 4 Jun 2012 18:46:21 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "On Mon, Jun 4, 2012 at 12:46 PM, Alejandro Carrillo <[email protected]> wrote:\n> How I can compile in Windows? I tried to compile using Dev-C++ 4.9 and show\n\nIt's probably going to take some extra effort to compile backend\nlibraries with that compiler. The two supported compiling\nenvironments on windows are mingw and visual studio. (that said, it's\nprobably doable if you're willing to expend the effort and have some\nexperience porting software).\n\nmerlin\n",
"msg_date": "Mon, 4 Jun 2012 13:23:42 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "ok, How I can compile in Windows using Visual Studio and Mingw?\n\n\n\n>________________________________\n> De: Merlin Moncure <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>CC: \"[email protected]\" <[email protected]>; \"[email protected]\" <[email protected]> \n>Enviado: Lunes 4 de junio de 2012 13:23\n>Asunto: Re: [PERFORM] Recover rows deleted\n> \n>On Mon, Jun 4, 2012 at 12:46 PM, Alejandro Carrillo <[email protected]> wrote:\n>> How I can compile in Windows? I tried to compile using Dev-C++ 4.9 and show\n>\n>It's probably going to take some extra effort to compile backend\n>libraries with that compiler. The two supported compiling\n>environments on windows are mingw and visual studio. (that said, it's\n>probably doable if you're willing to expend the effort and have some\n>experience porting software).\n>\n>merlin\n>\n>\n>\n",
"msg_date": "Mon, 4 Jun 2012 19:37:00 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recover rows deleted"
},
{
"msg_contents": "Alejandro Carrillo <[email protected]> wrote:\n \n> How I can compile in Windows using Visual Studio and Mingw?\n \nhttp://www.postgresql.org/docs/current/interactive/install-windows.html\n \n-Kevin\n",
"msg_date": "Mon, 04 Jun 2012 13:46:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recover rows deleted"
}
] |
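Here is a sketch of the point-in-time recovery Jeff Davis describes, written as a 9.1-era recovery.conf; the archive path and the timestamp are placeholders, and it assumes a base backup plus archived WAL taken before the DELETE already exist.

    # recovery.conf, placed in the restored data directory before starting the server
    restore_command = 'copy "C:\\wal_archive\\%f" "%p"'   # Windows-style example; adjust to the real archive path
    recovery_target_time = '2012-05-28 18:00:00'          # a moment just before the accidental DELETE
    recovery_target_inclusive = false                     # stop just before, not after, that point

Once the server has recovered to that target, the lost rows can be exported with pg_dump or plain SELECTs and copied back into the production database.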
[
{
"msg_contents": "Hello,\n\ni use PostgreSQL 8.4.8 on Centos 5.x and i have some table where i load with pg_bulkload webtraffic logs, every day.\nAfter loading new data, i delete with a query 30-days older logs.\nAutovacuum is on.\n\nIn some systems, i experience a very huge slowdown of this table in queries, expecially update and delete; the database size grows up a lots.\nI tried to rebuild indexes and make a manual vacuum; database size reduces, but slowdown problems remains.\n\nAre there particular things to know after using pg_bulkload?\nIn this case, is preferred to set autovacuum off and launch one time a week, as example, the autovacuum full manually?\n\nThank you!\nFrancesco",
"msg_date": "Tue, 29 May 2012 16:54:18 +0200",
"msg_from": "Job <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strong slowdown on huge tables"
},
{
"msg_contents": "Job <[email protected]> writes:\n> i use PostgreSQL 8.4.8 on Centos 5.x and i have some table where i load with pg_bulkload webtraffic logs, every day.\n> After loading new data, i delete with a query 30-days older logs.\n\nSince the deletion pattern is so predictable, you should consider\nsetting this table up as a partitioned table, so that you can just drop\nthe oldest partition instead of having to rely on DELETE+VACUUM to\nreclaim space. See\nhttp://www.postgresql.org/docs/8.4/static/ddl-partitioning.html\n\nAlternatively, look into whether including a manual VACUUM in your daily\nupdate script helps any.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 May 2012 13:42:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strong slowdown on huge tables "
}
] |
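Below is a sketch of the partitioning Tom Lane suggests, using 8.4-style inheritance with hypothetical table and column names (the real log table is whatever pg_bulkload loads into): one child per day, so expiring old traffic becomes a cheap DROP TABLE instead of DELETE plus VACUUM.

    CREATE TABLE weblog (
        logged_at timestamp NOT NULL,
        url       text
    );

    -- One child per day, constrained to that day's range:
    CREATE TABLE weblog_2012_05_29 (
        CHECK (logged_at >= DATE '2012-05-29' AND logged_at < DATE '2012-05-30')
    ) INHERITS (weblog);

    -- Each day's pg_bulkload run targets that day's child table; expiring the
    -- oldest day (a child created the same way a month earlier) is then just:
    DROP TABLE weblog_2012_04_29;

With constraint_exclusion left at its 8.4 default of 'partition', queries that filter on logged_at skip the children whose CHECK constraints rule them out.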
[
{
"msg_contents": "hi all - i'm having a bit of trouble with some queries that are\nrunning painfully slowly after migrating my database from one machine\nusing PostgreSQL 8.2 to another machine with PostgreSQL 8.4.\nas far as i can tell, the two *servers* (not the physical machines)\nare set up pretty close to identically, and as far as query planning\nis concerned, the only setting that seems to be different is\n'default_statistics_target', which is 10 on the 8.2 sever and 100 on\nthe 8.4 server (the defaults)... and presumably this should be giving\nthe 8.4 server more data to work with for better query plans (AFAIK).\n(other settings, e.g. cost estimates for page/row/etc access are\nidentical between the servers.)\ni VACUUMed and ANALYZEd on both databases (and all tables) prior to\nrunning any of the examples below.\n\ni've narrowed some of the queries down to a few simpler test cases and\nfound one example of a huge difference.\nhere's a basic query reproducing the results:\n\nexplain analyze select lt.l_id from lt where lt.t_name in (select\nt_name from lt x where x.l_id = 91032370) group by lt.l_id;\n\n[YES: i know i can use a join here. i extracted this from a much\nlarger query that originally had 7 joins, and to help the query\nplanner out a bit i managed to isolate one nested query that makes a\nlarge performance-increasing difference that the query planner was\nignoring... and for the purposes of exploring this behavior, though,\nthis serves as a nice example as it seems to be enforcing a nested\nquery.]\n\nhere's the analysis on the 8.2 machine:\n\nHashAggregate (cost=23240.77..23297.35 rows=5658 width=4) (actual\ntime=5515.065..5600.372 rows=279785 loops=1)\n -> Nested Loop (cost=3461.24..23226.63 rows=5658 width=4) (actual\ntime=134.604..5276.014 rows=304248 loops=1)\n -> HashAggregate (cost=3288.18..3288.19 rows=1 width=13)\n(actual time=0.097..0.120 rows=11 loops=1)\n -> Index Scan using listing__tag___listing_id on\nlisting__tag x (cost=0.00..3286.06 rows=851 width=13) (actual\ntime=0.069..0.084 rows=11 loops=1)\n Index Cond: (listing_id = 91032370)\n -> Bitmap Heap Scan on listing__tag lt\n(cost=173.06..19867.70 rows=5659 width=17) (actual\ntime=115.275..474.135 rows=27659 loops=11)\n Recheck Cond: (lt.tag_name = x.tag_name)\n -> Bitmap Index Scan on listing__tag___tag_name\n(cost=0.00..171.64 rows=5659 width=0) (actual time=113.595..113.595\nrows=27659 loops=11)\n Index Cond: (lt.tag_name = x.tag_name)\n Total runtime: 5615.036 ms\n\nsame query on the 8.4 machine, trial 1 (almost exactly the same\ndata... 
just (very) slightly fewer records):\n\nGroup (cost=782153.70..807638.31 rows=516717 width=4) (actual\ntime=184264.479..184434.087 rows=275876 loops=1)\n -> Sort (cost=782153.70..794896.00 rows=5096921 width=4) (actual\ntime=184264.476..184353.314 rows=299992 loops=1)\n Sort Key: lt.listing_id\n Sort Method: external merge Disk: 4096kB\n -> Nested Loop (cost=306.17..5271.26 rows=5096921 width=4)\n(actual time=126.267..183408.035 rows=299992 loops=1)\n -> HashAggregate (cost=270.42..270.43 rows=1\nwidth=10) (actual time=57.744..57.771 rows=11 loops=1)\n -> Index Scan using listing__tag___listing_id on\nlisting__tag x (cost=0.00..270.25 rows=68 width=10) (actual\ntime=57.728..57.731 rows=11 loops=1)\n Index Cond: (listing_id = 91032370)\n -> Bitmap Heap Scan on listing__tag lt\n(cost=35.75..4983.92 rows=1353 width=14) (actual\ntime=59.723..16652.856 rows=27272 loops=11)\n Recheck Cond: (lt.tag_name = x.tag_name)\n -> Bitmap Index Scan on listing__tag___tag_name\n(cost=0.00..35.41 rows=1353 width=0) (actual time=58.036..58.036\nrows=27272 loops=11)\n Index Cond: (lt.tag_name = x.tag_name)\n Total runtime: 184455.567 ms\n\n3 seconds vs 3 minutes!\nso i obviously first noticed that the nested loop was taking up all\nthe time... though that part of the query is identical between the two\nservers.\nanother noticeable difference is that the estimates on number of rows\nare better in the 8.4 query (estimated around 500,000 vs 5,000, and\nactual number of rows was around 300,000).\nthe only difference in query plans is the choice to Group->Sort vs\nHashAggregate, but both of those operate on the Nested Loop results.\ndespite seeing all the time being spent on the Nested Loop, i also\nnoticed that it was sorting on disk... so i upped the work_mem from\nthe 1 MB default to 8 MB.\ni then re-ran the 8.4 query, and got these times (for the same query):\n\nGroup (cost=642783.70..668268.31 rows=516717 width=4) (actual\ntime=1946.838..2102.696 rows=275876 loops=1)\n -> Sort (cost=642783.70..655526.00 rows=5096921 width=4) (actual\ntime=1946.835..2023.099 rows=299992 loops=1)\n Sort Key: lt.listing_id\n Sort Method: external merge Disk: 4096kB\n -> Nested Loop (cost=306.17..5271.26 rows=5096921 width=4)\n(actual time=4.336..1518.962 rows=299992 loops=1)\n -> HashAggregate (cost=270.42..270.43 rows=1\nwidth=10) (actual time=0.069..0.089 rows=11 loops=1)\n -> Index Scan using listing__tag___listing_id on\nlisting__tag x (cost=0.00..270.25 rows=68 width=10) (actual\ntime=0.052..0.058 rows=11 loops=1)\n Index Cond: (listing_id = 91032370)\n -> Bitmap Heap Scan on listing__tag lt\n(cost=35.75..4983.92 rows=1353 width=14) (actual time=11.585..130.841\nrows=27272 loops=11)\n Recheck Cond: (lt.tag_name = x.tag_name)\n -> Bitmap Index Scan on listing__tag___tag_name\n(cost=0.00..35.41 rows=1353 width=0) (actual time=6.588..6.588\nrows=27272 loops=11)\n Index Cond: (lt.tag_name = x.tag_name)\n Total runtime: 2123.992 ms\n\n2 seconds!\ngreat! kind of.\nthis really just raised a whole series of questions now that are going\nto bother me for my future query-writing purposes.\n\n1) obvious basic question: why, after raising work_mem to 8MB is the\nsort still happening on disk? or, is it happening on disk (despite\nbeing reported as so).\n\n2) the Nested Loop time dropped dramatically (basically all the time\nsavings)... but i had no warning prior to this that the work_mem limit\nwas being hit inside that Nested Loop.\nis there any way to see which steps are requiring additional disk IO\nfor processing... 
similar to the reported \"external merge\" for the\nSort Method?\n\n3) here's the biggest problem/issue in my brain: work_mem on the 8.2\nserver was also set to the 1 MB default! but ran quite speedily!\nthe full migration will take a while, so there will still be query\ndevelopment/optimization on one system, and i'd love for those many\nhours testing to be worth something when ported over to the other\nsystem.\nin this particular example, the Nested Loop seems to fit in the 1 MB\nwork_mem space on the 8.2 server, but not the 8.4? does this seem\nright to anybody?\n\n4) i thought, maybe i'm reading the ANALYZE report incorrectly, and\nthe true bottleneck in the first 8.4 query was indeed the on-disk\nsorting step (which would make sense). and upping the work_mem limit\nwould alleviate that (which would also make sense).\nbut then why is the second 8.4 query still reporting a sort on disk,\ndespite the giant speed boost?\n\n5) and finally, if indeed the sort is happening on disk in the first\n8.4 query, will the query planner consider work_mem as part of the\nplanning process?\ni.e. will a slightly-faster (in terms of data page reads) be\ndowngraded if it looks like it'll require a sort that exceeds work_mem\nwhen a slightly slower plan that's expected to fit in work_mem is\navailable?\n\nany insights here?\n\ncheers!\n\n-murat\n",
"msg_date": "Wed, 30 May 2012 13:57:43 -0400",
"msg_from": "Murat Tasan <[email protected]>",
"msg_from_op": true,
"msg_subject": "does the query planner consider work_mem?"
},
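A hedged sketch for question 2, reusing the listing__tag query from the plans above: log_temp_files (available since 8.3, and a superuser-only setting) logs every temporary file a statement creates, which exposes spills that plain EXPLAIN ANALYZE on 8.4 does not report; EXPLAIN (ANALYZE, BUFFERS) would show temp usage per node, but it only exists from 9.0 onwards, so it is not an option on the 8.4 server discussed here.

-- minimal sketch, assuming superuser access for log_temp_files
SET log_temp_files = 0;        -- log every temp file, with its size
SET work_mem = '8MB';          -- session-local; does not change postgresql.conf
EXPLAIN ANALYZE
SELECT lt.listing_id
FROM listing__tag lt, listing__tag x
WHERE lt.tag_name = x.tag_name
  AND x.listing_id = 91032370
GROUP BY lt.listing_id;
-- any "temporary file" lines in the server log then show how much this
-- statement still spills to disk despite the larger work_mem.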
{
"msg_contents": "On Wed, May 30, 2012 at 8:57 PM, Murat Tasan <[email protected]> wrote:\n> any insights here?\n\nHave you tried running the slow option multiple times? According to\nthe explain output all of the time was accounted to the bitmap heap\nscan. For the second explain plan the same node was fast. It looks to\nme as the first explain on 8.4 was slow because the data was still on\ndisk. Raising work mem doubled the speed of the sort from 800ms to\n400ms.\n\nRegards,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Wed, 30 May 2012 21:25:53 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does the query planner consider work_mem?"
},
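A minimal way to test the caching hypothesis above, using the same statement and names already shown in the plans: run it twice in the same session and compare.

EXPLAIN ANALYZE
SELECT lt.listing_id
FROM listing__tag lt, listing__tag x
WHERE lt.tag_name = x.tag_name AND x.listing_id = 91032370
GROUP BY lt.listing_id;
-- then execute exactly the same EXPLAIN ANALYZE a second time: if the second
-- run is fast without any further work_mem change, most of the first run's
-- time went to reading cold table/index pages (see the Bitmap Heap Scan node's
-- "actual time"), not to the sort spill.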
{
"msg_contents": "Ants -- you're on the right track: i tried your suggestion and found\nthat at times during subsequent executions the performance will drop\ndown to about 6 seconds.\nthough periodically it jumps back up to about 3 minutes, and there's\nno other DB server usage during these times (i.e. i'm the only one\nconnected).\ni should note, however, that the 8.2 version has not once been slow\nwith this query.\n\nso it may be a cache issue, though i have other queries, also slow in\nthe 8.4 version, and continue to be slow, no matter how many times i\nre-run them.\nin most cases (so far) i'm able to re-write them to be faster in the\n8.4 version, but it's a different formulation than the 8.2 version.\n(i.e. if i take the 8.2 version and run it on 8.4 it's slow and\nvice-versa).\nthis means i need to maintain/test two versions of each query during\nthe migration, which is a nightmare for me.\n(BTW -- forgot to mention this, but the listing__tag table in the\nexamples has ~35 million rows, and there's an index on each column.)\n\nas an example: i re-wrote the query to use the full join version, and\non the 8.4 version (after a fresh restart of the server) the plan was\nthe same as before:\n\nexplain analyze select lt.listing_id from listing__tag lt,\nlisting__tag x where lt.tag_name = x.tag_name and x.listing_id =\n91032370 group by lt.listing_id;\n\nGroup (cost=485411.21..490831.04 rows=488868 width=4) (actual\ntime=5474.662..5636.718 rows=272166 loops=1)\n -> Sort (cost=485411.21..488121.13 rows=1083967 width=4) (actual\ntime=5474.658..5560.040 rows=295990 loops=1)\n Sort Key: lt.listing_id\n Sort Method: external merge Disk: 4048kB\n -> Nested Loop (cost=35.44..347109.96 rows=1083967 width=4)\n(actual time=3.908..5090.687 rows=295990 loops=1)\n -> Index Scan using listing__tag___listing_id on\nlisting__tag x (cost=0.00..283.44 rows=71 width=10) (actu\nal time=0.050..0.086 rows=11 loops=1)\n Index Cond: (listing_id = 91032370)\n -> Bitmap Heap Scan on listing__tag lt\n(cost=35.44..4868.36 rows=1322 width=14) (actual time=8.664..456.08\n7 rows=26908 loops=11)\n Recheck Cond: (lt.tag_name = x.tag_name)\n -> Bitmap Index Scan on listing__tag___tag_name\n(cost=0.00..35.11 rows=1322 width=0) (actual time=7.\n065..7.065 rows=26908 loops=11)\n Index Cond: (lt.tag_name = x.tag_name)\n Total runtime: 5656.900 ms\n\nthis top-level join query on the 8.2 machine, despite there being only\na single join, performs abysmally (which is why i had to coerce the\n8.2 query planner to do the correct nesting in the first place):\n\nGroup (cost=4172480.61..4232829.15 rows=37744 width=4)\n -> Sort (cost=4172480.61..4202654.88 rows=12069709 width=4)\n Sort Key: lt.listing_id\n -> Hash Join (cost=1312642.10..1927697.87 rows=12069709 width=4)\n Hash Cond: (x.tag_name = lt.tag_name)\n -> Index Scan using listing__tag___listing_id on\nlisting__tag x (cost=0.00..3682.79 rows=951 width=13)\n Index Cond: (listing_id = 91032370)\n -> Hash (cost=613609.60..613609.60 rows=36151960 width=17)\n -> Seq Scan on listing__tag lt\n(cost=0.00..613609.60 rows=36151960 width=17)\n\n(only EXPLAIN here on this query as i stopped the first attempt to\nEXPLAIN ANALYZE after about 15 minutes :-/ )\n\ncheers,\n\n-m\n\np.s. on the 8.4 version EXPLAIN ANALYZE *still* tells me that an\nexternal merge on disk is happening, despite my setting of work_mem to\na full 16 MB this time.\ndoes anyone know how to resolve this? 
or should i even worry about it?\n\n\nOn Wed, May 30, 2012 at 2:25 PM, Ants Aasma <[email protected]> wrote:\n> On Wed, May 30, 2012 at 8:57 PM, Murat Tasan <[email protected]> wrote:\n>> any insights here?\n>\n> Have you tried running the slow option multiple times? According to\n> the explain output all of the time was accounted to the bitmap heap\n> scan. For the second explain plan the same node was fast. It looks to\n> me as the first explain on 8.4 was slow because the data was still on\n> disk. Raising work mem doubled the speed of the sort from 800ms to\n> 400ms.\n>\n> Regards,\n> Ants Aasma\n> --\n> Cybertec Schönig & Schönig GmbH\n> Gröhrmühlgasse 26\n> A-2700 Wiener Neustadt\n> Web: http://www.postgresql-support.de\n",
"msg_date": "Wed, 30 May 2012 16:30:33 -0400",
"msg_from": "Murat Tasan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: does the query planner consider work_mem?"
},
{
"msg_contents": "On 31/05/12 05:57, Murat Tasan wrote:\n> hi all - i'm having a bit of trouble with some queries that are\n> running painfully slowly after migrating my database from one machine\n> using PostgreSQL 8.2 to another machine with PostgreSQL 8.4.\n> as far as i can tell, the two *servers* (not the physical machines)\n> are set up pretty close to identically, and as far as query planning\n> is concerned, the only setting that seems to be different is\n> 'default_statistics_target', which is 10 on the 8.2 sever and 100 on\n> the 8.4 server (the defaults)... and presumably this should be giving\n> the 8.4 server more data to work with for better query plans (AFAIK).\n> (other settings, e.g. cost estimates for page/row/etc access are\n> identical between the servers.)\n\nIt would probably be useful know what release of 8.4 you have - i.e \n8.4.x. There were some significant planner changes at 8.4.9 or thereabouts.\n\nI think it would also be useful to know all of your non default \nparameters for 8.4 (SELECT name,setting FROM pg_settings WHERE source != \n'default').\n\n> 3) here's the biggest problem/issue in my brain: work_mem on the 8.2\n> server was also set to the 1 MB default! but ran quite speedily!\n> the full migration will take a while, so there will still be query\n> development/optimization on one system, and i'd love for those many\n> hours testing to be worth something when ported over to the other\n> system.\n> in this particular example, the Nested Loop seems to fit in the 1 MB\n> work_mem space on the 8.2 server, but not the 8.4? does this seem\n> right to anybody?\n>\n>\n>\n\nWell 8.4 has 100 stats buckets to get distribution info, so typically \nhas a better idea about things, however sometimes more info is just \nenough to tip the planner into believing that it needs more space to do \nsomething. The other possibility is that the 8.2 box is 32-bit and the \n8.4 one is 64-bit and really does need more memory to hold the loop data \nstructures.\n\nRegards\n\nMark\n",
"msg_date": "Fri, 01 Jun 2012 10:59:17 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does the query planner consider work_mem?"
},
{
"msg_contents": "Murat Tasan <[email protected]> writes:\n> p.s. on the 8.4 version EXPLAIN ANALYZE *still* tells me that an\n> external merge on disk is happening, despite my setting of work_mem to\n> a full 16 MB this time.\n\nWhen we have to push sort data to disk, it's written in a form that's\nnoticeably more compact than what's used in-memory. So it can be\nexpected that an in-memory sort will need significantly more memory\nthan what an external merge reports using on-disk. I'd try cranking\nup work_mem to at least 10X the on-disk size if you want to be sure\nof seeing an in-memory sort.\n\nHowever, so far as I can see the sort is taking much less time than\nthe scan and join steps anyway, so you're not likely to get much\nimprovement this way. The unstable performance is a result of caching\neffects for the table data, not of how the sort is done.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Jun 2012 11:15:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does the query planner consider work_mem? "
}
] |
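A short sketch of the rule of thumb from the last reply above, with an illustrative (not recommended) work_mem value: the 4096kB reported for the external merge is the compact on-disk format, so budget roughly ten times that before expecting an in-memory sort.

BEGIN;
SET LOCAL work_mem = '48MB';   -- ~10x the reported 4096kB; scoped to this transaction only
EXPLAIN ANALYZE
SELECT lt.listing_id
FROM listing__tag lt, listing__tag x
WHERE lt.tag_name = x.tag_name AND x.listing_id = 91032370
GROUP BY lt.listing_id;
-- success looks like "Sort Method: quicksort  Memory: ..." in place of
-- "Sort Method: external merge Disk: 4096kB"; as noted above, the sort is a
-- small share of the total runtime, so this mainly removes the confusing line.
COMMIT;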
[
{
"msg_contents": "We are having trouble with a particular query being slow in a strange manner.\n\nThe query is a join over two large tables that are suitably indexed.\n\nselect CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, \nCI.NEWSTRING\n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = CI.GROUPID where CG.ISSUEID=? order by \nCG.CREATED asc, CI.ID asc\n\nFor some tasks we run this particular query a very large number of times and it has a significant performance impact \nwhen it runs slowly.\n\nIf we run ANALYSE over the CHANGEITEM table then the performance picks up by a factor of 5 or more. The problem is that \na day later the performance will have dropped back to its previously slow state.\n\nThe reason this is so hard to understand is that the activity on this table is very low, with no updates and only a \nrelatively small number of inserts each day, < 0.1% of the table size.\n\nExplain output:\nSort (cost=86.90..86.93 rows=11 width=118) (actual time=0.086..0.087 rows=14 loops=1)\n Sort Key: cg.created, ci.id\n Sort Method: quicksort Memory: 26kB\n -> Nested Loop (cost=0.00..86.71 rows=11 width=118) (actual time=0.022..0.061 rows=14 loops=1)\n -> Index Scan using chggroup_issue on changegroup cg (cost=0.00..17.91 rows=8 width=33) (actual \ntime=0.012..0.015 rows=7 loops=1)\n Index Cond: (issueid = 81001::numeric)\n -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 rows=2 width=91) (actual \ntime=0.005..0.005 rows=2 loops=7)\n Index Cond: (groupid = cg.id)\nTotal runtime: 0.116 ms\n\nThe explain output always seems the same even when the performance is poor, but I can't be sure of that.\n\nOverall it seems like PostgreSQL just forgets about the statistics it has gathered after a short while.\n\nSchema details:\nCREATE TABLE changegroup\n(\n id numeric(18,0) NOT NULL,\n issueid numeric(18,0),\n author character varying(255),\n created timestamp with time zone,\n CONSTRAINT pk_changegroup PRIMARY KEY (id )\n)\nWITH (\n OIDS=FALSE\n);\nCREATE INDEX chggroup_issue\n ON changegroup\n USING btree\n (issueid );\n\nCREATE TABLE changeitem\n(\n id numeric(18,0) NOT NULL,\n groupid numeric(18,0),\n fieldtype character varying(255),\n field character varying(255),\n oldvalue text,\n oldstring text,\n newvalue text,\n newstring text,\n CONSTRAINT pk_changeitem PRIMARY KEY (id )\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX chgitem_chggrp\n ON changeitem\n USING btree\n (groupid );\n\nCREATE INDEX chgitem_field\n ON changeitem\n USING btree\n (field COLLATE pg_catalog.\"default\" );\n\nTable sizes\nchangegroup : 2,000,000 rows\nchangeitem : 2,500,000 rows\n\nThe changegroup table has on average about 4 rows per issueid value, which is the query parameter.\n\nWe run autovacuum and autoanalyse, but as the activity in the table is low these are rarely if ever invoked on these tables.\n\nEnvironment.\nTesting using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, although this is a problem across a variety of postgres \nversions.\n\n\n\n\n\n\n\nWe are having trouble with a particular\n query being slow in a strange manner.\n\n The query is a join over two large tables that are suitably\n indexed.\n\n select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID,\n CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE,\n CI.NEWSTRING \n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on\n CG.ID = CI.GROUPID where CG.ISSUEID=? 
order by CG.CREATED asc,\n CI.ID asc\n\n For some tasks we run this particular query a very large number of\n times and it has a significant performance impact when it runs\n slowly.\n\n If we run ANALYSE over the CHANGEITEM\ntable then the performance picks up by a\n factor of 5 or more. The problem is that a day later the\n performance will have dropped back to its previously slow state.\n\n The reason this is so hard to understand is that the activity on\n this table is very low, with no updates and only a relatively\n small number of inserts each day, < 0.1% of the table size.\n\n Explain output:\n Sort (cost=86.90..86.93 rows=11 width=118) (actual\n time=0.086..0.087 rows=14 loops=1)\n Sort Key: cg.created, ci.id\n Sort Method: quicksort Memory: 26kB\n -> Nested Loop (cost=0.00..86.71 rows=11 width=118) (actual\n time=0.022..0.061 rows=14 loops=1)\n -> Index Scan using chggroup_issue on changegroup cg \n (cost=0.00..17.91 rows=8 width=33) (actual time=0.012..0.015\n rows=7 loops=1)\n Index Cond: (issueid = 81001::numeric)\n -> Index Scan using chgitem_chggrp on changeitem ci \n (cost=0.00..8.58 rows=2 width=91) (actual time=0.005..0.005 rows=2\n loops=7)\n Index Cond: (groupid = cg.id)\n Total runtime: 0.116 ms\n\n The explain output always seems the same even when the performance\n is poor, but I can't be sure of that.\n\n Overall it seems like PostgreSQL just forgets about the statistics\n it has gathered after a short while.\n\n Schema details:\n CREATE TABLE changegroup\n (\n id numeric(18,0) NOT NULL,\n issueid numeric(18,0),\n author character varying(255),\n created timestamp with time zone,\n CONSTRAINT pk_changegroup PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n CREATE INDEX chggroup_issue\n ON changegroup\n USING btree\n (issueid );\n\n CREATE TABLE changeitem\n (\n id numeric(18,0) NOT NULL,\n groupid numeric(18,0),\n fieldtype character varying(255),\n field character varying(255),\n oldvalue text,\n oldstring text,\n newvalue text,\n newstring text,\n CONSTRAINT pk_changeitem PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n\n CREATE INDEX chgitem_chggrp\n ON changeitem\n USING btree\n (groupid );\n\n CREATE INDEX chgitem_field\n ON changeitem\n USING btree\n (field COLLATE pg_catalog.\"default\" );\n\n Table sizes\nchangegroup : 2,000,000\n rows\n changeitem : 2,500,000 rows\n\n The changegroup table has on average about 4 rows per issueid value,\n which is the query parameter.\n\n We run autovacuum and autoanalyse, but as the activity in the table\n is low these are rarely if ever invoked on these tables.\n\n Environment.\n Testing using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, although\n this is a problem across a variety of postgres versions.",
"msg_date": "Fri, 01 Jun 2012 08:29:47 +1000",
"msg_from": "Trevor Campbell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trouble with plan statistics for behaviour for query."
},
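A hedged diagnostic sketch for the "fast after ANALYZE, slow a day later" symptom, before looking at the driver side: check when the planner statistics were last refreshed and how much churn the two tables have actually seen. The catalog columns used here exist in 8.3 and later.

SELECT relname, n_live_tup, n_dead_tup,
       last_analyze, last_autoanalyze, last_autovacuum
FROM pg_stat_user_tables
WHERE relname IN ('changegroup', 'changeitem');
-- if last_analyze/last_autoanalyze barely move while the performance swings,
-- the statistics are probably not what is changing, which points away from the
-- tables and toward how the query is being planned (see the rest of the thread).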
{
"msg_contents": "On Thu, May 31, 2012 at 3:29 PM, Trevor Campbell <[email protected]>wrote:\n\n> We are having trouble with a particular query being slow in a strange\n> manner.\n>\n> The query is a join over two large tables that are suitably indexed.\n>\n> select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE,\n> CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID =\n> CI.GROUPID where CG.ISSUEID=? order by CG.CREATED asc, CI.ID asc\n>\n\nThis has an unbound variable '?' in it.\n\n\n> For some tasks we run this particular query a very large number of times\n> and it has a significant performance impact when it runs slowly.\n>\n> If we run ANALYSE over the CHANGEITEM table then the performance picks up\n> by a factor of 5 or more. The problem is that a day later the performance\n> will have dropped back to its previously slow state.\n>\n> The reason this is so hard to understand is that the activity on this\n> table is very low, with no updates and only a relatively small number of\n> inserts each day, < 0.1% of the table size.\n>\n> Explain output:\n> Sort (cost=86.90..86.93 rows=11 width=118) (actual time=0.086..0.087\n> rows=14 loops=1)\n> Sort Key: cg.created, ci.id\n> Sort Method: quicksort Memory: 26kB\n> -> Nested Loop (cost=0.00..86.71 rows=11 width=118) (actual\n> time=0.022..0.061 rows=14 loops=1)\n> -> Index Scan using chggroup_issue on changegroup cg\n> (cost=0.00..17.91 rows=8 width=33) (actual time=0.012..0.015 rows=7 loops=1)\n> Index Cond: (issueid = 81001::numeric)\n> -> Index Scan using chgitem_chggrp on changeitem ci\n> (cost=0.00..8.58 rows=2 width=91) (actual time=0.005..0.005 rows=2 loops=7)\n> Index Cond: (groupid = cg.id)\n> Total runtime: 0.116 ms\n>\n\nWhat's the exact SQL you used to get this ... did you use a specific\nCG.ISSUEID to run your test? 
If that's the case, this EXPLAIN ANALYZE\nwon't be the same as the one generated for your actual application.\n\nCraig\n\n\n\n>\n> The explain output always seems the same even when the performance is\n> poor, but I can't be sure of that.\n>\n> Overall it seems like PostgreSQL just forgets about the statistics it has\n> gathered after a short while.\n>\n> Schema details:\n> CREATE TABLE changegroup\n> (\n> id numeric(18,0) NOT NULL,\n> issueid numeric(18,0),\n> author character varying(255),\n> created timestamp with time zone,\n> CONSTRAINT pk_changegroup PRIMARY KEY (id )\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> CREATE INDEX chggroup_issue\n> ON changegroup\n> USING btree\n> (issueid );\n>\n> CREATE TABLE changeitem\n> (\n> id numeric(18,0) NOT NULL,\n> groupid numeric(18,0),\n> fieldtype character varying(255),\n> field character varying(255),\n> oldvalue text,\n> oldstring text,\n> newvalue text,\n> newstring text,\n> CONSTRAINT pk_changeitem PRIMARY KEY (id )\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX chgitem_chggrp\n> ON changeitem\n> USING btree\n> (groupid );\n>\n> CREATE INDEX chgitem_field\n> ON changeitem\n> USING btree\n> (field COLLATE pg_catalog.\"default\" );\n>\n> Table sizes\n> changegroup : 2,000,000 rows\n> changeitem : 2,500,000 rows\n>\n> The changegroup table has on average about 4 rows per issueid value, which\n> is the query parameter.\n>\n> We run autovacuum and autoanalyse, but as the activity in the table is low\n> these are rarely if ever invoked on these tables.\n>\n> Environment.\n> Testing using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, although this\n> is a problem across a variety of postgres versions.\n>\n>\n\nOn Thu, May 31, 2012 at 3:29 PM, Trevor Campbell <[email protected]> wrote:\n\nWe are having trouble with a particular\n query being slow in a strange manner.\n\n The query is a join over two large tables that are suitably\n indexed.\n\n select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID,\n CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE,\n CI.NEWSTRING \n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on\n CG.ID = CI.GROUPID where CG.ISSUEID=? order by CG.CREATED asc,\n CI.ID ascThis has an unbound variable '?' in it.\n\n\n For some tasks we run this particular query a very large number of\n times and it has a significant performance impact when it runs\n slowly.\n\n If we run ANALYSE over the CHANGEITEM\ntable then the performance picks up by a\n factor of 5 or more. The problem is that a day later the\n performance will have dropped back to its previously slow state.\n\n The reason this is so hard to understand is that the activity on\n this table is very low, with no updates and only a relatively\n small number of inserts each day, < 0.1% of the table size.\n\n Explain output:\n Sort (cost=86.90..86.93 rows=11 width=118) (actual\n time=0.086..0.087 rows=14 loops=1)\n Sort Key: cg.created, ci.id\n Sort Method: quicksort Memory: 26kB\n -> Nested Loop (cost=0.00..86.71 rows=11 width=118) (actual\n time=0.022..0.061 rows=14 loops=1)\n -> Index Scan using chggroup_issue on changegroup cg \n (cost=0.00..17.91 rows=8 width=33) (actual time=0.012..0.015\n rows=7 loops=1)\n Index Cond: (issueid = 81001::numeric)\n -> Index Scan using chgitem_chggrp on changeitem ci \n (cost=0.00..8.58 rows=2 width=91) (actual time=0.005..0.005 rows=2\n loops=7)\n Index Cond: (groupid = cg.id)\n Total runtime: 0.116 msWhat's the exact SQL you used to get this ... did you use a specific CG.ISSUEID to run your test? 
If that's the case, this EXPLAIN ANALYZE won't be the same as the one generated for your actual application.\nCraig \n\n The explain output always seems the same even when the performance\n is poor, but I can't be sure of that.\n\n Overall it seems like PostgreSQL just forgets about the statistics\n it has gathered after a short while.\n\n Schema details:\n CREATE TABLE changegroup\n (\n id numeric(18,0) NOT NULL,\n issueid numeric(18,0),\n author character varying(255),\n created timestamp with time zone,\n CONSTRAINT pk_changegroup PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n CREATE INDEX chggroup_issue\n ON changegroup\n USING btree\n (issueid );\n\n CREATE TABLE changeitem\n (\n id numeric(18,0) NOT NULL,\n groupid numeric(18,0),\n fieldtype character varying(255),\n field character varying(255),\n oldvalue text,\n oldstring text,\n newvalue text,\n newstring text,\n CONSTRAINT pk_changeitem PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n\n CREATE INDEX chgitem_chggrp\n ON changeitem\n USING btree\n (groupid );\n\n CREATE INDEX chgitem_field\n ON changeitem\n USING btree\n (field COLLATE pg_catalog.\"default\" );\n\n Table sizes\nchangegroup : 2,000,000\n rows\n changeitem : 2,500,000 rows\n\n The changegroup table has on average about 4 rows per issueid value,\n which is the query parameter.\n\n We run autovacuum and autoanalyse, but as the activity in the table\n is low these are rarely if ever invoked on these tables.\n\n Environment.\n Testing using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, although\n this is a problem across a variety of postgres versions.",
"msg_date": "Thu, 31 May 2012 15:55:21 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
{
"msg_contents": "On 01/06/12 08:55, Craig James wrote:\n>\n>\n> On Thu, May 31, 2012 at 3:29 PM, Trevor Campbell <[email protected] <mailto:[email protected]>> wrote:\n>\n> We are having trouble with a particular query being slow in a strange manner.\n>\n> The query is a join over two large tables that are suitably indexed.\n>\n> select CG.ID <http://CG.ID>, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID <http://CI.ID>, CI.FIELDTYPE, CI.FIELD,\n> CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID <http://CG.ID> = CI.GROUPID where\n> CG.ISSUEID=? order by CG.CREATED asc, CI.ID <http://CI.ID> asc\n>\n>\n> This has an unbound variable '?' in it.\nThese queries are being run from a java application using JDBC and when run the variable is bound to an long integer \nvalue. While trying to investigate the problem, I have been just hard coding a value in the statement.\n>\n>\n> For some tasks we run this particular query a very large number of times and it has a significant performance\n> impact when it runs slowly.\n>\n> If we run ANALYSE over the CHANGEITEM table then the performance picks up by a factor of 5 or more. The problem\n> is that a day later the performance will have dropped back to its previously slow state.\n>\n> The reason this is so hard to understand is that the activity on this table is very low, with no updates and only\n> a relatively small number of inserts each day, < 0.1% of the table size.\n>\n> Explain output:\n> Sort (cost=86.90..86.93 rows=11 width=118) (actual time=0.086..0.087 rows=14 loops=1)\n> Sort Key: cg.created, ci.id <http://ci.id>\n> Sort Method: quicksort Memory: 26kB\n> -> Nested Loop (cost=0.00..86.71 rows=11 width=118) (actual time=0.022..0.061 rows=14 loops=1)\n> -> Index Scan using chggroup_issue on changegroup cg (cost=0.00..17.91 rows=8 width=33) (actual\n> time=0.012..0.015 rows=7 loops=1)\n> Index Cond: (issueid = 81001::numeric)\n> -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 rows=2 width=91) (actual\n> time=0.005..0.005 rows=2 loops=7)\n> Index Cond: (groupid = cg.id <http://cg.id>)\n> Total runtime: 0.116 ms\n>\n>\n> What's the exact SQL you used to get this ... did you use a specific CG.ISSUEID to run your test? 
If that's the case, \n> this EXPLAIN ANALYZE won't be the same as the one generated for your actual application.\n>\n> Craig\n>\n>\n> The explain output always seems the same even when the performance is poor, but I can't be sure of that.\n>\n> Overall it seems like PostgreSQL just forgets about the statistics it has gathered after a short while.\n>\n> Schema details:\n> CREATE TABLE changegroup\n> (\n> id numeric(18,0) NOT NULL,\n> issueid numeric(18,0),\n> author character varying(255),\n> created timestamp with time zone,\n> CONSTRAINT pk_changegroup PRIMARY KEY (id )\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> CREATE INDEX chggroup_issue\n> ON changegroup\n> USING btree\n> (issueid );\n>\n> CREATE TABLE changeitem\n> (\n> id numeric(18,0) NOT NULL,\n> groupid numeric(18,0),\n> fieldtype character varying(255),\n> field character varying(255),\n> oldvalue text,\n> oldstring text,\n> newvalue text,\n> newstring text,\n> CONSTRAINT pk_changeitem PRIMARY KEY (id )\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX chgitem_chggrp\n> ON changeitem\n> USING btree\n> (groupid );\n>\n> CREATE INDEX chgitem_field\n> ON changeitem\n> USING btree\n> (field COLLATE pg_catalog.\"default\" );\n>\n> Table sizes\n> changegroup : 2,000,000 rows\n> changeitem : 2,500,000 rows\n>\n> The changegroup table has on average about 4 rows per issueid value, which is the query parameter.\n>\n> We run autovacuum and autoanalyse, but as the activity in the table is low these are rarely if ever invoked on\n> these tables.\n>\n> Environment.\n> Testing using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, although this is a problem across a variety of\n> postgres versions.\n>\n>\n\n\n\n\n\n\n On 01/06/12 08:55, Craig James wrote:\n \n\nOn Thu, May 31, 2012 at 3:29 PM, Trevor\n Campbell <[email protected]>\n wrote:\n\n We are having trouble with a particular query being\n slow in a strange manner.\n\n The query is a join over two large tables that are\n suitably indexed.\n\n select CG.ID, CG.ISSUEID, CG.AUTHOR,\n CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD,\n CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING \n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM\n CI on CG.ID = CI.GROUPID where\n CG.ISSUEID=? order by CG.CREATED asc, CI.ID asc\n\n\n\n This has an unbound variable '?' in it.\n\n\n\n These queries are being run from a java application using JDBC and\n when run the variable is bound to an long integer value. While\n trying to investigate the problem, I have been just hard coding a\n value in the statement.\n\n\n\n\n\n \n For some tasks we run this particular query a very large\n number of times and it has a significant performance\n impact when it runs slowly.\n\n If we run ANALYSE over the CHANGEITEM table\n then the performance picks up by a factor of 5 or more. 
\n The problem is that a day later the performance will have\n dropped back to its previously slow state.\n\n The reason this is so hard to understand is that the\n activity on this table is very low, with no updates and\n only a relatively small number of inserts each day, <\n 0.1% of the table size.\n\n Explain output:\n Sort (cost=86.90..86.93 rows=11 width=118) (actual\n time=0.086..0.087 rows=14 loops=1)\n Sort Key: cg.created, ci.id\n Sort Method: quicksort Memory: 26kB\n -> Nested Loop (cost=0.00..86.71 rows=11 width=118)\n (actual time=0.022..0.061 rows=14 loops=1)\n -> Index Scan using chggroup_issue on\n changegroup cg (cost=0.00..17.91 rows=8 width=33) (actual\n time=0.012..0.015 rows=7 loops=1)\n Index Cond: (issueid = 81001::numeric)\n -> Index Scan using chgitem_chggrp on\n changeitem ci (cost=0.00..8.58 rows=2 width=91) (actual\n time=0.005..0.005 rows=2 loops=7)\n Index Cond: (groupid = cg.id)\n Total runtime: 0.116 ms\n\n\n\n What's the exact SQL you used to get this ... did you use a\n specific CG.ISSUEID to run your test? If that's the case,\n this EXPLAIN ANALYZE won't be the same as the one generated\n for your actual application.\n\n Craig\n\n \n\n \n The explain output always seems the same even when the\n performance is poor, but I can't be sure of that.\n\n Overall it seems like PostgreSQL just forgets about the\n statistics it has gathered after a short while.\n\n Schema details:\n CREATE TABLE changegroup\n (\n id numeric(18,0) NOT NULL,\n issueid numeric(18,0),\n author character varying(255),\n created timestamp with time zone,\n CONSTRAINT pk_changegroup PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n CREATE INDEX chggroup_issue\n ON changegroup\n USING btree\n (issueid );\n\n CREATE TABLE changeitem\n (\n id numeric(18,0) NOT NULL,\n groupid numeric(18,0),\n fieldtype character varying(255),\n field character varying(255),\n oldvalue text,\n oldstring text,\n newvalue text,\n newstring text,\n CONSTRAINT pk_changeitem PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n\n CREATE INDEX chgitem_chggrp\n ON changeitem\n USING btree\n (groupid );\n\n CREATE INDEX chgitem_field\n ON changeitem\n USING btree\n (field COLLATE pg_catalog.\"default\" );\n\n Table sizes\nchangegroup : 2,000,000\n\n rows\n changeitem : 2,500,000 rows\n\n The changegroup table has on average about 4 rows per\n issueid value, which is the query parameter.\n\n We run autovacuum and autoanalyse, but as the activity in\n the table is low these are rarely if ever invoked on these\n tables.\n\n Environment.\n Testing using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, \n although this is a problem across a variety of postgres\n versions.",
"msg_date": "Fri, 01 Jun 2012 09:01:24 +1000",
"msg_from": "Trevor Campbell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
{
"msg_contents": "On Thu, May 31, 2012 at 4:01 PM, Trevor Campbell <[email protected]>wrote:\n\n> On 01/06/12 08:55, Craig James wrote:\n>\n>\n>\n> On Thu, May 31, 2012 at 3:29 PM, Trevor Campbell <[email protected]>wrote:\n>\n>> We are having trouble with a particular query being slow in a strange\n>> manner.\n>>\n>> The query is a join over two large tables that are suitably indexed.\n>>\n>> select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE,\n>> CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n>> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID =\n>> CI.GROUPID where CG.ISSUEID=? order by CG.CREATED asc, CI.ID asc\n>>\n>\n> This has an unbound variable '?' in it.\n>\n> These queries are being run from a java application using JDBC and when\n> run the variable is bound to an long integer value. While trying to\n> investigate the problem, I have been just hard coding a value in the\n> statement.\n>\n\nI use Perl, not JDBC, but this thread may be relevant to your problem.\n\nhttp://postgresql.1045698.n5.nabble.com/Slow-statement-when-using-JDBC-td3368379.html\n\nCraig\n\n>\n>\n>> For some tasks we run this particular query a very large number of times\n>> and it has a significant performance impact when it runs slowly.\n>>\n>> If we run ANALYSE over the CHANGEITEM table then the performance picks\n>> up by a factor of 5 or more. The problem is that a day later the\n>> performance will have dropped back to its previously slow state.\n>>\n>> The reason this is so hard to understand is that the activity on this\n>> table is very low, with no updates and only a relatively small number of\n>> inserts each day, < 0.1% of the table size.\n>>\n>> Explain output:\n>> Sort (cost=86.90..86.93 rows=11 width=118) (actual time=0.086..0.087\n>> rows=14 loops=1)\n>> Sort Key: cg.created, ci.id\n>> Sort Method: quicksort Memory: 26kB\n>> -> Nested Loop (cost=0.00..86.71 rows=11 width=118) (actual\n>> time=0.022..0.061 rows=14 loops=1)\n>> -> Index Scan using chggroup_issue on changegroup cg\n>> (cost=0.00..17.91 rows=8 width=33) (actual time=0.012..0.015 rows=7 loops=1)\n>> Index Cond: (issueid = 81001::numeric)\n>> -> Index Scan using chgitem_chggrp on changeitem ci\n>> (cost=0.00..8.58 rows=2 width=91) (actual time=0.005..0.005 rows=2 loops=7)\n>> Index Cond: (groupid = cg.id)\n>> Total runtime: 0.116 ms\n>>\n>\n> What's the exact SQL you used to get this ... did you use a specific\n> CG.ISSUEID to run your test? 
If that's the case, this EXPLAIN ANALYZE\n> won't be the same as the one generated for your actual application.\n>\n> Craig\n>\n>\n>\n>>\n>> The explain output always seems the same even when the performance is\n>> poor, but I can't be sure of that.\n>>\n>> Overall it seems like PostgreSQL just forgets about the statistics it has\n>> gathered after a short while.\n>>\n>> Schema details:\n>> CREATE TABLE changegroup\n>> (\n>> id numeric(18,0) NOT NULL,\n>> issueid numeric(18,0),\n>> author character varying(255),\n>> created timestamp with time zone,\n>> CONSTRAINT pk_changegroup PRIMARY KEY (id )\n>> )\n>> WITH (\n>> OIDS=FALSE\n>> );\n>> CREATE INDEX chggroup_issue\n>> ON changegroup\n>> USING btree\n>> (issueid );\n>>\n>> CREATE TABLE changeitem\n>> (\n>> id numeric(18,0) NOT NULL,\n>> groupid numeric(18,0),\n>> fieldtype character varying(255),\n>> field character varying(255),\n>> oldvalue text,\n>> oldstring text,\n>> newvalue text,\n>> newstring text,\n>> CONSTRAINT pk_changeitem PRIMARY KEY (id )\n>> )\n>> WITH (\n>> OIDS=FALSE\n>> );\n>>\n>> CREATE INDEX chgitem_chggrp\n>> ON changeitem\n>> USING btree\n>> (groupid );\n>>\n>> CREATE INDEX chgitem_field\n>> ON changeitem\n>> USING btree\n>> (field COLLATE pg_catalog.\"default\" );\n>>\n>> Table sizes\n>> changegroup : 2,000,000 rows\n>> changeitem : 2,500,000 rows\n>>\n>> The changegroup table has on average about 4 rows per issueid value,\n>> which is the query parameter.\n>>\n>> We run autovacuum and autoanalyse, but as the activity in the table is\n>> low these are rarely if ever invoked on these tables.\n>>\n>> Environment.\n>> Testing using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, although this\n>> is a problem across a variety of postgres versions.\n>>\n>>\n>\n\nOn Thu, May 31, 2012 at 4:01 PM, Trevor Campbell <[email protected]> wrote:\n\n On 01/06/12 08:55, Craig James wrote:\n \n\nOn Thu, May 31, 2012 at 3:29 PM, Trevor\n Campbell <[email protected]>\n wrote:\n\n We are having trouble with a particular query being\n slow in a strange manner.\n\n The query is a join over two large tables that are\n suitably indexed.\n\n select CG.ID, CG.ISSUEID, CG.AUTHOR,\n CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD,\n CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING \n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM\n CI on CG.ID = CI.GROUPID where\n CG.ISSUEID=? order by CG.CREATED asc, CI.ID asc\n\n\n\n This has an unbound variable '?' in it.\n\n\n\n These queries are being run from a java application using JDBC and\n when run the variable is bound to an long integer value. While\n trying to investigate the problem, I have been just hard coding a\n value in the statement.I use Perl, not JDBC, but this thread may be relevant to your problem.http://postgresql.1045698.n5.nabble.com/Slow-statement-when-using-JDBC-td3368379.html\nCraig \n\n\n\n\n\n \n For some tasks we run this particular query a very large\n number of times and it has a significant performance\n impact when it runs slowly.\n\n If we run ANALYSE over the CHANGEITEM table\n then the performance picks up by a factor of 5 or more. 
\n The problem is that a day later the performance will have\n dropped back to its previously slow state.\n\n The reason this is so hard to understand is that the\n activity on this table is very low, with no updates and\n only a relatively small number of inserts each day, <\n 0.1% of the table size.\n\n Explain output:\n Sort (cost=86.90..86.93 rows=11 width=118) (actual\n time=0.086..0.087 rows=14 loops=1)\n Sort Key: cg.created, ci.id\n Sort Method: quicksort Memory: 26kB\n -> Nested Loop (cost=0.00..86.71 rows=11 width=118)\n (actual time=0.022..0.061 rows=14 loops=1)\n -> Index Scan using chggroup_issue on\n changegroup cg (cost=0.00..17.91 rows=8 width=33) (actual\n time=0.012..0.015 rows=7 loops=1)\n Index Cond: (issueid = 81001::numeric)\n -> Index Scan using chgitem_chggrp on\n changeitem ci (cost=0.00..8.58 rows=2 width=91) (actual\n time=0.005..0.005 rows=2 loops=7)\n Index Cond: (groupid = cg.id)\n Total runtime: 0.116 ms\n\n\n\n What's the exact SQL you used to get this ... did you use a\n specific CG.ISSUEID to run your test? If that's the case,\n this EXPLAIN ANALYZE won't be the same as the one generated\n for your actual application.\n\n Craig\n\n \n\n \n The explain output always seems the same even when the\n performance is poor, but I can't be sure of that.\n\n Overall it seems like PostgreSQL just forgets about the\n statistics it has gathered after a short while.\n\n Schema details:\n CREATE TABLE changegroup\n (\n id numeric(18,0) NOT NULL,\n issueid numeric(18,0),\n author character varying(255),\n created timestamp with time zone,\n CONSTRAINT pk_changegroup PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n CREATE INDEX chggroup_issue\n ON changegroup\n USING btree\n (issueid );\n\n CREATE TABLE changeitem\n (\n id numeric(18,0) NOT NULL,\n groupid numeric(18,0),\n fieldtype character varying(255),\n field character varying(255),\n oldvalue text,\n oldstring text,\n newvalue text,\n newstring text,\n CONSTRAINT pk_changeitem PRIMARY KEY (id )\n )\n WITH (\n OIDS=FALSE\n );\n\n CREATE INDEX chgitem_chggrp\n ON changeitem\n USING btree\n (groupid );\n\n CREATE INDEX chgitem_field\n ON changeitem\n USING btree\n (field COLLATE pg_catalog.\"default\" );\n\n Table sizes\nchangegroup : 2,000,000\n\n rows\n changeitem : 2,500,000 rows\n\n The changegroup table has on average about 4 rows per\n issueid value, which is the query parameter.\n\n We run autovacuum and autoanalyse, but as the activity in\n the table is low these are rarely if ever invoked on these\n tables.\n\n Environment.\n Testing using PostgreSQL 9.1.3 on x86_64-redhat-linux-gnu, \n although this is a problem across a variety of postgres\n versions.",
"msg_date": "Thu, 31 May 2012 16:08:37 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
{
"msg_contents": "Thanks Craig, that certainly leads down the right path.\n\nThe following is all done in pgAdmin3:\n\nUsing an actual value we I get the plan I expect\nexplain analyze select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, \nCI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = CI.GROUPID where CG.ISSUEID=10006 order by \nCG.CREATED asc, CI.ID asc\n\n\"Sort (cost=106.18..106.22 rows=13 width=434) (actual time=0.115..0.115 rows=12 loops=1)\"\n\" Sort Key: cg.created, ci.id\"\n\" Sort Method: quicksort Memory: 29kB\"\n\" -> Nested Loop (cost=0.00..105.94 rows=13 width=434) (actual time=0.019..0.067 rows=12 loops=1)\"\n\" -> Index Scan using chggroup_issue on changegroup cg (cost=0.00..19.73 rows=10 width=29) (actual \ntime=0.009..0.013 rows=10 loops=1)\"\n\" Index Cond: (issueid = 10006::numeric)\"\n\" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 rows=3 width=411) (actual \ntime=0.004..0.005 rows=1 loops=10)\"\n\" Index Cond: (groupid = cg.id)\"\n\"Total runtime: 0.153 ms\"\n\nUsing a prepared statement with a variable , I get a poor plan requiring a sequential scan\nprepare t2(real) as\n select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, \nCI.NEWVALUE, CI.NEWSTRING\n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = CI.GROUPID where CG.ISSUEID=$1 order by \nCG.CREATED asc, CI.ID asc;\n\n explain analyze execute t2 (10006);\n\n\"Sort (cost=126448.89..126481.10 rows=12886 width=434) (actual time=1335.615..1335.616 rows=12 loops=1)\"\n\" Sort Key: cg.created, ci.id\"\n\" Sort Method: quicksort Memory: 29kB\"\n\" -> Nested Loop (cost=0.00..125569.19 rows=12886 width=434) (actual time=0.046..1335.556 rows=12 loops=1)\"\n\" -> Seq Scan on changegroup cg (cost=0.00..44709.26 rows=10001 width=29) (actual time=0.026..1335.460 rows=10 \nloops=1)\"\n\" Filter: ((issueid)::double precision = $1)\"\n\" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.05 rows=3 width=411) (actual \ntime=0.007..0.008 rows=1 loops=10)\"\n\" Index Cond: (groupid = cg.id)\"\n\"Total runtime: 1335.669 ms\"\n\nUsing a prepared statement with a cast of the variable to the right type, I get the good plan back\nprepare t2(real) as\n select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, \nCI.NEWVALUE, CI.NEWSTRING\n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = CI.GROUPID where CG.ISSUEID=cast($1 as \nnumeric) order by CG.CREATED asc, CI.ID asc;\n\n explain analyze execute t2 (10006);\n\n\"Sort (cost=106.19..106.22 rows=13 width=434) (actual time=0.155..0.156 rows=12 loops=1)\"\n\" Sort Key: cg.created, ci.id\"\n\" Sort Method: quicksort Memory: 29kB\"\n\" -> Nested Loop (cost=0.00..105.95 rows=13 width=434) (actual time=0.048..0.111 rows=12 loops=1)\"\n\" -> Index Scan using chggroup_issue on changegroup cg (cost=0.00..19.73 rows=10 width=29) (actual \ntime=0.031..0.042 rows=10 loops=1)\"\n\" Index Cond: (issueid = ($1)::numeric)\"\n\" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 rows=3 width=411) (actual \ntime=0.006..0.006 rows=1 loops=10)\"\n\" Index Cond: (groupid = cg.id)\"\n\"Total runtime: 0.203 ms\"\n\nNow the challenge is to get java/jdbc to get this done right. 
We make a big effort to ensure we always use prepared \nstatements and variable bindings to help protect from SQL injection vulnerabilities.\n\n\n\nOn 01/06/12 09:08, Craig James wrote:\n> I use Perl, not JDBC, but this thread may be relevant to your problem.\n>\n> http://postgresql.1045698.n5.nabble.com/Slow-statement-when-using-JDBC-td3368379.html\n>\n>\n",
"msg_date": "Fri, 01 Jun 2012 09:34:24 +1000",
"msg_from": "Trevor Campbell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
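A small follow-up sketch: within the session that issued the PREPARE, the pg_prepared_statements view (available since 8.2) records the parameter types the server actually settled on, which makes the real/numeric mismatch above visible without reading the plan. It only covers the current session, so it cannot show statements prepared inside the JDBC backends.

SELECT name, parameter_types, statement
FROM pg_prepared_statements;
-- for the t2(real) version above this reports {real}; after re-preparing with a
-- numeric or bigint parameter it reports {numeric} / {bigint}, i.e. a type the
-- planner can match against the numeric issueid index without casting the column.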
{
"msg_contents": "On Thu, May 31, 2012 at 4:34 PM, Trevor Campbell <[email protected]>wrote:\n\n> Thanks Craig, that certainly leads down the right path.\n>\n> The following is all done in pgAdmin3:\n>\n> Using an actual value we I get the plan I expect\n> explain analyze select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID,\n> CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID =\n> CI.GROUPID where CG.ISSUEID=10006 order by CG.CREATED asc, CI.ID asc\n>\n> \"Sort (cost=106.18..106.22 rows=13 width=434) (actual time=0.115..0.115\n> rows=12 loops=1)\"\n> \" Sort Key: cg.created, ci.id\"\n> \" Sort Method: quicksort Memory: 29kB\"\n> \" -> Nested Loop (cost=0.00..105.94 rows=13 width=434) (actual\n> time=0.019..0.067 rows=12 loops=1)\"\n> \" -> Index Scan using chggroup_issue on changegroup cg\n> (cost=0.00..19.73 rows=10 width=29) (actual time=0.009..0.013 rows=10\n> loops=1)\"\n> \" Index Cond: (issueid = 10006::numeric)\"\n> \" -> Index Scan using chgitem_chggrp on changeitem ci\n> (cost=0.00..8.58 rows=3 width=411) (actual time=0.004..0.005 rows=1\n> loops=10)\"\n> \" Index Cond: (groupid = cg.id)\"\n> \"Total runtime: 0.153 ms\"\n>\n> Using a prepared statement with a variable , I get a poor plan requiring a\n> sequential scan\n> prepare t2(real) as\n> select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE,\n> CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID =\n> CI.GROUPID where CG.ISSUEID=$1 order by CG.CREATED asc, CI.ID asc;\n>\n> explain analyze execute t2 (10006);\n>\n> \"Sort (cost=126448.89..126481.10 rows=12886 width=434) (actual\n> time=1335.615..1335.616 rows=12 loops=1)\"\n> \" Sort Key: cg.created, ci.id\"\n> \" Sort Method: quicksort Memory: 29kB\"\n> \" -> Nested Loop (cost=0.00..125569.19 rows=12886 width=434) (actual\n> time=0.046..1335.556 rows=12 loops=1)\"\n> \" -> Seq Scan on changegroup cg (cost=0.00..44709.26 rows=10001\n> width=29) (actual time=0.026..1335.460 rows=10 loops=1)\"\n> \" Filter: ((issueid)::double precision = $1)\"\n> \" -> Index Scan using chgitem_chggrp on changeitem ci\n> (cost=0.00..8.05 rows=3 width=411) (actual time=0.007..0.008 rows=1\n> loops=10)\"\n> \" Index Cond: (groupid = cg.id)\"\n> \"Total runtime: 1335.669 ms\"\n>\n> Using a prepared statement with a cast of the variable to the right type,\n> I get the good plan back\n> prepare t2(real) as\n> select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE,\n> CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID =\n> CI.GROUPID where CG.ISSUEID=cast($1 as numeric) order by CG.CREATED asc,\n> CI.ID asc;\n>\n> explain analyze execute t2 (10006);\n>\n> \"Sort (cost=106.19..106.22 rows=13 width=434) (actual time=0.155..0.156\n> rows=12 loops=1)\"\n> \" Sort Key: cg.created, ci.id\"\n> \" Sort Method: quicksort Memory: 29kB\"\n> \" -> Nested Loop (cost=0.00..105.95 rows=13 width=434) (actual\n> time=0.048..0.111 rows=12 loops=1)\"\n> \" -> Index Scan using chggroup_issue on changegroup cg\n> (cost=0.00..19.73 rows=10 width=29) (actual time=0.031..0.042 rows=10\n> loops=1)\"\n> \" Index Cond: (issueid = ($1)::numeric)\"\n> \" -> Index Scan using chgitem_chggrp on changeitem ci\n> (cost=0.00..8.58 rows=3 width=411) (actual time=0.006..0.006 rows=1\n> loops=10)\"\n> \" Index Cond: (groupid = cg.id)\"\n> \"Total 
runtime: 0.203 ms\"\n>\n> Now the challenge is to get java/jdbc to get this done right. We make a\n> big effort to ensure we always use prepared statements and variable\n> bindings to help protect from SQL injection vulnerabilities.\n>\n\nJDBC has some features that are supposed to be convenient (automatic\npreparing based on a number-of-executions threshold) that strike me as\nmisguided. It's one thing to hide irrelevant details from the app, and\nanother thing entirely to cause a huge change in the exact SQL that's sent\nto the server ... which is what JDBC seems to do.\n\nI think the trick is that if you use JDBC prepared statements, you have to\nunderstand how it's trying to be trickly and circumvent it so that you're\nalways in full control of what it's doing.\n\nCraig\n\n\n>\n>\n>\n> On 01/06/12 09:08, Craig James wrote:\n>\n>> I use Perl, not JDBC, but this thread may be relevant to your problem.\n>>\n>> http://postgresql.1045698.n5.**nabble.com/Slow-statement-**\n>> when-using-JDBC-td3368379.html<http://postgresql.1045698.n5.nabble.com/Slow-statement-when-using-JDBC-td3368379.html>\n>>\n>>\n>>\n\nOn Thu, May 31, 2012 at 4:34 PM, Trevor Campbell <[email protected]> wrote:\nThanks Craig, that certainly leads down the right path.\n\nThe following is all done in pgAdmin3:\n\nUsing an actual value we I get the plan I expect\nexplain analyze select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n\n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = CI.GROUPID where CG.ISSUEID=10006 order by CG.CREATED asc, CI.ID asc\n\n\"Sort (cost=106.18..106.22 rows=13 width=434) (actual time=0.115..0.115 rows=12 loops=1)\"\n\" Sort Key: cg.created, ci.id\"\n\" Sort Method: quicksort Memory: 29kB\"\n\" -> Nested Loop (cost=0.00..105.94 rows=13 width=434) (actual time=0.019..0.067 rows=12 loops=1)\"\n\" -> Index Scan using chggroup_issue on changegroup cg (cost=0.00..19.73 rows=10 width=29) (actual time=0.009..0.013 rows=10 loops=1)\"\n\" Index Cond: (issueid = 10006::numeric)\"\n\" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 rows=3 width=411) (actual time=0.004..0.005 rows=1 loops=10)\"\n\" Index Cond: (groupid = cg.id)\"\n\"Total runtime: 0.153 ms\"\n\nUsing a prepared statement with a variable , I get a poor plan requiring a sequential scan\nprepare t2(real) as\n select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n\n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = CI.GROUPID where CG.ISSUEID=$1 order by CG.CREATED asc, CI.ID asc;\n\n explain analyze execute t2 (10006);\n\n\"Sort (cost=126448.89..126481.10 rows=12886 width=434) (actual time=1335.615..1335.616 rows=12 loops=1)\"\n\" Sort Key: cg.created, ci.id\"\n\" Sort Method: quicksort Memory: 29kB\"\n\" -> Nested Loop (cost=0.00..125569.19 rows=12886 width=434) (actual time=0.046..1335.556 rows=12 loops=1)\"\n\" -> Seq Scan on changegroup cg (cost=0.00..44709.26 rows=10001 width=29) (actual time=0.026..1335.460 rows=10 loops=1)\"\n\" Filter: ((issueid)::double precision = $1)\"\n\" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.05 rows=3 width=411) (actual time=0.007..0.008 rows=1 loops=10)\"\n\" Index Cond: (groupid = cg.id)\"\n\"Total runtime: 1335.669 ms\"\n\nUsing a prepared statement with a cast of the variable to the right type, I get the good plan back\nprepare t2(real) as\n select CG.ID, CG.ISSUEID, 
CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n\n from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = CI.GROUPID where CG.ISSUEID=cast($1 as numeric) order by CG.CREATED asc, CI.ID asc;\n\n explain analyze execute t2 (10006);\n\n\"Sort (cost=106.19..106.22 rows=13 width=434) (actual time=0.155..0.156 rows=12 loops=1)\"\n\" Sort Key: cg.created, ci.id\"\n\" Sort Method: quicksort Memory: 29kB\"\n\" -> Nested Loop (cost=0.00..105.95 rows=13 width=434) (actual time=0.048..0.111 rows=12 loops=1)\"\n\" -> Index Scan using chggroup_issue on changegroup cg (cost=0.00..19.73 rows=10 width=29) (actual time=0.031..0.042 rows=10 loops=1)\"\n\" Index Cond: (issueid = ($1)::numeric)\"\n\" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 rows=3 width=411) (actual time=0.006..0.006 rows=1 loops=10)\"\n\" Index Cond: (groupid = cg.id)\"\n\"Total runtime: 0.203 ms\"\n\nNow the challenge is to get java/jdbc to get this done right. We make a big effort to ensure we always use prepared statements and variable bindings to help protect from SQL injection vulnerabilities.\nJDBC has some features that are supposed to be convenient (automatic preparing based on a number-of-executions threshold) that strike me as misguided. It's one thing to hide irrelevant details from the app, and another thing entirely to cause a huge change in the exact SQL that's sent to the server ... which is what JDBC seems to do.\nI think the trick is that if you use JDBC prepared statements, you have to understand how it's trying to be trickly and circumvent it so that you're always in full control of what it's doing.Craig\n \n\n\n\nOn 01/06/12 09:08, Craig James wrote:\n\nI use Perl, not JDBC, but this thread may be relevant to your problem.\n\nhttp://postgresql.1045698.n5.nabble.com/Slow-statement-when-using-JDBC-td3368379.html",
"msg_date": "Thu, 31 May 2012 17:08:49 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
{
"msg_contents": "If I am correct, JDBC uses named portal only on the 5th time you use \nPreparedStatement (configurable). Before it uses unnamed thing that \nshould work as if you did embed the value. So the solution is to \nrecreate PreparedStatement each time (so you will have no problems with \nSQL injection). Note that \"smart\" pools may detect this situation and \nreuse PreparedStatement for same query texts internally. If so, this to \nswitch this off.\nIn case you still have problems, I'd recommend you to ask in postgresql \njdbc mailing list.\nAlso I've heard that somewhere in 9.2 postgresql server may replan such \ncases each time.\n\nBest regards, Vitalii Tymchyshyn\n\n01.06.12 02:34, Trevor Campbell написав(ла):\n> Thanks Craig, that certainly leads down the right path.\n>\n> The following is all done in pgAdmin3:\n>\n> Using an actual value we I get the plan I expect\n> explain analyze select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, \n> CI.ID, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, \n> CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = \n> CI.GROUPID where CG.ISSUEID=10006 order by CG.CREATED asc, CI.ID asc\n>\n> \"Sort (cost=106.18..106.22 rows=13 width=434) (actual \n> time=0.115..0.115 rows=12 loops=1)\"\n> \" Sort Key: cg.created, ci.id\"\n> \" Sort Method: quicksort Memory: 29kB\"\n> \" -> Nested Loop (cost=0.00..105.94 rows=13 width=434) (actual \n> time=0.019..0.067 rows=12 loops=1)\"\n> \" -> Index Scan using chggroup_issue on changegroup cg \n> (cost=0.00..19.73 rows=10 width=29) (actual time=0.009..0.013 rows=10 \n> loops=1)\"\n> \" Index Cond: (issueid = 10006::numeric)\"\n> \" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 \n> rows=3 width=411) (actual time=0.004..0.005 rows=1 loops=10)\"\n> \" Index Cond: (groupid = cg.id)\"\n> \"Total runtime: 0.153 ms\"\n>\n> Using a prepared statement with a variable , I get a poor plan \n> requiring a sequential scan\n> prepare t2(real) as\n> select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, \n> CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = \n> CI.GROUPID where CG.ISSUEID=$1 order by CG.CREATED asc, CI.ID asc;\n>\n> explain analyze execute t2 (10006);\n>\n> \"Sort (cost=126448.89..126481.10 rows=12886 width=434) (actual \n> time=1335.615..1335.616 rows=12 loops=1)\"\n> \" Sort Key: cg.created, ci.id\"\n> \" Sort Method: quicksort Memory: 29kB\"\n> \" -> Nested Loop (cost=0.00..125569.19 rows=12886 width=434) (actual \n> time=0.046..1335.556 rows=12 loops=1)\"\n> \" -> Seq Scan on changegroup cg (cost=0.00..44709.26 rows=10001 \n> width=29) (actual time=0.026..1335.460 rows=10 loops=1)\"\n> \" Filter: ((issueid)::double precision = $1)\"\n> \" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.05 \n> rows=3 width=411) (actual time=0.007..0.008 rows=1 loops=10)\"\n> \" Index Cond: (groupid = cg.id)\"\n> \"Total runtime: 1335.669 ms\"\n>\n> Using a prepared statement with a cast of the variable to the right \n> type, I get the good plan back\n> prepare t2(real) as\n> select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE, \n> CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n> from PUBLIC.CHANGEGROUP CG inner join PUBLIC.CHANGEITEM CI on CG.ID = \n> CI.GROUPID where CG.ISSUEID=cast($1 as numeric) order by CG.CREATED \n> asc, CI.ID asc;\n>\n> explain analyze execute t2 (10006);\n>\n> \"Sort 
(cost=106.19..106.22 rows=13 width=434) (actual \n> time=0.155..0.156 rows=12 loops=1)\"\n> \" Sort Key: cg.created, ci.id\"\n> \" Sort Method: quicksort Memory: 29kB\"\n> \" -> Nested Loop (cost=0.00..105.95 rows=13 width=434) (actual \n> time=0.048..0.111 rows=12 loops=1)\"\n> \" -> Index Scan using chggroup_issue on changegroup cg \n> (cost=0.00..19.73 rows=10 width=29) (actual time=0.031..0.042 rows=10 \n> loops=1)\"\n> \" Index Cond: (issueid = ($1)::numeric)\"\n> \" -> Index Scan using chgitem_chggrp on changeitem ci (cost=0.00..8.58 \n> rows=3 width=411) (actual time=0.006..0.006 rows=1 loops=10)\"\n> \" Index Cond: (groupid = cg.id)\"\n> \"Total runtime: 0.203 ms\"\n>\n> Now the challenge is to get java/jdbc to get this done right. We make \n> a big effort to ensure we always use prepared statements and variable \n> bindings to help protect from SQL injection vulnerabilities.\n>\n>\n>\n> On 01/06/12 09:08, Craig James wrote:\n>> I use Perl, not JDBC, but this thread may be relevant to your problem.\n>>\n>> http://postgresql.1045698.n5.nabble.com/Slow-statement-when-using-JDBC-td3368379.html \n>>\n>>\n>>\n>\n\n",
"msg_date": "Fri, 01 Jun 2012 12:06:51 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
{
"msg_contents": "> If I am correct, JDBC uses named portal only on the 5th time you use\n> PreparedStatement (configurable). Before it uses unnamed thing that should\n> work as if you did embed the value.\n\nIf this is due to the difference in parameter type information, this\ndoesn't have anything to do with named portals.\n\nMy guess is that the driver has one idea about parameter types (based\non either the specific setTypeFoo call or the Java type of the\nparameter passed to setObject), and the server another (based on the\ntype information of the CHANGEGROUP.ISSUEID column). Actually, I'm\nrather surprised to see 'real' there: if you're using setObject with a\nLong, I would imagine that turns into a bigint (which I believe the\nserver knows how to coerce to numeric). Can you show us your JDBC\ncode?\n",
"msg_date": "Fri, 1 Jun 2012 08:38:10 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
{
"msg_contents": "Hi all,\nI got 2 complementary functions, which will do opposite things.\n1 - CombineStrings(stringarray character varying[]) RETURNS character varying\nThis one will take as parameter an array of strings and will return a string with some formatted information inside\n2- SplitString2Array(stringtosplit character varying) RETURNS character varying[] \nThis one will take as parameter a formatted string and will return an array of string\n\nThe following is true, both works just fine : \nselect SplitString2Array(CombineStrings(ARRAY['abba', 'queen']))\nwill return {'abba', 'queen'}\n\nNow, if I want do do the following:\nselect CombineStrings(ARRAY[SplitString2Array(\"SomeTextColumn\"), 'New string to add']) from \"SomeTable\"\n\ni get the following error:\narray value must start with \"{\" or dimension information\n\n\nWhat am I doing wrong, I am feeling I still don't get the array fundamentals. My goal is to add to inside formatted information in the column \"SomeTextColumn\" my new string 'New string to add' in the same manner if I would been used the following:\nInsert into \"SomeTable\"(\"SomeTextColumn\") values (CombineString(ARRAY['abba', 'queen', 'New string to add']))\n\nThank you in advance,\nDanny\n\nHi all,I got 2 complementary functions, which will do opposite things.1 - CombineStrings(stringarray character varying[]) RETURNS character varyingThis one will take as parameter an array of strings and will return a string with some formatted information inside2- SplitString2Array(stringtosplit character varying) RETURNS character varying[] This one will take as parameter a formatted string and will return an array of stringThe following is true, both works just fine : select\n SplitString2Array(CombineStrings(ARRAY['abba', 'queen']))will return {'abba', 'queen'}Now, if I want do do the following:select CombineStrings(ARRAY[SplitString2Array(\"SomeTextColumn\"), 'New string to add']) from \"SomeTable\"i get the following error:array value must start with \"{\" or dimension informationWhat am I doing wrong, I am feeling I still don't get the array fundamentals. My goal is to add to inside formatted information in the column \"SomeTextColumn\" my new string 'New string to add' in the same manner if I would been used the following:Insert into \"SomeTable\"(\"SomeTextColumn\") values (CombineString(ARRAY['abba', 'queen', 'New string to add']))Thank you in advance,Danny",
"msg_date": "Sat, 2 Jun 2012 10:05:30 -0700 (PDT)",
"msg_from": "idc danny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Array fundamentals"
},
{
"msg_contents": "On Sat, 2012-06-02 at 10:05 -0700, idc danny wrote:\n> Now, if I want do do the following:\n> select CombineStrings(ARRAY[SplitString2Array(\"SomeTextColumn\"), 'New\n> string to add']) from \"SomeTable\"\n> \n> i get the following error:\n> array value must start with \"{\" or dimension information\n\nThis discussion is better suited to another list, like -general, so I'm\nmoving it there.\n\nIn the fragment:\n ARRAY[SplitString2Array(\"SomeTextColumn\"), 'New string to add']\nThe first array element is itself an array of strings, but the second is\na plain string. Array elements must all be the same type.\n\nWhat you want to do is replace that fragment with something more like:\n array_append(SplitString2Array(\"SomeTextColumn\"), 'New string to add')\n\nIf that still doesn't work, we'll need to see the exact definitions of\nyour functions.\n\nAlso, as a debugging strategy, I recommend that you look at the pieces\nthat do work, and slowly build up the fragments until it doesn't work.\nThat will allow you to see the inputs to each function, and it makes it\neasier to see why it doesn't work.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Sat, 02 Jun 2012 10:21:35 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Array fundamentals"
},
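A quick way to see the distinction Jeff describes is to compare the constructions directly in psql. This is a minimal sketch using literal arrays in place of the poster's CombineStrings/SplitString2Array functions, whose definitions were not shown:

    -- In ARRAY[a, b] both elements must have the same type; in the failing statement
    -- above the untyped string literal ends up being read as an array literal, hence
    -- the "array value must start with ..." error.  Appending one element is done with:
    SELECT array_append(ARRAY['abba', 'queen'], 'New string to add');
    -- expected: {abba,queen,"New string to add"}

    -- and combining two arrays with:
    SELECT array_cat(ARRAY['abba', 'queen'], ARRAY['New string to add']);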
{
"msg_contents": "Thanks for all your help so far. I have been away for a couple of days so my apologies for not replying earlier.\n\nWe are using a third party library to run our SQL via JDBC (not one of the common ones like Hibernate etc), but I have \nbeen able to dig out the exact statements run in the scenario we are experiencing.\n\nWe always run a prepare and then execute a query as follows, we never reuse the prepared statement:\n\n_ps = _connection.prepareStatement(sql, resultSetType, resultSetConcurrency);\n\n// The values here are:\n// sql = SELECT CG.ID, CG.issueid, CG.AUTHOR, CG.CREATED, CI.ID, CI.groupid, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, \nCI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING\n// FROM public.changegroup CG INNER JOIN public.changeitem CI ON CG.ID = CI.groupid\n// WHERE CG.issueid=? ORDER BY CG.CREATED ASC, CI.ID ASC\n// resultSetType = ResultSet.TYPE_FORWARD_ONLY\n// resultSetConcurrency = ResultSet.CONCUR_READ_ONLY\n\n_ps.setLong(1, field);\n\n_rs = _ps.executeQuery();\n\nOn 02/06/12 01:38, Maciek Sakrejda wrote:\n>> If I am correct, JDBC uses named portal only on the 5th time you use\n>> PreparedStatement (configurable). Before it uses unnamed thing that should\n>> work as if you did embed the value.\n> If this is due to the difference in parameter type information, this\n> doesn't have anything to do with named portals.\n>\n> My guess is that the driver has one idea about parameter types (based\n> on either the specific setTypeFoo call or the Java type of the\n> parameter passed to setObject), and the server another (based on the\n> type information of the CHANGEGROUP.ISSUEID column). Actually, I'm\n> rather surprised to see 'real' there: if you're using setObject with a\n> Long, I would imagine that turns into a bigint (which I believe the\n> server knows how to coerce to numeric). Can you show us your JDBC\n> code?\n\n\n\n\n\n\nThanks for all your help so far. I have\n been away for a couple of days so my apologies for not replying\n earlier.\n\n We are using a third party library to run our SQL via JDBC (not one\n of the common ones like Hibernate etc), but I have been able to dig\n out the exact statements run in the scenario we are experiencing.\n\n We always run a prepare and then execute a query as follows, we\n never reuse the prepared statement:\n\n _ps = _connection.prepareStatement(sql, resultSetType,\n resultSetConcurrency); \n\n // The values here are:\n // sql = SELECT CG.ID, CG.issueid, CG.AUTHOR, CG.CREATED, CI.ID,\n CI.groupid, CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING,\n CI.NEWVALUE, CI.NEWSTRING \n // FROM public.changegroup CG INNER JOIN\n public.changeitem CI ON CG.ID = CI.groupid \n // WHERE CG.issueid=? ORDER BY CG.CREATED\n ASC, CI.ID ASC\n // resultSetType = ResultSet.TYPE_FORWARD_ONLY \n // resultSetConcurrency = ResultSet.CONCUR_READ_ONLY\n\n _ps.setLong(1, field); \n\n _rs = _ps.executeQuery();\n\n On 02/06/12 01:38, Maciek Sakrejda wrote:\n \n\nIf I am correct, JDBC uses named portal only on the 5th time you use\nPreparedStatement (configurable). Before it uses unnamed thing that should\nwork as if you did embed the value.\n\n\n\nIf this is due to the difference in parameter type information, this\ndoesn't have anything to do with named portals.\n\nMy guess is that the driver has one idea about parameter types (based\non either the specific setTypeFoo call or the Java type of the\nparameter passed to setObject), and the server another (based on the\ntype information of the CHANGEGROUP.ISSUEID column). 
Actually, I'm\nrather surprised to see 'real' there: if you're using setObject with a\nLong, I would imagine that turns into a bigint (which I believe the\nserver knows how to coerce to numeric). Can you show us your JDBC\ncode?",
"msg_date": "Tue, 05 Jun 2012 11:25:56 +1000",
"msg_from": "Trevor Campbell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
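One way to keep the parameter binding (and the SQL-injection protection) while getting the good plan back is to put the cast from the earlier experiment into the SQL text the application prepares, so the driver-supplied parameter is converted to the column's numeric type on the server side. A sketch of the amended statement, unchanged apart from the WHERE clause; the ? placeholder is still bound with setLong as above:

    SELECT CG.ID, CG.issueid, CG.AUTHOR, CG.CREATED, CI.ID, CI.groupid,
           CI.FIELDTYPE, CI.FIELD, CI.OLDVALUE, CI.OLDSTRING, CI.NEWVALUE, CI.NEWSTRING
      FROM public.changegroup CG
     INNER JOIN public.changeitem CI ON CG.ID = CI.groupid
     WHERE CG.issueid = CAST(? AS numeric)
     ORDER BY CG.CREATED ASC, CI.ID ASC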
{
"msg_contents": "> Actually, I'm rather surprised to see 'real' there: if you're using setObject with a Long, I would imagine that turns \n> into a bigint (which I believe the server knows how to coerce to numeric).\nThe (real) is just my fault in testing. I just copy/pasted from elsewhere and it is not what is coming from the JDBC \ndriver, I don't know how to see what the driver is actually sending.\n",
"msg_date": "Tue, 05 Jun 2012 13:49:24 +1000",
"msg_from": "Trevor Campbell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trouble with plan statistics for behaviour for query."
},
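To see what the driver is actually sending, one option is to turn on statement logging on the server for the application's role and read the parse/bind/execute entries in the PostgreSQL log; the bound parameter values should appear in the accompanying DETAIL lines. A minimal sketch, run as a superuser, with a hypothetical role name jira_app:

    ALTER ROLE jira_app SET log_statement = 'all';
    ALTER ROLE jira_app SET log_min_duration_statement = 0;
    -- new sessions of that role are now logged; undo afterwards with:
    ALTER ROLE jira_app RESET log_statement;
    ALTER ROLE jira_app RESET log_min_duration_statement;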
{
"msg_contents": "On Sat, Jun 2, 2012 at 1:05 PM, idc danny <[email protected]> wrote:\n> Hi all,\n> I got 2 complementary functions, which will do opposite things.\n> 1 - CombineStrings(stringarray character varying[]) RETURNS character\n> varying\n> This one will take as parameter an array of strings and will return a string\n> with some formatted information inside\n> 2- SplitString2Array(stringtosplit character varying) RETURNS character\n> varying[]\n> This one will take as parameter a formatted string and will return an array\n> of string\n>\n> The following is true, both works just fine :\n> select SplitString2Array(CombineStrings(ARRAY['abba', 'queen']))\n> will return {'abba', 'queen'}\n>\n> Now, if I want do do the following:\n> select CombineStrings(ARRAY[SplitString2Array(\"SomeTextColumn\"), 'New string\n> to add']) from \"SomeTable\"\n> i get the following error:\n> array value must start with \"{\" or dimension information\n>\n> What am I doing wrong, I am feeling I still don't get the array\n> fundamentals. My goal is to add to inside formatted information in the\n> column \"SomeTextColumn\" my new string 'New string to add' in the same manner\n> if I would been used the following:\n> Insert into \"SomeTable\"(\"SomeTextColumn\") values\n> (CombineString(ARRAY['abba', 'queen', 'New string to add']))\n\nIt sounds like one or both of your functions have a bug in them, but\nwithout knowing what they're supposed to do or seeing the source code,\nit's pretty hard to guess what it might be.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 18 Jul 2012 15:32:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array fundamentals"
}
] |
[
{
"msg_contents": "While investigating some performance issues I have been looking at slow\nqueries logged to the postgresql.log file. A strange thing that I have\nseen is a series of apparently very slow queries that just select from a\nsequence. It is as if access to a sequence is blocked for many sessions and\nthen released as I get log entries like this appearing:\n\nLOG: duration: 23702.553 ms execute <unnamed>: /* dynamic native SQL\nquery */ select nextval ('my_sequence') as nextval\nLOG: duration: 23673.068 ms execute <unnamed>: /* dynamic native SQL\nquery */ select nextval ('my_sequence') as nextval\nLOG: duration: 23632.729 ms execute <unnamed>: /* dynamic native SQL\nquery */ select nextval ('my_sequence') as nextval\n....(Many similar lines)....\nLOG: duration: 3055.057 ms execute <unnamed>: /* dynamic native SQL query\n*/ select nextval ('my_sequence') as nextval\nLOG: duration: 2377.621 ms execute <unnamed>: /* dynamic native SQL query\n*/ select nextval ('my_sequence') as nextval\nLOG: duration: 743.732 ms execute <unnamed>: /* dynamic native SQL query\n*/ select nextval ('my_sequence') as nextval\n\nThe code is being executed via Hibernate, but using\nSession.createSQLQuery(), so the SQL above appears in the source as above\n(minus the comment) and not as part of any ORM magic. We are using\nPostgresql 9.0.\n\nThis seems very strange to me. What could cause a sequence to be locked for\nsuch a long time?\nThe sequence in question has cache set at 1. Would setting this higher make\nany difference?\n\nThanks\n\nChris\n\nWhile investigating some performance issues I have been looking at slow queries logged to the postgresql.log file. A strange thing that I have seen is a series of apparently very slow queries that just select from a sequence. It is as if access to a sequence is blocked for many sessions and then released as I get log entries like this appearing:\nLOG: duration: 23702.553 ms execute <unnamed>: /* dynamic native SQL query */ select nextval ('my_sequence') as nextvalLOG: duration: 23673.068 ms execute <unnamed>: /* dynamic native SQL query */ select nextval ('my_sequence') as nextval\nLOG: duration: 23632.729 ms execute <unnamed>: /* dynamic native SQL query */ select nextval ('my_sequence') as nextval....(Many similar lines)....LOG: duration: 3055.057 ms execute <unnamed>: /* dynamic native SQL query */ select nextval ('my_sequence') as nextval\nLOG: duration: 2377.621 ms execute <unnamed>: /* dynamic native SQL query */ select nextval ('my_sequence') as nextvalLOG: duration: 743.732 ms execute <unnamed>: /* dynamic native SQL query */ select nextval ('my_sequence') as nextval\nThe code is being executed via Hibernate, but using Session.createSQLQuery(), so the SQL above appears in the source as above (minus the comment) and not as part of any ORM magic. We are using Postgresql 9.0.\nThis seems very strange to me. What could cause a sequence to be locked for such a long time?The sequence in question has cache set at 1. Would setting this higher make any difference? \nThanksChris",
"msg_date": "Fri, 1 Jun 2012 14:13:21 +0100",
"msg_from": "Chris Rimmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select from sequence in slow query log"
},
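On the last question: raising the sequence cache is a cheap experiment. Each backend then pre-allocates a block of values and touches the sequence only once per block, at the price of gaps and out-of-order values across sessions. A minimal sketch against the sequence name used above:

    ALTER SEQUENCE my_sequence CACHE 50;

    -- on 9.0 the current settings can be read straight from the sequence relation:
    SELECT last_value, increment_by, cache_value FROM my_sequence;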
{
"msg_contents": "Chris Rimmer <[email protected]> writes:\n> While investigating some performance issues I have been looking at slow\n> queries logged to the postgresql.log file. A strange thing that I have\n> seen is a series of apparently very slow queries that just select from a\n> sequence. It is as if access to a sequence is blocked for many sessions and\n> then released as I get log entries like this appearing:\n\n> LOG: duration: 23702.553 ms execute <unnamed>: /* dynamic native SQL\n> query */ select nextval ('my_sequence') as nextval\n> LOG: duration: 23673.068 ms execute <unnamed>: /* dynamic native SQL\n> query */ select nextval ('my_sequence') as nextval\n> LOG: duration: 23632.729 ms execute <unnamed>: /* dynamic native SQL\n> query */ select nextval ('my_sequence') as nextval\n> ....(Many similar lines)....\n\nThat's pretty weird. What else is being done to that sequence? Is it\nonly the sequence ops that are slow, or does this happen at times when\neverything else is slow too? Can you create a reproducible test case?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Jun 2012 09:47:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select from sequence in slow query log "
},
{
"msg_contents": "It looks like this effect only occurs in the middle of the night when there\nis some kind of automated dump process going on and the system is under\nhigher than normal load. I haven't managed to reproduce them outside of\nproduction, but since these oddities don't seem to show up during normal\noperations, I'm not worrying too much about them now.\n\nThanks\n\nChris\n\nOn 1 June 2012 14:47, Tom Lane <[email protected]> wrote:\n\n> Chris Rimmer <[email protected]> writes:\n> > While investigating some performance issues I have been looking at slow\n> > queries logged to the postgresql.log file. A strange thing that I have\n> > seen is a series of apparently very slow queries that just select from a\n> > sequence. It is as if access to a sequence is blocked for many sessions\n> and\n> > then released as I get log entries like this appearing:\n>\n> > LOG: duration: 23702.553 ms execute <unnamed>: /* dynamic native SQL\n> > query */ select nextval ('my_sequence') as nextval\n> > LOG: duration: 23673.068 ms execute <unnamed>: /* dynamic native SQL\n> > query */ select nextval ('my_sequence') as nextval\n> > LOG: duration: 23632.729 ms execute <unnamed>: /* dynamic native SQL\n> > query */ select nextval ('my_sequence') as nextval\n> > ....(Many similar lines)....\n>\n> That's pretty weird. What else is being done to that sequence? Is it\n> only the sequence ops that are slow, or does this happen at times when\n> everything else is slow too? Can you create a reproducible test case?\n>\n> regards, tom lane\n>\n>\n\nIt looks like this effect only occurs in the middle of the night when there is some kind of automated dump process going on and the system is under higher than normal load. I haven't managed to reproduce them outside of production, but since these oddities don't seem to show up during normal operations, I'm not worrying too much about them now.\nThanksChris\nOn 1 June 2012 14:47, Tom Lane <[email protected]> wrote:\nChris Rimmer <[email protected]> writes:\n> While investigating some performance issues I have been looking at slow\n> queries logged to the postgresql.log file. A strange thing that I have\n> seen is a series of apparently very slow queries that just select from a\n> sequence. It is as if access to a sequence is blocked for many sessions and\n> then released as I get log entries like this appearing:\n\n> LOG: duration: 23702.553 ms execute <unnamed>: /* dynamic native SQL\n> query */ select nextval ('my_sequence') as nextval\n> LOG: duration: 23673.068 ms execute <unnamed>: /* dynamic native SQL\n> query */ select nextval ('my_sequence') as nextval\n> LOG: duration: 23632.729 ms execute <unnamed>: /* dynamic native SQL\n> query */ select nextval ('my_sequence') as nextval\n> ....(Many similar lines)....\n\nThat's pretty weird. What else is being done to that sequence? Is it\nonly the sequence ops that are slow, or does this happen at times when\neverything else is slow too? Can you create a reproducible test case?\n\n regards, tom lane",
"msg_date": "Fri, 1 Jun 2012 16:41:53 +0100",
"msg_from": "Chris Rimmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Select from sequence in slow query log"
}
] |
[
{
"msg_contents": "Claudio Freire wrote:\n> Stephen Frost wrote:\n>> Rajesh Kumar. Mallah ([email protected]) wrote:\n \n>>> we are actually also running out db max connections (also)\n>>> ( which is currently at 600) , when that happens something at\n>>> the beginning of the application stack also gets dysfunctional\n>>> and it changes the very input to the system. ( think of negative\n>>> feedback systems )\n>>\n>> Oh. Yeah, have you considered pgbouncer?\n> \n> Or pooling at the application level. Many ORMs support connection\n> pooling and limiting out-of-the-box.\n> \n> In essence, postgres should never bounce connections, it should all\n> be handled by the application or a previous pgbouncer, both of\n> which would do it more efficient and effectively.\n \nStephen and Claudio have, I think, pointed you in the right\ndirection. For more detail on why, see this Wiki page:\n \nhttp://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n \n-Kevin\n",
"msg_date": "Fri, 01 Jun 2012 21:54:00 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High load average in 64-core server , no I/O wait and CPU is idle"
}
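For the symptom quoted above (max_connections of 600 being exhausted), a quick way to watch how close the server is to the limit while a pooler is being put in place; a minimal sketch:

    SHOW max_connections;

    -- current backends, grouped by database and user:
    SELECT datname, usename, count(*)
      FROM pg_stat_activity
     GROUP BY datname, usename
     ORDER BY count(*) DESC;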
] |
[
{
"msg_contents": "Hi everyone, \n\nI am trying to run the following query:\n\nSELECT count(1) --DISTINCT l_userqueue.queueid\n FROM e_usersessions\n JOIN l_userqueue\n ON l_userqueue.userid = e_usersessions.entityid\n JOIN a_activity\n ON a_activity.activequeueid = l_userqueue.queueid\n AND a_activity.vstatus = 1\n AND a_activity.ventrydate > 0\n AND a_activity.sbuid = e_usersessions.sbuid \n AND a_activity.assignedtoid = 0\n AND a_activity.status <> '0' \n WHERE e_usersessions.sessionkeepalivedatetime > 20120605082131943\n\nExplain analyze:\n'Aggregate (cost=100402.10..100402.11 rows=1 width=0) (actual time=2249.051..2249.051 rows=1 loops=1)'\n' -> Hash Join (cost=10.93..99795.09 rows=242803 width=0) (actual time=0.541..2249.027 rows=33 loops=1)'\n' Hash Cond: ((a_activity.activequeueid = l_userqueue.queueid) AND (a_activity.sbuid = e_usersessions.sbuid))'\n' -> Seq Scan on a_activity (cost=0.00..88462.52 rows=1208167 width=22) (actual time=0.010..1662.142 rows=1207855 loops=1)'\n' Filter: ((ventrydate > 0) AND ((status)::text <> '0'::text) AND (vstatus = 1) AND (assignedtoid = 0::numeric))'\n' -> Hash (cost=10.86..10.86 rows=5 width=22) (actual time=0.053..0.053 rows=4 loops=1)'\n' -> Hash Join (cost=9.38..10.86 rows=5 width=22) (actual time=0.033..0.048 rows=4 loops=1)'\n' Hash Cond: (l_userqueue.userid = e_usersessions.entityid)'\n' -> Seq Scan on l_userqueue (cost=0.00..1.23 rows=23 width=27) (actual time=0.003..0.009 rows=23 loops=1)'\n' -> Hash (cost=9.31..9.31 rows=5 width=21) (actual time=0.018..0.018 rows=2 loops=1)'\n' -> Index Scan using i06_e_usersessions on e_usersessions (cost=0.00..9.31 rows=5 width=21) (actual time=0.009..0.012 rows=2 loops=1)'\n' Index Cond: (sessionkeepalivedatetime > 20120605082131943::bigint)'\n'Total runtime: 2249.146 ms'\n\nI am trying to understand the reason why the a sequencial scan is used on a_activity instead of using the index by activequeueid (i08_a_activity). 
If I run the this other query, I get a complete different results:\n\nSELECT * \n FROM a_activity\n WHERE a_activity.activequeueid = 123456\n AND a_activity.vstatus = 1\n AND a_activity.ventrydate > 0\n\nExplain analyze:\n'Index Scan using i08_a_activity on a_activity (cost=0.00..303.57 rows=162 width=7287) (actual time=0.019..0.019 rows=0 loops=1)'\n' Index Cond: ((activequeueid = 123456::numeric) AND (vstatus = 1) AND (ventrydate > 0))'\n'Total runtime: 0.076 ms'\n\n\nThis is the definition of the index :\n\nCREATE INDEX i08_a_activity\n ON a_activity\n USING btree\n (activequeueid , vstatus , ventrydate );\n\n\n a_activity table has 1,216,134 rows\n\n\n\nThanks in advance,\nAndrew\n\n\n \t\t \t \t\t \n\n\n\n\nHi everyone, I am trying to run the following query:SELECT count(1) --DISTINCT l_userqueue.queueid FROM e_usersessions JOIN l_userqueue ON l_userqueue.userid = e_usersessions.entityid JOIN a_activity ON a_activity.activequeueid = l_userqueue.queueid AND a_activity.vstatus = 1 AND a_activity.ventrydate > 0 AND a_activity.sbuid = e_usersessions.sbuid AND a_activity.assignedtoid = 0 AND a_activity.status <> '0' WHERE e_usersessions.sessionkeepalivedatetime > 20120605082131943Explain analyze:'Aggregate (cost=100402.10..100402.11 rows=1 width=0) (actual time=2249.051..2249.051 rows=1 loops=1)'' -> Hash Join (cost=10.93..99795.09 rows=242803 width=0) (actual time=0.541..2249.027 rows=33 loops=1)'' Hash Cond: ((a_activity.activequeueid = l_userqueue.queueid) AND (a_activity.sbuid = e_usersessions.sbuid))'' -> Seq Scan on a_activity (cost=0.00..88462.52 rows=1208167 width=22) (actual time=0.010..1662.142 rows=1207855 loops=1)'' Filter: ((ventrydate > 0) AND ((status)::text <> '0'::text) AND (vstatus = 1) AND (assignedtoid = 0::numeric))'' -> Hash (cost=10.86..10.86 rows=5 width=22) (actual time=0.053..0.053 rows=4 loops=1)'' -> Hash Join (cost=9.38..10.86 rows=5 width=22) (actual time=0.033..0.048 rows=4 loops=1)'' Hash Cond: (l_userqueue.userid = e_usersessions.entityid)'' -> Seq Scan on l_userqueue (cost=0.00..1.23 rows=23 width=27) (actual time=0.003..0.009 rows=23 loops=1)'' -> Hash (cost=9.31..9.31 rows=5 width=21) (actual time=0.018..0.018 rows=2 loops=1)'' -> Index Scan using i06_e_usersessions on e_usersessions (cost=0.00..9.31 rows=5 width=21) (actual time=0.009..0.012 rows=2 loops=1)'' Index Cond: (sessionkeepalivedatetime > 20120605082131943::bigint)''Total runtime: 2249.146 ms'I am trying to understand the reason why the a sequencial scan is used on a_activity instead of using the index by activequeueid (i08_a_activity). If I run the this other query, I get a complete different results:SELECT * FROM a_activity WHERE a_activity.activequeueid = 123456 AND a_activity.vstatus = 1 AND a_activity.ventrydate > 0Explain analyze:'Index Scan using i08_a_activity on a_activity (cost=0.00..303.57 rows=162 width=7287) (actual time=0.019..0.019 rows=0 loops=1)'' Index Cond: ((activequeueid = 123456::numeric) AND (vstatus = 1) AND (ventrydate > 0))''Total runtime: 0.076 ms'This is the definition of the index :CREATE INDEX i08_a_activity ON a_activity USING btree (activequeueid , vstatus , ventrydate ); a_activity table has 1,216,134 rowsThanks in advance,Andrew",
"msg_date": "Tue, 5 Jun 2012 08:48:41 -0400",
"msg_from": "Andrew Jaimes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequencial scan in a JOIN"
},
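A quick way to check whether the sequential scan is genuinely the faster choice here, rather than just a surprising one, is to disable it for a single transaction and compare the timings; a minimal sketch reusing the original query:

    BEGIN;
    SET LOCAL enable_seqscan = off;   -- affects only this transaction
    EXPLAIN ANALYZE
    SELECT count(1)
      FROM e_usersessions
      JOIN l_userqueue ON l_userqueue.userid = e_usersessions.entityid
      JOIN a_activity  ON a_activity.activequeueid = l_userqueue.queueid
                      AND a_activity.sbuid = e_usersessions.sbuid
     WHERE e_usersessions.sessionkeepalivedatetime > 20120605082131943
       AND a_activity.vstatus = 1
       AND a_activity.ventrydate > 0
       AND a_activity.assignedtoid = 0
       AND a_activity.status <> '0';
    ROLLBACK;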
{
"msg_contents": "On 06/05/2012 07:48 AM, Andrew Jaimes wrote:\n\n> ' -> Hash Join (cost=10.93..99795.09 rows=242803 width=0) (actual\n> time=0.541..2249.027 rows=33 loops=1)'\n> ' Hash Cond: ((a_activity.activequeueid = l_userqueue.queueid)\n> AND (a_activity.sbuid = e_usersessions.sbuid))'\n> ' -> Seq Scan on a_activity (cost=0.00..88462.52 rows=1208167\n> width=22) (actual time=0.010..1662.142\n\nI'd be willing to bet your stats are way, way off. It expected 242,803 \nrows in the hash, but only got 33. In that kind of scenario, I could \neasily see the planner choosing a sequence scan over an index scan, as \ndoing that many index seeks would be much more expensive than scanning \nthe table.\n\nWhat's your default_statistics_target, and when is the last time you \nanalyzed the tables in this query?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 5 Jun 2012 08:15:45 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan in a JOIN"
},
{
"msg_contents": "the default_statistics_target is set to 200, and I have run the analyze and reindex on these tables before writing the email. \n\n\n\n\n\nAndrew\n\n> Date: Tue, 5 Jun 2012 08:15:45 -0500\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> Subject: Re: [PERFORM] Sequencial scan in a JOIN\n> \n> On 06/05/2012 07:48 AM, Andrew Jaimes wrote:\n> \n> > ' -> Hash Join (cost=10.93..99795.09 rows=242803 width=0) (actual\n> > time=0.541..2249.027 rows=33 loops=1)'\n> > ' Hash Cond: ((a_activity.activequeueid = l_userqueue.queueid)\n> > AND (a_activity.sbuid = e_usersessions.sbuid))'\n> > ' -> Seq Scan on a_activity (cost=0.00..88462.52 rows=1208167\n> > width=22) (actual time=0.010..1662.142\n> \n> I'd be willing to bet your stats are way, way off. It expected 242,803 \n> rows in the hash, but only got 33. In that kind of scenario, I could \n> easily see the planner choosing a sequence scan over an index scan, as \n> doing that many index seeks would be much more expensive than scanning \n> the table.\n> \n> What's your default_statistics_target, and when is the last time you \n> analyzed the tables in this query?\n> \n> -- \n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n\n\n\nthe default_statistics_target is set to 200, and I have run the analyze and reindex on these tables before writing the email. \n\n\nAndrew> Date: Tue, 5 Jun 2012 08:15:45 -0500> From: [email protected]> To: [email protected]> CC: [email protected]> Subject: Re: [PERFORM] Sequencial scan in a JOIN> > On 06/05/2012 07:48 AM, Andrew Jaimes wrote:> > > ' -> Hash Join (cost=10.93..99795.09 rows=242803 width=0) (actual> > time=0.541..2249.027 rows=33 loops=1)'> > ' Hash Cond: ((a_activity.activequeueid = l_userqueue.queueid)> > AND (a_activity.sbuid = e_usersessions.sbuid))'> > ' -> Seq Scan on a_activity (cost=0.00..88462.52 rows=1208167> > width=22) (actual time=0.010..1662.142> > I'd be willing to bet your stats are way, way off. It expected 242,803 > rows in the hash, but only got 33. In that kind of scenario, I could > easily see the planner choosing a sequence scan over an index scan, as > doing that many index seeks would be much more expensive than scanning > the table.> > What's your default_statistics_target, and when is the last time you > analyzed the tables in this query?> > -- > Shaun Thomas> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604> 312-444-8534> [email protected]> > ______________________________________________> > See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email> > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 5 Jun 2012 09:31:19 -0400",
"msg_from": "Andrew Jaimes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequencial scan in a JOIN"
},
{
"msg_contents": "On 06/05/2012 08:31 AM, Andrew Jaimes wrote:\n\n> the default_statistics_target is set to 200, and I have run the analyze\n> and reindex on these tables before writing the email.\n\nOut of idle curiosity, how do these two variants treat you?\n\nSELECT count(1)\n FROM e_usersessions s\n JOIN l_userqueue q ON (q.userid = s.entityid)\n JOIN a_activity a ON (a.activequeueid = q.queueid)\n WHERE s.sessionkeepalivedatetime > 20120605082131943\n AND a.vstatus = 1\n AND a.ventrydate > 0\n AND a.sbuid = s.sbuid\n AND a.assignedtoid = 0\n AND a.status <> '0'\n\nSELECT count(1)\n FROM e_usersessions s\n JOIN l_userqueue q ON (q.userid = s.entityid)\n WHERE s.sessionkeepalivedatetime > 20120605082131943\n AND EXISTS (\n SELECT 1 FROM a_activity a\n WHERE a.activequeueid = q.queueid\n AND a.sbuid = s.sbuid\n AND a.vstatus = 1\n AND a.ventrydate > 0\n AND a.assignedtoid = 0\n AND a.status <> '0'\n )\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 5 Jun 2012 09:02:08 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan in a JOIN"
},
{
"msg_contents": "On 06/05/2012 09:41 AM, Andrew Jaimes wrote:\n\n> The second query ran better than the first one:\n\nThat's what I figured. Ok, so looking back to your original message again:\n\nCREATE INDEX i08_a_activity\n ON a_activity\n USING btree\n (activequeueid , vstatus , ventrydate );\n\nBased on the query here, it doesn't appear that vstatus or ventrydate \nare doing you any good in that index. Nor would your query even really \nmake use of them anyway, considering their catch-all equalities. If you \ncan make a clone of a_activity, could you try this index instead with \nyour original query:\n\nCREATE INDEX idx_a_activity_queue\n ON a_activity_clone (activequeueid);\n\nThen compare to this:\n\nCREATE INDEX idx_a_activity_queue_sbuid\n ON a_activity_clone (activequeueid, sbuid);\n\nAnd the results of this query would also be handy:\n\nSELECT attname, n_distinct\n FROM pg_stats\n WHERE tablename='a_activity';\n\nGenerally you want to order your composite indexes in order of \nuniqueness, if you even want to make a composite index in the first \nplace. I noticed in both cases, it's basically ignoring sbuid aside from \nthe implied hash to exclude non-matches.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 5 Jun 2012 09:57:14 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan in a JOIN"
},
{
"msg_contents": "On Tue, Jun 5, 2012 at 8:48 AM, Andrew Jaimes <[email protected]> wrote:\n> Hi everyone,\n>\n> I am trying to run the following query:\n>\n> SELECT count(1) --DISTINCT l_userqueue.queueid\n> FROM e_usersessions\n> JOIN l_userqueue\n> ON l_userqueue.userid = e_usersessions.entityid\n> JOIN a_activity\n> ON a_activity.activequeueid = l_userqueue.queueid\n> AND a_activity.vstatus = 1\n> AND a_activity.ventrydate > 0\n> AND a_activity.sbuid = e_usersessions.sbuid\n> AND a_activity.assignedtoid = 0\n> AND a_activity.status <> '0'\n> WHERE e_usersessions.sessionkeepalivedatetime > 20120605082131943\n>\n> Explain analyze:\n> 'Aggregate (cost=100402.10..100402.11 rows=1 width=0) (actual\n> time=2249.051..2249.051 rows=1 loops=1)'\n> ' -> Hash Join (cost=10.93..99795.09 rows=242803 width=0) (actual\n> time=0.541..2249.027 rows=33 loops=1)'\n> ' Hash Cond: ((a_activity.activequeueid = l_userqueue.queueid) AND\n> (a_activity.sbuid = e_usersessions.sbuid))'\n> ' -> Seq Scan on a_activity (cost=0.00..88462.52 rows=1208167\n> width=22) (actual time=0.010..1662.142 rows=1207855 loops=1)'\n> ' Filter: ((ventrydate > 0) AND ((status)::text <> '0'::text)\n> AND (vstatus = 1) AND (assignedtoid = 0::numeric))'\n> ' -> Hash (cost=10.86..10.86 rows=5 width=22) (actual\n> time=0.053..0.053 rows=4 loops=1)'\n> ' -> Hash Join (cost=9.38..10.86 rows=5 width=22) (actual\n> time=0.033..0.048 rows=4 loops=1)'\n> ' Hash Cond: (l_userqueue.userid =\n> e_usersessions.entityid)'\n> ' -> Seq Scan on l_userqueue (cost=0.00..1.23 rows=23\n> width=27) (actual time=0.003..0.009 rows=23 loops=1)'\n> ' -> Hash (cost=9.31..9.31 rows=5 width=21) (actual\n> time=0.018..0.018 rows=2 loops=1)'\n> ' -> Index Scan using i06_e_usersessions on\n> e_usersessions (cost=0.00..9.31 rows=5 width=21) (actual time=0.009..0.012\n> rows=2 loops=1)'\n> ' Index Cond: (sessionkeepalivedatetime >\n> 20120605082131943::bigint)'\n> 'Total runtime: 2249.146 ms'\n>\n> I am trying to understand the reason why the a sequencial scan is used on\n> a_activity instead of using the index by activequeueid (i08_a_activity).\n\nI'm chiming in a bit late here, but it seems like you're hoping that\nthe query plan will form the outer join as a nested loop, with the\ninner and outer sides swapped, so that the results of the join between\nl_userqueue and e_usersessions are used to drive a series of index\nscans on a_activity that avoid scanning the whole table. PostgreSQL\n9.2 will be the first release that has the ability to generate that\nkind of plan, so it would be interesting to see what happens if you\ntry this on 9.2beta.\n\nOlder releases should be able consider a nested loop join with\nl_userqueue as the inner rel, driving an index scan over a_activity,\nand then performing the join to e_usersessions afterwards. But that\nplan might not be nearly as good, since then we'd have to do 23\nindex-scans on a_activity rather than just 4.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 18 Jul 2012 16:00:53 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan in a JOIN"
}
] |
[
{
"msg_contents": "Hi,\n\nI want to make a function in C for postgresql, this is the code:\n\n#define _USE_32BIT_TIME_T\n#define BUILDING_DLL 1 \n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"executor\\spi.h\" /* SPI - Server Programming Interface */\n\n#if defined(_MSC_VER) || defined(__MINGW32__)\n#define PG_GETINF_EXPORT __declspec (dllexport)\n#else\n#define PG_GETINF_EXPORT\n#endif\n\nPG_MODULE_MAGIC;\n\nPG_GETINF_EXPORT PG_FUNCTION_INFO_V1(suma);\n\nDatum suma(PG_FUNCTION_ARGS)\n{\nint32 arg = PG_GETARG_INT32(0);\n \nPG_RETURN_INT32(arg + 1);\n};\n\n\nThis compile sucessfull, but when I try to use:\n\nCREATE OR REPLACE FUNCTION add_one(integer)\nRETURNS integer AS\n'C:\\Documents and Settings\\Administrador\\Escritorio\\test\\test.dll', 'pg_finfo_suma'\nLANGUAGE 'c' VOLATILE STRICT\nCOST 1;\n\nI get it:\n\nERROR: biblioteca «C:\\Documents and Settings\\Administrador\\Escritorio\\test\\test.dll» incompatible: no se encuentra el bloque mágico\nHINT: Se requiere que las bibliotecas de extensión usen la macro PG_MODULE_MAGIC.\n\nPlease help me! I don't know to do.\n\nThanks\n\nHi,I want to make a function in C for postgresql, this is the code:#define _USE_32BIT_TIME_T#define BUILDING_DLL 1 #include \"postgres.h\"#include \"fmgr.h\"#include \"executor\\spi.h\" /* SPI - Server Programming Interface */#if defined(_MSC_VER) || defined(__MINGW32__)#define PG_GETINF_EXPORT __declspec (dllexport)#else#define PG_GETINF_EXPORT#endifPG_MODULE_MAGIC;PG_GETINF_EXPORT PG_FUNCTION_INFO_V1(suma);Datum suma(PG_FUNCTION_ARGS){int32 arg = PG_GETARG_INT32(0); PG_RETURN_INT32(arg + 1);};This compile sucessfull, but when I try to use:CREATE OR REPLACE FUNCTION add_one(integer)RETURNS integer AS'C:\\Documents and\n Settings\\Administrador\\Escritorio\\test\\test.dll', 'pg_finfo_suma'LANGUAGE 'c' VOLATILE STRICTCOST 1;I get it:ERROR: biblioteca «C:\\Documents and Settings\\Administrador\\Escritorio\\test\\test.dll» incompatible: no se encuentra el bloque mágicoHINT: Se requiere que las bibliotecas de extensión usen la macro PG_MODULE_MAGIC.Please help me! I don't know to do.Thanks",
"msg_date": "Tue, 5 Jun 2012 17:12:34 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missing block Magic"
},
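Separately from the magic-block error, the CREATE FUNCTION above names the wrong symbol: with PG_FUNCTION_INFO_V1(suma) the AS clause should reference 'suma' itself; pg_finfo_suma is the info function the macro generates and is looked up automatically. A sketch of the corrected statement, assuming the DLL has been rebuilt with PG_MODULE_MAGIC and the server restarted; the path is the one from the original post:

    CREATE OR REPLACE FUNCTION add_one(integer)
    RETURNS integer AS
    'C:\Documents and Settings\Administrador\Escritorio\test\test.dll', 'suma'
    LANGUAGE c VOLATILE STRICT
    COST 1;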
{
"msg_contents": "Alejandro Carrillo <[email protected]> writes:\n> ERROR:� biblioteca �C:\\Documents and Settings\\Administrador\\Escritorio\\test\\test.dll� incompatible: no se encuentra el bloque m�gico\n> HINT:� Se requiere que las bibliotecas de extensi�n usen la macro PG_MODULE_MAGIC.\n\n[ scratches head ... ] Your source code looks fine. Are you sure you\nare pointing to the right copy of the .dll file?\n\nIf you added the PG_MODULE_MAGIC; line after getting this error, it's\nlikely that the old version of the .dll is already loaded into the\nserver's memory, in which case what you need to do to get rid of it\nis to restart the server, or at least start a fresh session.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2012 13:00:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing block Magic "
},
{
"msg_contents": "I restart and doesn't work. :( \n\nPlease help me!\n\n\n\n\n>________________________________\n> De: Tom Lane <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>CC: \"[email protected]\" <[email protected]> \n>Enviado: Martes 5 de junio de 2012 12:00\n>Asunto: Re: [PERFORM] Missing block Magic \n> \n>Alejandro Carrillo <[email protected]> writes:\n>> ERROR: biblioteca «C:\\Documents and Settings\\Administrador\\Escritorio\\test\\test.dll» incompatible: no se encuentra el bloque mágico\n>> HINT: Se requiere que las bibliotecas de extensión usen la macro PG_MODULE_MAGIC.\n>\n>[ scratches head ... ] Your source code looks fine. Are you sure you\n>are pointing to the right copy of the .dll file?\n>\n>If you added the PG_MODULE_MAGIC; line after getting this error, it's\n>likely that the old version of the .dll is already loaded into the\n>server's memory, in which case what you need to do to get rid of it\n>is to restart the server, or at least start a fresh session.\n>\n> regards, tom lane\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\nI restart and doesn't work. :( Please help me! De: Tom Lane <[email protected]> Para: Alejandro Carrillo <[email protected]> CC: \"[email protected]\" <[email protected]> Enviado:\n Martes 5 de junio de 2012 12:00 Asunto: Re: [PERFORM] Missing block Magic Alejandro Carrillo <[email protected]> writes:> ERROR: biblioteca «C:\\Documents and Settings\\Administrador\\Escritorio\\test\\test.dll» incompatible: no se encuentra el bloque mágico> HINT: Se requiere que las bibliotecas de extensión usen la macro PG_MODULE_MAGIC.[ scratches head ... ] Your source code looks fine. Are you sure youare pointing to the right copy of the .dll file?If you added the PG_MODULE_MAGIC; line after getting this error, it'slikely that the old version of the .dll is already loaded into theserver's memory, in which case what you need to do to get rid of itis to restart the server, or at least start a fresh\n session. regards, tom lane-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 5 Jun 2012 19:51:06 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing block Magic "
}
] |
[
{
"msg_contents": "I'm faced with a problem running postgres 9.1.3 which seems to\nnobody else see before. Tried to search and only one relevant\npost fond (about millions of files in pgsql_tmp).\n\nSympthoms:\n\nSome postgres process size is getting abnormally big compared\nto other postgres processes. Top shows the 'normal' pg processed\nis about VIRT 120m, RES ~30m and SHR ~30m. That one\nis about 6500m, 3.4g, 30m corresp. Total RAM avail - 8g.\nWhen one more such a process appears the host going into\ndeep swap and pg restart can help only (actually the stop\nwon't even stop such a process - after shutdown it still alive\nand can be only killed).\n\nbase/pgsql_tmp contains millions of files. In this situation stop\nand dirty restart is possible - the normal startup is impossible\neither. Read somewhere that it tries to delete (a millions\nfiles) from that directory. I can't even imagine when it finish\nthe deletion so i'm simple move that folder outside the base\n- then start can succeed.\n\non ubuntu 11.10,12.04 x64. cpu intel core Q9650 3GHz.\n8G RAM.\n\nDoes anybody see that behaviour or maybe have some glue how to\nhandle it.\n\nPS: the my preliminary conclusion: some sql is produces\na lot of files in the temporary table spaces - very quickly.\nWhen sql is finished postgres tries to cleanup the folder\nreading all contents of the folder and removing the files\none by one. It does the removal slow (watched the folder\nby `find pgsql_tmp | wc -l') but process still consumes the\nRAM. Next such sql will be a killer :(\n\nI'm faced with a problem running postgres 9.1.3 which seems tonobody else see before. Tried to search and only one relevantpost fond (about millions of files in pgsql_tmp).Sympthoms:\nSome postgres process size is getting abnormally big comparedto other postgres processes. Top shows the 'normal' pg processedis about VIRT 120m, RES ~30m and SHR ~30m. That one\nis about 6500m, 3.4g, 30m corresp. Total RAM avail - 8g.When one more such a process appears the host going intodeep swap and pg restart can help only (actually the stopwon't even stop such a process - after shutdown it still alive\nand can be only killed).base/pgsql_tmp contains millions of files. In this situation stopand dirty restart is possible - the normal startup is impossibleeither. Read somewhere that it tries to delete (a millions\nfiles) from that directory. I can't even imagine when it finishthe deletion so i'm simple move that folder outside the base- then start can succeed.on ubuntu 11.10,12.04 x64. cpu intel core Q9650 3GHz.\n8G RAM.Does anybody see that behaviour or maybe have some glue how tohandle it.PS: the my preliminary conclusion: some sql is producesa lot of files in the temporary table spaces - very quickly.\nWhen sql is finished postgres tries to cleanup the folderreading all contents of the folder and removing the filesone by one. It does the removal slow (watched the folderby `find pgsql_tmp | wc -l') but process still consumes the\nRAM. Next such sql will be a killer :(",
"msg_date": "Wed, 6 Jun 2012 18:05:56 +0600",
"msg_from": "Konstantin Mikhailov <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg 9.1 brings host machine down"
},
{
"msg_contents": "Hello.\n\nSeen this already.\nIt looks like cross join + sort. Badly configured ORM tools like \nHibernate with multiple one-to-many relationships fetched with 'join' \nstrategy may produce such result.\nUnfortunately I don't know if it's possible to protect from such a case \nat server side.\n\nBest regards, Vitalii Tymchyshyn\n\n06.06.12 15:05, Konstantin Mikhailov написав(ла):\n> I'm faced with a problem running postgres 9.1.3 which seems to\n> nobody else see before. Tried to search and only one relevant\n> post fond (about millions of files in pgsql_tmp).\n>\n> Sympthoms:\n>\n> Some postgres process size is getting abnormally big compared\n> to other postgres processes. Top shows the 'normal' pg processed\n> is about VIRT 120m, RES ~30m and SHR ~30m. That one\n> is about 6500m, 3.4g, 30m corresp. Total RAM avail - 8g.\n> When one more such a process appears the host going into\n> deep swap and pg restart can help only (actually the stop\n> won't even stop such a process - after shutdown it still alive\n> and can be only killed).\n>\n> base/pgsql_tmp contains millions of files. In this situation stop\n> and dirty restart is possible - the normal startup is impossible\n> either. Read somewhere that it tries to delete (a millions\n> files) from that directory. I can't even imagine when it finish\n> the deletion so i'm simple move that folder outside the base\n> - then start can succeed.\n>\n> on ubuntu 11.10,12.04 x64. cpu intel core Q9650 3GHz.\n> 8G RAM.\n>\n> Does anybody see that behaviour or maybe have some glue how to\n> handle it.\n>\n> PS: the my preliminary conclusion: some sql is produces\n> a lot of files in the temporary table spaces - very quickly.\n> When sql is finished postgres tries to cleanup the folder\n> reading all contents of the folder and removing the files\n> one by one. It does the removal slow (watched the folder\n> by `find pgsql_tmp | wc -l') but process still consumes the\n> RAM. Next such sql will be a killer :(\n>\n>\n\n",
"msg_date": "Wed, 06 Jun 2012 15:25:29 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 9.1 brings host machine down"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nwhich fs with which settings are you using? What's the work_mem settings? Which size do the files\nhave?\n\nDepending on the answer of above questions I would suggest:\n- - RAM disk, SSD or separate disk for pgsql_tmp\n- - using xfs with noatime,nodiratime,delaylog,logbufs=8,logbsize=256k,nobarrier for the tmp area\n- - separating pg_xlog on yet another disk (xfs, too, but with barrier)\n- - using deadline scheduler for all database disks\n- - increasing work_mem to at least the \"common\" file size +50%\n\nthere's more if I'd know more about the setup.\n\nhth,\n\nPatric\n\nVitalii Tymchyshyn schrieb am 06.06.2012 14:25:\n> Hello.\n> \n> Seen this already. It looks like cross join + sort. Badly configured ORM tools like Hibernate\n> with multiple one-to-many relationships fetched with 'join' strategy may produce such result. \n> Unfortunately I don't know if it's possible to protect from such a case at server side.\n> \n> Best regards, Vitalii Tymchyshyn\n> \n> 06.06.12 15:05, Konstantin Mikhailov написав(ла):\n>> I'm faced with a problem running postgres 9.1.3 which seems to nobody else see before. Tried\n>> to search and only one relevant post fond (about millions of files in pgsql_tmp).\n>> \n>> Sympthoms:\n>> \n>> Some postgres process size is getting abnormally big compared to other postgres processes.\n>> Top shows the 'normal' pg processed is about VIRT 120m, RES ~30m and SHR ~30m. That one is\n>> about 6500m, 3.4g, 30m corresp. Total RAM avail - 8g. When one more such a process appears\n>> the host going into deep swap and pg restart can help only (actually the stop won't even stop\n>> such a process - after shutdown it still alive and can be only killed).\n>> \n>> base/pgsql_tmp contains millions of files. In this situation stop and dirty restart is\n>> possible - the normal startup is impossible either. Read somewhere that it tries to delete (a\n>> millions files) from that directory. I can't even imagine when it finish the deletion so i'm\n>> simple move that folder outside the base - then start can succeed.\n>> \n>> on ubuntu 11.10,12.04 x64. cpu intel core Q9650 3GHz. 8G RAM.\n>> \n>> Does anybody see that behaviour or maybe have some glue how to handle it.\n>> \n>> PS: the my preliminary conclusion: some sql is produces a lot of files in the temporary table\n>> spaces - very quickly. When sql is finished postgres tries to cleanup the folder reading all\n>> contents of the folder and removing the files one by one. It does the removal slow (watched\n>> the folder by `find pgsql_tmp | wc -l') but process still consumes the RAM. Next such sql\n>> will be a killer :(\n>> \n>> \n> \n> \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.11 (GNU/Linux)\nComment: GnuPT 2.5.2\n\niEYEARECAAYFAk/PT7sACgkQfGgGu8y7ypCr+QCglfi5t4mllLrqVBTbk8SIHt7i\n2y8An2wzekmPmx7DsXDQ/h/t2lwDfYDs\n=BHRV\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 06 Jun 2012 14:40:30 +0200",
"msg_from": "Patric Bechtel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 9.1 brings host machine down"
},
{
"msg_contents": "if you have millions of files in data/pgsql_tmp it means that you're \nusing temporary tables (very) heavily .. or you've a huge sorting \nactivity (of large tables) and that the sort happens on disk (you can \nverify that with an EXPLAIN ANALYZE of the query, you'll see something \nlike \"external disk merge\").\nWhat you can do is either raise work_mem (be careful that it takes more \nspace to sort in memory than on disk), or add more RAM (and raise \nwork_mem too). By the way, it is generally a good idea to monitor that \ndirectory to get and idea of how much concurrent sorting is happening on \nyour database.\nIn some extreme case you can also create a dedicated tablespace and add \nthat tablespace to temp_tablespaces.**\n**\nOn 06/06/2012 14:05, Konstantin Mikhailov wrote:\n> I'm faced with a problem running postgres 9.1.3 which seems to\n> nobody else see before. Tried to search and only one relevant\n> post fond (about millions of files in pgsql_tmp).\n>\n> Sympthoms:\n>\n> Some postgres process size is getting abnormally big compared\n> to other postgres processes. Top shows the 'normal' pg processed\n> is about VIRT 120m, RES ~30m and SHR ~30m. That one\n> is about 6500m, 3.4g, 30m corresp. Total RAM avail - 8g.\n> When one more such a process appears the host going into\n> deep swap and pg restart can help only (actually the stop\n> won't even stop such a process - after shutdown it still alive\n> and can be only killed).\n>\n> base/pgsql_tmp contains millions of files. In this situation stop\n> and dirty restart is possible - the normal startup is impossible\n> either. Read somewhere that it tries to delete (a millions\n> files) from that directory. I can't even imagine when it finish\n> the deletion so i'm simple move that folder outside the base\n> - then start can succeed.\n>\n> on ubuntu 11.10,12.04 x64. cpu intel core Q9650 3GHz.\n> 8G RAM.\n>\n> Does anybody see that behaviour or maybe have some glue how to\n> handle it.\n>\n> PS: the my preliminary conclusion: some sql is produces\n> a lot of files in the temporary table spaces - very quickly.\n> When sql is finished postgres tries to cleanup the folder\n> reading all contents of the folder and removing the files\n> one by one. It does the removal slow (watched the folder\n> by `find pgsql_tmp | wc -l') but process still consumes the\n> RAM. Next such sql will be a killer :(\n>\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Wed, 06 Jun 2012 14:55:54 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 9.1 brings host machine down"
},
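A minimal sketch of the dedicated-tablespace idea mentioned above; the directory and the tablespace name are placeholders, and the directory must already exist and be owned by the postgres OS user:

    CREATE TABLESPACE temp_space LOCATION '/mnt/fastdisk/pg_temp';

    -- use it for on-disk sorts, hashes and temporary tables, per session:
    SET temp_tablespaces = 'temp_space';
    -- or for everyone, in postgresql.conf:  temp_tablespaces = 'temp_space'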
{
"msg_contents": "Thanks alot. I've tried to play with work_mem and after few days\nof the production testing pg behaves much better. See no more\nfiles in the pgsql_tmp folder. pg processes consumes reasonable\nmemory, no swap operation any more. I've studied official pg\ndocs about work_mem an still have no idea which optimal value\nwork_mem should have. 1MB is obviously too small. I've increased\nup to 32m. due to a lot of the sorts and hash joins in the queries.\n\n\nOn Wed, Jun 6, 2012 at 6:40 PM, Patric Bechtel <[email protected]>wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hi,\n>\n> which fs with which settings are you using? What's the work_mem settings?\n> Which size do the files\n> have?\n>\n> Depending on the answer of above questions I would suggest:\n> - - RAM disk, SSD or separate disk for pgsql_tmp\n> - - using xfs with\n> noatime,nodiratime,delaylog,logbufs=8,logbsize=256k,nobarrier for the tmp\n> area\n> - - separating pg_xlog on yet another disk (xfs, too, but with barrier)\n> - - using deadline scheduler for all database disks\n> - - increasing work_mem to at least the \"common\" file size +50%\n>\n> there's more if I'd know more about the setup.\n>\n> hth,\n>\n> Patric\n>\n> Vitalii Tymchyshyn schrieb am 06.06.2012 14:25:\n> > Hello.\n> >\n> > Seen this already. It looks like cross join + sort. Badly configured ORM\n> tools like Hibernate\n> > with multiple one-to-many relationships fetched with 'join' strategy may\n> produce such result.\n> > Unfortunately I don't know if it's possible to protect from such a case\n> at server side.\n> >\n> > Best regards, Vitalii Tymchyshyn\n> >\n> > 06.06.12 15:05, Konstantin Mikhailov написав(ла):\n> >> I'm faced with a problem running postgres 9.1.3 which seems to nobody\n> else see before. Tried\n> >> to search and only one relevant post fond (about millions of files in\n> pgsql_tmp).\n> >>\n> >> Sympthoms:\n> >>\n> >> Some postgres process size is getting abnormally big compared to other\n> postgres processes.\n> >> Top shows the 'normal' pg processed is about VIRT 120m, RES ~30m and\n> SHR ~30m. That one is\n> >> about 6500m, 3.4g, 30m corresp. Total RAM avail - 8g. When one more\n> such a process appears\n> >> the host going into deep swap and pg restart can help only (actually\n> the stop won't even stop\n> >> such a process - after shutdown it still alive and can be only killed).\n> >>\n> >> base/pgsql_tmp contains millions of files. In this situation stop and\n> dirty restart is\n> >> possible - the normal startup is impossible either. Read somewhere that\n> it tries to delete (a\n> >> millions files) from that directory. I can't even imagine when it\n> finish the deletion so i'm\n> >> simple move that folder outside the base - then start can succeed.\n> >>\n> >> on ubuntu 11.10,12.04 x64. cpu intel core Q9650 3GHz. 8G RAM.\n> >>\n> >> Does anybody see that behaviour or maybe have some glue how to handle\n> it.\n> >>\n> >> PS: the my preliminary conclusion: some sql is produces a lot of files\n> in the temporary table\n> >> spaces - very quickly. When sql is finished postgres tries to cleanup\n> the folder reading all\n> >> contents of the folder and removing the files one by one. It does the\n> removal slow (watched\n> >> the folder by `find pgsql_tmp | wc -l') but process still consumes the\n> RAM. 
Next such sql\n> >> will be a killer :(\n> >>\n> >>\n> >\n> >\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.11 (GNU/Linux)\n> Comment: GnuPT 2.5.2\n>\n> iEYEARECAAYFAk/PT7sACgkQfGgGu8y7ypCr+QCglfi5t4mllLrqVBTbk8SIHt7i\n> 2y8An2wzekmPmx7DsXDQ/h/t2lwDfYDs\n> =BHRV\n> -----END PGP SIGNATURE-----\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Fri, 8 Jun 2012 23:52:58 +0600",
"msg_from": "Konstantin Mikhailov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg 9.1 brings host machine down"
},
{
"msg_contents": "On 06/09/2012 01:52 AM, Konstantin Mikhailov wrote:\n> Thanks alot. I've tried to play with work_mem and after few days\n> of the production testing pg behaves much better. See no more\n> files in the pgsql_tmp folder. pg processes consumes reasonable\n> memory, no swap operation any more. I've studied official pg\n> docs about work_mem an still have no idea which optimal value\n> work_mem should have. 1MB is obviously too small. I've increased\n> up to 32m. due to a lot of the sorts and hash joins in the queries.\n>\nThe trouble is that the optimal work_mem depends on your workload and \nhardware. Or that's my understanding, anyway.\n\nA workload with a few simple queries that sort lots of big data might \nwant work_mem to be really huge (but not so huge that it causes \nthrashing or pushes indexes out of cache).\n\nA workload with lots of really complicated queries full of CTEs, \nsubqueries, etc might use several times work_mem per connection, and if \nthere are lots of connections at once might use unexpectedly large \namounts of RAM and cause thrashing or cache competition even with quite \na small work_mem.\n\nRight now, Pg doesn't have the diagnostic tools or automatic tuning to \nmake it possible to determine an ideal value in any simple way, so it's \nmostly a matter of examining query plans, tuning, and monitoring. \nAutomatic tuning of work_mem would be great, but would also probably be \n_really_ hard, and still wouldn't solve the problem where n sorts can \nconsume n times work_mem, so you can't give complicated_query a strict \nenough work_mem limit without severely starving big_simple_query or \nhaving to run a session-local \"SET work_mem\" before it.\n\nA system for auto-tuning Pg at runtime would be amazing, but also very \n_very_ hard, so tweaking params based on benchmarking and examination of \nruntime performance is your only real option for now.\n\n\n--\nCraig Ringer\n",
"msg_date": "Sun, 10 Jun 2012 11:26:41 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg 9.1 brings host machine down"
}
] |
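A minimal sketch of the per-session work_mem experiment discussed in the thread above. The table and column names are hypothetical, and the 32MB figure is simply the value the original poster settled on; the right number depends entirely on the workload, as Craig notes.

-- In postgresql.conf, log_temp_files = 0 logs every temporary file that gets
-- written, which shows exactly which queries are spilling into pgsql_tmp.

SET work_mem = '32MB';          -- affects only this session
EXPLAIN ANALYZE
SELECT * FROM orders ORDER BY created_at;   -- hypothetical table and column
-- "Sort Method: external merge  Disk: ..." in the output means the sort still
-- spilled to pgsql_tmp; "Sort Method: quicksort  Memory: ..." means it fit.
RESET work_mem;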
[
{
"msg_contents": "Hi.\n\nWe are handling multiple concurrent clients connecting to our system - trying to get a license seat (each license has an initial capacity of seats).\nWe have a table which keeps count of the acquired seats for each license.\nWhen a client tries to acquire a seat we first make sure that the number of acquired seats is less than the license capacity.\nWe then increase the number of acquired seats by 1.\n\nOur main problem here is with the acquired seats table.\nIt is actually a shared resource which needs to be updated concurrently by multiple transactions.\n\nWhen multiple transactions are running concurrently - each transaction takes a long time to complete because it waits on the lock for the shared resource table.\n\nAny suggestions for better implementation/design of this feature would be much appreciated.\n\nRegards,\nNir.\n\n\n\n\n\n\n\n\n\n\n\n\nHi.\n \nWe are handling multiple concurrent clients connecting to our system - trying to get a license seat (each license has an initial capacity of seats).\nWe have a table which keeps count of the acquired seats for each license.\nWhen a client tries to acquire a seat we first make sure that the number of acquired seats is less than the license capacity.\nWe then increase the number of acquired seats by 1.\n \nOur main problem here is with the acquired seats table.\nIt is actually a shared resource which needs to be updated concurrently by multiple transactions.\n \nWhen multiple transactions are running concurrently - each transaction takes a long time to complete because it waits on the lock for the shared resource table.\n \nAny suggestions for better implementation/design of this feature would be much appreciated.\n \nRegards,\nNir.",
"msg_date": "Thu, 7 Jun 2012 10:53:48 +0300",
"msg_from": "Nir Zilberman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multiple Concurrent Updates of Shared Resource Counter"
},
{
"msg_contents": "On Thu, Jun 7, 2012 at 9:53 AM, Nir Zilberman <[email protected]> wrote:\n> We are handling multiple concurrent clients connecting to our system -\n> trying to get a license seat (each license has an initial capacity of\n> seats).\n> We have a table which keeps count of the acquired seats for each license.\n> When a client tries to acquire a seat we first make sure that the number of\n> acquired seats is less than the license capacity.\n> We then increase the number of acquired seats by 1.\n>\n> Our main problem here is with the acquired seats table.\n> It is actually a shared resource which needs to be updated concurrently by\n> multiple transactions.\n>\n> When multiple transactions are running concurrently - each transaction takes\n> a long time to complete because it waits on the lock for the shared resource\n> table.\n>\n> Any suggestions for better implementation/design of this feature would be\n> much appreciated.\n\nWell, there are the usual suspects for lock contention\n\n1. Reduce time a lock needs to be held.\n2. Increase granularity of locking.\n\nad 1)\n\nIt sounds as if you need two statements for check and increase. That\ncan easily be done with a single statement if you use check\nconstraints. Example:\n\n$ psql -ef seats.sql\ndrop table licenses;\nDROP TABLE\ncreate table licenses (\n name varchar(200) primary key,\n max_seats integer not null check ( max_seats >= 0 ),\n current_seats integer not null default 0 check ( current_seats >= 0\nand current_seats <= max_seats )\n);\npsql:seats.sql:6: NOTICE: CREATE TABLE / PRIMARY KEY will create\nimplicit index \"licenses_pkey\" for table \"licenses\"\nCREATE TABLE\ninsert into licenses (name, max_seats) values ('foo', 4);\nINSERT 0 1\nupdate licenses set current_seats = current_seats + 1 where name = 'foo';\nUPDATE 1\nupdate licenses set current_seats = current_seats + 1 where name = 'foo';\nUPDATE 1\nupdate licenses set current_seats = current_seats + 1 where name = 'foo';\nUPDATE 1\nupdate licenses set current_seats = current_seats + 1 where name = 'foo';\nUPDATE 1\nupdate licenses set current_seats = current_seats + 1 where name = 'foo';\npsql:seats.sql:12: ERROR: new row for relation \"licenses\" violates\ncheck constraint \"licenses_check\"\nupdate licenses set current_seats = current_seats - 1 where name = 'foo';\nUPDATE 1\n\nThe increase will fail and you can react on that. Another scheme is to use\n\nupdate licenses set current_seats = current_seats + 1\nwhere name = 'foo' and current_seats < max_seats;\n\nand check how many rows where changed.\n\nIf however your transaction covers increase of used license seat\ncount, other work and finally decrease used license seat count you\nneed to change your transaction handling. You rather want three TX:\n\nstart TX\nupdate licenses set current_seats = current_seats + 1 where name = 'foo';\ncommit\n\nif OK\n start TX\n main work\n commit / rollback\n\n start TX\n update licenses set current_seats = current_seats - 1 where name = 'foo';\n commit\nend\n\nad 2)\nAt the moment I don't see a mechanism how that could be achieved in\nyour case. Distribution of counters of a single license across\nmultiple rows and checking via SUM(current_seats) is not concurrency\nsafe because of MVCC.\n\nGenerally checking licenses via a relational database does neither\nseem very robust nor secure. As long as someone has administrative\naccess to the database or regular access to the particular database\nlimits and counts can be arbitrarily manipulated. 
License servers I\nhave seen usually work by managing seats in memory and counting\nlicense usage via network connections. That has the advantage that\nthe OS quite reliably informs the license server if a client dies.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 7 Jun 2012 13:48:28 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Concurrent Updates of Shared Resource Counter"
},
{
"msg_contents": "Le jeudi 7 juin 2012 09:53:48, Nir Zilberman a écrit :\n> Hi.\n> \n> We are handling multiple concurrent clients connecting to our system -\n> trying to get a license seat (each license has an initial capacity of\n> seats). We have a table which keeps count of the acquired seats for each\n> license. When a client tries to acquire a seat we first make sure that the\n> number of acquired seats is less than the license capacity. We then\n> increase the number of acquired seats by 1.\n> \n> Our main problem here is with the acquired seats table.\n> It is actually a shared resource which needs to be updated concurrently by\n> multiple transactions.\n> \n> When multiple transactions are running concurrently - each transaction\n> takes a long time to complete because it waits on the lock for the shared\n> resource table.\n> \n> Any suggestions for better implementation/design of this feature would be\n> much appreciated.\n\nmaybe you can manage something around UNIQUE (license_id,license_seat_number).\n\nIt depends of what you achieve, and the tables structures you have.\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Fri, 8 Jun 2012 12:59:53 +0200",
"msg_from": "=?iso-8859-15?q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Concurrent Updates of Shared Resource Counter"
}
] |
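A compact sketch of the short-transaction pattern Robert outlines above, reusing the licenses table and the 'foo' license from his example; the row-count check is done by the client.

BEGIN;
UPDATE licenses
   SET current_seats = current_seats + 1
 WHERE name = 'foo'
   AND current_seats < max_seats;
COMMIT;
-- 1 row updated = seat acquired, 0 rows = license full; the row lock on the
-- shared counter is held only for this tiny transaction.

-- ... the long-running application work runs here, outside any transaction
--     that touches the licenses row ...

BEGIN;
UPDATE licenses SET current_seats = current_seats - 1 WHERE name = 'foo';
COMMIT;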
[
{
"msg_contents": "Could somebody confirm or refute the following statements, please?\n\n- The statistics gathered by ANALYZE are independent of the tablespace\n containing the table.\n- The tablespace containing the table has no influence on query planning\n unless seq_page_cost or random_page_cost has been set on the\ntablespace.\n- VACUUM ANALYZE does the same as VACUUM followed by ANALYZE.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Fri, 8 Jun 2012 10:15:18 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tablespaces and query planning"
},
{
"msg_contents": "> - The statistics gathered by ANALYZE are independent of the tablespace\n> containing the table.\n\nyes.\n\n> - The tablespace containing the table has no influence on query planning\n> unless seq_page_cost or random_page_cost has been set on the\n> tablespace.\n\nyes.\n\n> - VACUUM ANALYZE does the same as VACUUM followed by ANALYZE.\n\nno.\nit is fine grained, but in the diffs there is:\n\n VACUUM and ANALYSE do not update pg_class the same way for the \nreltuples/relpages: for ex VACUUM is accurate for index, and ANALYZE is fuzzy \nso if you issue a vacuum you have exact values, if you then run ANALYZE you \nmay change them to be less precise.\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Fri, 8 Jun 2012 12:36:03 +0200",
"msg_from": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespaces and query planning"
},
{
"msg_contents": "Cédric Villemain wrote:\n> > - The statistics gathered by ANALYZE are independent of the tablespace\n> > containing the table.\n> \n> yes.\n> \n> > - The tablespace containing the table has no influence on query planning\n> > unless seq_page_cost or random_page_cost has been set on the\n> > tablespace.\n> \n> yes.\n> \n> > - VACUUM ANALYZE does the same as VACUUM followed by ANALYZE.\n> \n> no.\n> it is fine grained, but in the diffs there is:\n> \n> VACUUM and ANALYSE do not update pg_class the same way for the\n> reltuples/relpages: for ex VACUUM is accurate for index, and ANALYZE is fuzzy\n> so if you issue a vacuum you have exact values, if you then run ANALYZE you\n> may change them to be less precise.\n\nThanks for the confirmationsand the clarification. I hadn't thought of the\nstatistical entries in pg_class.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Mon, 11 Jun 2012 09:01:24 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tablespaces and query planning"
}
] |
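To make the second point concrete, a sketch of the one tablespace-level setting that does influence planning, plus the pg_class columns the VACUUM/ANALYZE answer refers to; the tablespace and table names are made up.

-- Tell the planner that pages on this (hypothetical) SSD-backed tablespace are
-- cheaper to fetch than the server-wide defaults:
ALTER TABLESPACE fast_ssd SET (seq_page_cost = 0.5, random_page_cost = 0.7);

-- The reltuples/relpages estimates Cédric mentions live in pg_class, independent
-- of tablespace; VACUUM and ANALYZE refresh them with different precision:
SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('some_table', 'some_table_pkey');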
[
{
"msg_contents": "Hello,\n\nI have noticed that with a SELECT query containing the following\nconstraint:\n\n column LIKE ?\n\nand an index on that column, PostgreSQL will not use the index\neven if the parameter doesn't contain special pattern characters\nsuch as %.\n\n From PG POV it might be logical, because, who is stupid enough to\nuse the LIKE operator if it's unneeded, right?\n\nHowever from my application POV the users sometimes want to\nprovide a pattern with % and sometimes a more precise condition,\nand of course, I am uneasy at writing two very similar SQL\nrequests with only the LIKE/= difference; in the end, the non use\nof an index means unwanted performance degradation.\n\nI have come with the following hack in the SQL:\n\n ( position('%' in ?) > 0 OR column = ? )\n AND ( position('%' in ?) = 0 OR column LIKE ? )\n\n(I know it doesn't cover all the pattern possibilities)\n\nAny thoughts on what would be the best approach? Mine looks a bit\nugly.\n\nThanks,\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "Fri, 08 Jun 2012 13:11:11 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "non index use on LIKE on a non pattern string"
},
{
"msg_contents": "> I have noticed that with a SELECT query containing the following\n> constraint:\n> \n> column LIKE ?\n> \n> and an index on that column, PostgreSQL will not use the index\n> even if the parameter doesn't contain special pattern characters\n> such as %.\n\nyou should have a postgresql 8.3,isn't it ?\n like is equal to \"=\" in your case, since 8.4\n\nAlso you probably want to have a look at\n http://www.postgresql.org/docs/9.1/static/indexes-opclass.html \nabout your index definition (add the \"text_pattern_ops\" when required)\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Fri, 8 Jun 2012 13:51:50 +0200",
"msg_from": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: non index use on LIKE on a non pattern string"
},
{
"msg_contents": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]> writes:\n>> I have noticed that with a SELECT query containing the following\n>> constraint:\n>> \n>> column LIKE ?\n>> \n>> and an index on that column, PostgreSQL will not use the index\n>> even if the parameter doesn't contain special pattern characters\n>> such as %.\n\n> you should have a postgresql 8.3,isn't it ?\n> like is equal to \"=\" in your case, since 8.4\n\nNo, the planner has understood about wildcard-free LIKE patterns\nproducing an \"=\" index condition at least since 7.3. I think what the\nOP is complaining about is the problem that the pattern has to be\nactually constant (ie, NOT a parameter) before it can be optimized into\nan index condition. This should be better in 9.2 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Jun 2012 09:57:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: non index use on LIKE on a non pattern string"
},
{
"msg_contents": "Le vendredi 8 juin 2012 15:57:07, Tom Lane a écrit :\n> =?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]> writes:\n> >> I have noticed that with a SELECT query containing the following\n> >> constraint:\n> >> \n> >> column LIKE ?\n> >> \n> >> and an index on that column, PostgreSQL will not use the index\n> >> even if the parameter doesn't contain special pattern characters\n> >> such as %.\n> > \n> > you should have a postgresql 8.3,isn't it ?\n> > \n> > like is equal to \"=\" in your case, since 8.4\n> \n> No, the planner has understood about wildcard-free LIKE patterns\n> producing an \"=\" index condition at least since 7.3. I think what the\n> OP is complaining about is the problem that the pattern has to be\n> actually constant (ie, NOT a parameter) before it can be optimized into\n> an index condition. This should be better in 9.2 ...\n\nOops, maybe I shuffled with this \n * xxx_pattern_ops indexes can now be used for simple equality comparisons, \nnot only for LIKE (Tom) \n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Fri, 8 Jun 2012 16:31:10 +0200",
"msg_from": "=?iso-8859-15?q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: non index use on LIKE on a non pattern string"
}
] |
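A sketch of the text_pattern_ops opclass the documentation link above covers; the table and column names are hypothetical. It is what lets LIKE 'prefix%' use a btree index under a non-C locale, and it is separate from the constant-vs-parameter planning issue Tom describes.

-- For LIKE 'abc%' style searches when the database locale is not C:
CREATE INDEX orders_ref_pattern_idx ON orders (reference text_pattern_ops);

-- A plain index still serves ordinary equality, <, > and ORDER BY:
CREATE INDEX orders_ref_idx ON orders (reference);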
[
{
"msg_contents": "I have a query like this:\nselect a.* from a inner join b on a.aid=b.aid where a.col1=33 a.col2=44 \nand b.bid=8\npostgresql selected the index on a.col1 then selected the index on \nb.bid. But in my situation, I know that the query will be faster if it \nchose the index on b.bid first since there are only a few rows with \nvalue 8. So I re-wrote the query as below:\nselect a.* from a where a.aid in (select aid from b where bid=8) and \na.col1=33 a.col2=44\nBut surprisingly, postgresql didn't change the plan. it still chose to \nindex scan on a.col1. How can I re-wirte the query so postgresql will \nscan on b.bid first?\n",
"msg_date": "Fri, 08 Jun 2012 21:33:08 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to change the index chosen in plan?"
},
{
"msg_contents": "Rural Hunter <[email protected]> writes:\n> I have a query like this:\n> select a.* from a inner join b on a.aid=b.aid where a.col1=33 a.col2=44 \n> and b.bid=8\n> postgresql selected the index on a.col1 then selected the index on \n> b.bid. But in my situation, I know that the query will be faster if it \n> chose the index on b.bid first since there are only a few rows with \n> value 8.\n\nIf you know that and the planner doesn't, maybe ANALYZE is called for.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Jun 2012 10:10:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to change the index chosen in plan?"
},
{
"msg_contents": "No, it's not the analyze problem. For some other values on b.bid such \nas 9, 10, the plan is fine since there a a lot of rows in table b for \nthem. But for some specific values such as 8 I want the plan changed.\n\n于2012年6月8日 22:10:58,Tom Lane写到:\n> Rural Hunter <[email protected]> writes:\n>> I have a query like this:\n>> select a.* from a inner join b on a.aid=b.aid where a.col1=33 a.col2=44\n>> and b.bid=8\n>> postgresql selected the index on a.col1 then selected the index on\n>> b.bid. But in my situation, I know that the query will be faster if it\n>> chose the index on b.bid first since there are only a few rows with\n>> value 8.\n>\n> If you know that and the planner doesn't, maybe ANALYZE is called for.\n>\n> \t\t\tregards, tom lane\n>\n\n\n",
"msg_date": "Fri, 08 Jun 2012 22:23:12 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to change the index chosen in plan?"
},
{
"msg_contents": "Rural Hunter <[email protected]> wrote:\n> 于2012年6月8日 22:10:58,Tom Lane写到:\n>> Rural Hunter <[email protected]> writes:\n>>> I have a query like this:\n>>> select a.* from a inner join b on a.aid=b.aid where a.col1=33\n>>> a.col2=44 and b.bid=8\n>>> postgresql selected the index on a.col1 then selected the index\n>>> on b.bid. But in my situation, I know that the query will be\n>>> faster if it chose the index on b.bid first since there are only\n>>> a few rows with value 8.\n>>\n>> If you know that and the planner doesn't, maybe ANALYZE is called\n>> for.\n>>\n> No, it's not the analyze problem.\n \nSo you ran ANALYZE and retried? If not, please do.\n \n> For some other values on b.bid such as 9, 10, the plan is fine\n> since there a a lot of rows in table b for them.\n \nSo it uses the same plan regardless of the number of rows in table b\nfor the value? That sure *sounds* like you need to run ANALYZE,\npossibly after adjusting the statistics target for a column or two.\n \n> But for some specific values such as 8 I want the plan changed.\n \nIf you approach it from that line of thought, you will be unlikely\nto reach a good long-term solution. PostgreSQL has a costing model\nto determine which plan is expected to be cheapest (fastest). This\nis based on statistics gathered during ANALYZE and on costing\nfactors. Generally, if it's not choosing the fastest plan, you\naren't running ANALYZE frequently enough or with a fine-grained\nenough statistics target _or_ you need to adjust your costing\nfactors to better model your actual costs.\n \nYou haven't given us a lot of clues about which it is that you need\nto do, but there is *some* suggestion that you need to ANALYZE. If\nyou *try* that and it doesn't solve your problem, please read this\npage and provide more information:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Fri, 08 Jun 2012 09:37:11 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to change the index chosen in plan?"
},
{
"msg_contents": "Hi Kevin,\nThanks for your detailed explanation.\n\n于 2012/6/8 22:37, Kevin Grittner 写道:\n> Rural Hunter <[email protected]> wrote:\n>> 于2012年6月8日 22:10:58,Tom Lane写到:\n>>> Rural Hunter <[email protected]> writes:\n>>>> I have a query like this:\n>>>> select a.* from a inner join b on a.aid=b.aid where a.col1=33\n>>>> a.col2=44 and b.bid=8\n>>>> postgresql selected the index on a.col1 then selected the index\n>>>> on b.bid. But in my situation, I know that the query will be\n>>>> faster if it chose the index on b.bid first since there are only\n>>>> a few rows with value 8.\n>>> If you know that and the planner doesn't, maybe ANALYZE is called\n>>> for.\n>>>\n>> No, it's not the analyze problem.\n> \n> So you ran ANALYZE and retried? If not, please do.\nYes, I did.\n> \n>> For some other values on b.bid such as 9, 10, the plan is fine\n>> since there a a lot of rows in table b for them.\n> \n> So it uses the same plan regardless of the number of rows in table b\n> for the value?\nyes.\n> That sure *sounds* like you need to run ANALYZE,\n> possibly after adjusting the statistics target for a column or two.\n How can adjust the statistics target?\n> \n>> But for some specific values such as 8 I want the plan changed.\n> \n> If you approach it from that line of thought, you will be unlikely\n> to reach a good long-term solution. PostgreSQL has a costing model\n> to determine which plan is expected to be cheapest (fastest). This\n> is based on statistics gathered during ANALYZE and on costing\n> factors. Generally, if it's not choosing the fastest plan, you\n> aren't running ANALYZE frequently enough or with a fine-grained\n> enough statistics target _or_ you need to adjust your costing\n> factors to better model your actual costs.\n> \n> You haven't given us a lot of clues about which it is that you need\n> to do, but there is *some* suggestion that you need to ANALYZE. If\n> you *try* that and it doesn't solve your problem, please read this\n> page and provide more information:\n> \n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\nSorry the actual tables and query are very complicated so I just \nsimplified the problem with my understanding. I rechecked the query and \nfound it should be simplified like this:\nselect a.* from a inner join b on a.aid=b.aid where a.col1=33 and \na.col2=44 and a.time<now() and b.bid=8 order by a.time limit 10\nThere is an index on (a.col1,a.col2,a.time). If I remove the order-by \nclause, I can get the plan as I expected. I think that's why postgresql \nselected that index. But still I want the index on b.bid selected first \nfor value 8 since there are only several rows with bid 8. though for \nother normal values there might be several kilo to million rows.\n> \n> -Kevin\n>\n\n\n",
"msg_date": "Sat, 09 Jun 2012 00:23:02 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to change the index chosen in plan?"
},
{
"msg_contents": "Rural Hunter <[email protected]> wrote:\n \n> How can adjust the statistics target?\n \ndefault_statistics_target\n \nhttp://www.postgresql.org/docs/current/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n \nor ALTER TABLE x ALTER COLUMN y SET STATISTICS n\n \nhttp://www.postgresql.org/docs/current/interactive/sql-altertable.html\n \n> Sorry the actual tables and query are very complicated so I just \n> simplified the problem with my understanding. I rechecked the\n> query and found it should be simplified like this:\n> select a.* from a inner join b on a.aid=b.aid where a.col1=33 and \n> a.col2=44 and a.time<now() and b.bid=8 order by a.time limit 10\n> There is an index on (a.col1,a.col2,a.time). If I remove the\n> order-by clause, I can get the plan as I expected. I think that's\n> why postgresql selected that index.\n \nSounds like it expects the sort to be expensive, which means it\nprobably expects a large number of rows. An EXPLAIN ANALYZE of the\nquery with and without the ORDER BY might be instructive. It would\nalso help to know what version of PostgreSQL you have and how it is\nconfigured, all of which shows up in the results of the query on\nthis page:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n> But still I want the index on b.bid selected first \n> for value 8 since there are only several rows with bid 8. though\n> for other normal values there might be several kilo to million\n> rows.\n \nAn EXPLAIN ANALYZE of one where you think the plan is a good choice\nmight also help.\n \nOh, and just to be sure -- are you actually running queries with the\nliterals like you show, or are you using prepared statements with\nplaceholders and plugging the values in after the statement is\nprepared? Sample code, if possible, might help point to or\neliminate issues with a cached plan. If you're running through a\ncached plan, there is no way for it to behave differently based on\nthe value plugged into the query -- the plan has already been set\nbefore you get to that point.\n \n-Kevin\n",
"msg_date": "Fri, 08 Jun 2012 11:39:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to change the index chosen in plan?"
},
{
"msg_contents": "于 2012/6/9 0:39, Kevin Grittner 写道:\n> Rural Hunter <[email protected]> wrote:\n> \n>> How can adjust the statistics target?\n> \n> default_statistics_target\n> \n> http://www.postgresql.org/docs/current/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> or ALTER TABLE x ALTER COLUMN y SET STATISTICS n\n> \n> http://www.postgresql.org/docs/current/interactive/sql-altertable.html\nThanks, I will check detail.\n> \n>> Sorry the actual tables and query are very complicated so I just\n>> simplified the problem with my understanding. I rechecked the\n>> query and found it should be simplified like this:\n>> select a.* from a inner join b on a.aid=b.aid where a.col1=33 and\n>> a.col2=44 and a.time<now() and b.bid=8 order by a.time limit 10\n>> There is an index on (a.col1,a.col2,a.time). If I remove the\n>> order-by clause, I can get the plan as I expected. I think that's\n>> why postgresql selected that index.\n> \n> Sounds like it expects the sort to be expensive, which means it\n> probably expects a large number of rows. An EXPLAIN ANALYZE of the\n> query with and without the ORDER BY might be instructive. It would\n> also help to know what version of PostgreSQL you have and how it is\n> configured, all of which shows up in the results of the query on\n> this page:\n> \n> http://wiki.postgresql.org/wiki/Server_Configuration\n> \nHere is the output:\nname | current_setting\n-----------------------------+---------------------------------------------------------------------------------------------------------------\nversion | PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by gcc \n(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\narchive_command | test ! -f /dbbk/postgres/logarch/%f.gz && gzip -c %p \n >/dbbk/postgres/logarch/%f.gz\narchive_mode | on\nautovacuum | on\nautovacuum_freeze_max_age | 2000000000\ncheckpoint_segments | 20\nclient_encoding | UTF8\neffective_cache_size | 150GB\nfull_page_writes | off\nlc_collate | zh_CN.utf8\nlc_ctype | zh_CN.utf8\nlisten_addresses | *\nlog_autovacuum_min_duration | 30min\nlog_destination | stderr\nlog_line_prefix | %t [%u@%h]\nlog_min_duration_statement | 10s\nlog_statement | ddl\nlogging_collector | on\nmaintenance_work_mem | 10GB\nmax_connections | 2500\nmax_stack_depth | 2MB\nmax_wal_senders | 1\nport | 3500\nserver_encoding | UTF8\nshared_buffers | 60GB\nsynchronous_commit | off\nTimeZone | PRC\ntrack_activities | on\ntrack_counts | on\nvacuum_freeze_table_age | 1000000000\nwal_buffers | 16MB\nwal_level | hot_standby\nwork_mem | 8MB\n(33 rows)\n\n>> But still I want the index on b.bid selected first\n>> for value 8 since there are only several rows with bid 8. though\n>> for other normal values there might be several kilo to million\n>> rows.\n> \n> An EXPLAIN ANALYZE of one where you think the plan is a good choice\n> might also help.\nOk, I get out a simple version of the actualy query. Here is the explain \nanaylze without order-by, which is I wanted:\nhttp://explain.depesz.com/s/p1p\n\nAnother with the order-by which I want to avoid:\nhttp://explain.depesz.com/s/ujU\n\nThis is the count of rows in article_label with value 3072(which I \nreferred as table b in previous mail):\n# select count(*) from article_label where lid=3072;\ncount\n-------\n56\n(1 row)\n\n> \n> Oh, and just to be sure -- are you actually running queries with the\n> literals like you show, or are you using prepared statements with\n> placeholders and plugging the values in after the statement is\n> prepared? 
Sample code, if possible, might help point to or\n> eliminate issues with a cached plan. If you're running through a\n> cached plan, there is no way for it to behave differently based on\n> the value plugged into the query -- the plan has already been set\n> before you get to that point.\nYes, I ran the query directly wih psql.\n> \n> -Kevin\n>\n\n\n",
"msg_date": "Sat, 09 Jun 2012 10:08:37 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to change the index chosen in plan?"
}
] |
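One way to see the misestimate under discussion, using the table and the literal value taken from the thread:

-- What the planner expects lid = 3072 to match:
EXPLAIN SELECT * FROM article_label WHERE lid = 3072;

-- What it actually matches:
SELECT count(*) FROM article_label WHERE lid = 3072;

-- An estimate of roughly 20000 rows against an actual count in the dozens is the
-- kind of gap that makes the ORDER BY ... LIMIT plan prefer the wrong index.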
[
{
"msg_contents": "Rural Hunter wrote:\n> 于 2012/6/9 0:39, Kevin Grittner 写道:\n \n> name | current_setting\n \n> full_page_writes | off\n \nThere may be exceptions on some file systems, but generally turning\nthis off leaves you vulnerable to possible database corruption if you\nOS or hardware crashes.\n \n> max_connections | 2500\n \nYikes! You may want to look in to a connection pooler which can take\n2500 client connections and funnel them into a much smaller number of\ndatabase connections.\n \nhttps://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n \n> shared_buffers | 60GB\n \nYou might want to compare your performance with this setting against\na smaller setting. Many benchmarks have shown settings about a\ncertain point (like 8MB to 12 MB) to be counter-productive, although\na few have shown increased performance going past that. It really\nseems to depend on your hardware and workload, so you have to test to\nfind the \"sweet spot\" for your environment.\n \n> work_mem | 8MB\n \nWith so many connections, I can understand being this low. One of\nthe advantages of using connection pooling to funnel your user\nconnections into fewer database conncections is that you can boost\nthis, which might help considerably with some types of queries.\n \nNone of the above, however, really gets to your immediate problem.\nWhat is most significant about your settings with regard to the\nproblem query is what's *not* in that list. You appear to have a\nheavily cached active data set, based on the row counts and timings\nin EXPLAIN ANALYZE output, and you have not adjusted your cost\nfactors, which assume less caching.\n \nTry setting these on a connection and then running your queries on\nthat connection.\n \nset seq_page_cost = 0.1;\nset random_page_cost = 0.1;\nset cpu_tuple_cost = 0.03;\n \n> Ok, I get out a simple version of the actualy query. Here is the\n> explain anaylze without order-by, which is I wanted:\n> http://explain.depesz.com/s/p1p\n>\n> Another with the order-by which I want to avoid:\n> http://explain.depesz.com/s/ujU\n \nYou neglected to mention the LIMIT clause in your earlier\npresentation of the problem. A LIMIT can have a big impact on plan\nchoice. Is the LIMIT 10 part of the actual query you want to\noptimize? Either way it would be helpful to see the EXPLAIN ANALYZE\noutput for the the query without the LIMIT clause.\n \n-Kevin\n",
"msg_date": "Sat, 09 Jun 2012 09:39:06 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to change the index chosen in plan?"
},
{
"msg_contents": "于 2012/6/9 22:39, Kevin Grittner 写道:\n> Rural Hunter wrote:\n>> 于 2012/6/9 0:39, Kevin Grittner 写道:\n> \n>> name | current_setting\n> \n>> full_page_writes | off\n> \n> There may be exceptions on some file systems, but generally turning\n> this off leaves you vulnerable to possible database corruption if you\n> OS or hardware crashes.\nYes, I understand. My situation is, the io utiliztion of my system is \nquite high so I turned this off to reduce the io utilization. We have a \nreplication server to serve as the hot standby if there is any issue on \nthe primary. So currently I think it's acceptable option to me.\n> \n>> max_connections | 2500\n> \n> Yikes! You may want to look in to a connection pooler which can take\n> 2500 client connections and funnel them into a much smaller number of\n> database connections.\n> \n> https://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n> \n>> shared_buffers | 60GB\n> \n> You might want to compare your performance with this setting against\n> a smaller setting. Many benchmarks have shown settings about a\n> certain point (like 8MB to 12 MB) to be counter-productive, although\n> a few have shown increased performance going past that. It really\n> seems to depend on your hardware and workload, so you have to test to\n> find the \"sweet spot\" for your environment.\n> \n>> work_mem | 8MB\n> \n> With so many connections, I can understand being this low. One of\n> the advantages of using connection pooling to funnel your user\n> connections into fewer database conncections is that you can boost\n> this, which might help considerably with some types of queries.\n> \n> None of the above, however, really gets to your immediate problem.\n> What is most significant about your settings with regard to the\n> problem query is what's *not* in that list. You appear to have a\n> heavily cached active data set, based on the row counts and timings\n> in EXPLAIN ANALYZE output, and you have not adjusted your cost\n> factors, which assume less caching.\nThanks for the advices. As of now we don't see overall performance issue \non the db. I will adjust these settings based on your advices if we \nbegin to see overall performance degrade.\n> \n> Try setting these on a connection and then running your queries on\n> that connection.\n> \n> set seq_page_cost = 0.1;\n> set random_page_cost = 0.1;\n> set cpu_tuple_cost = 0.03;\nI tried these settings but don't see noticeable improvement. The plan is \nnot changed.\n> \n>> Ok, I get out a simple version of the actualy query. Here is the\n>> explain anaylze without order-by, which is I wanted:\n>> http://explain.depesz.com/s/p1p\n>>\n>> Another with the order-by which I want to avoid:\n>> http://explain.depesz.com/s/ujU\n> \n> You neglected to mention the LIMIT clause in your earlier\n> presentation of the problem. A LIMIT can have a big impact on plan\n> choice. Is the LIMIT 10 part of the actual query you want to\n> optimize? Either way it would be helpful to see the EXPLAIN ANALYZE\n> output for the the query without the LIMIT clause.\nYes, sorry for that. I do need the limit clause in the query to show \nonly part of the results to the user(common multi-pages view). Without \nthe limit clause, I got the plan as I wanted:\nhttp://explain.depesz.com/s/Qdu\n\nSo looks either I remove the order-by or limit clause, I can get what I \nwanted. But I do need the both in the query...\n\n> \n> -Kevin\n>\n\n\n",
"msg_date": "Mon, 11 Jun 2012 12:46:41 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to change the index chosen in plan?"
}
] |
[
{
"msg_contents": "\"Kevin Grittner\" wrote:\n \n>> shared_buffers | 60GB\n> \n> You might want to compare your performance with this setting against\n> a smaller setting. Many benchmarks have shown settings about a\n> certain point (like 8MB to 12 MB) to be counter-productive, although\n> a few have shown increased performance going past that. It really\n> seems to depend on your hardware and workload, so you have to test to\n> find the \"sweet spot\" for your environment.\n \nEr, I meant \"8GB to 12GB\", not MB. Sorry.\n \n-Kevin\n",
"msg_date": "Sat, 09 Jun 2012 10:38:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to change the index chosen in plan?"
}
] |
[
{
"msg_contents": "Hi All;\n\nWe have a client that has a table where large blobs (bytea) are stored. \nthe table has a key column that is numbers (like 112362) but \nunfortunately it's a varchar column so the blobs are accessed via \nqueries like:\n\nselect * from bigtable where keycol = '217765'\n\nThe primary reason we want to partition the table is for maintenance, \nthe table is well over 1.2 Terabytes in size and they have never \nsuccessfully vacuumed it. However I don't want to make performance even \nworse. The table does have a serial key, I'm thinking the best options \nwill be to partition by range on the serial key, or maybe via the keycol \ncharacter column via using an in statement on the check constraints, \nthus allowing the planner to actually leverage the above sql. I suspect \ndoing a typecast to integer in the check constraints will prove to be a \nbad idea if the keycol column remains a varchar.\n\nThoughts?\n\nHere's the table:\n\n\n Table \"problemchild\"\n Column | Type | \nModifiers\n-----------+--------------------------+-------------------------------------------------------------------- \n\n keycol | character varying |\n blob_data | bytea |\n removed_date | timestamp with time zone |\n alt_key | bigint | not null default \nnextval('problemchild_alt_key_seq'::regclass)\nIndexes:\n \"pc_pkey\" PRIMARY KEY, btree (alt_key)\n \"key2\" btree (keycol)\n\n\n\nThanks in advance\n\n\n\n",
"msg_date": "Sat, 09 Jun 2012 11:58:44 -0600",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "partitioning performance question"
},
{
"msg_contents": "On Sat, Jun 9, 2012 at 7:58 PM, Kevin Kempter\n<[email protected]> wrote:\n> Hi All;\n>\n> We have a client that has a table where large blobs (bytea) are stored. the\n> table has a key column that is numbers (like 112362) but unfortunately it's\n> a varchar column so the blobs are accessed via queries like:\n>\n> select * from bigtable where keycol = '217765'\n>\n> The primary reason we want to partition the table is for maintenance, the\n> table is well over 1.2 Terabytes in size and they have never successfully\n> vacuumed it. However I don't want to make performance even worse. The table\n> does have a serial key, I'm thinking the best options will be to partition\n> by range on the serial key, or maybe via the keycol character column via\n> using an in statement on the check constraints, thus allowing the planner to\n> actually leverage the above sql. I suspect doing a typecast to integer in\n> the check constraints will prove to be a bad idea if the keycol column\n> remains a varchar.\n>\n> Thoughts?\n>\n> Here's the table:\n>\n>\n> Table \"problemchild\"\n> Column | Type | Modifiers\n> -----------+--------------------------+--------------------------------------------------------------------\n> keycol | character varying |\n> blob_data | bytea |\n> removed_date | timestamp with time zone |\n> alt_key | bigint | not null default\n> nextval('problemchild_alt_key_seq'::regclass)\n> Indexes:\n> \"pc_pkey\" PRIMARY KEY, btree (alt_key)\n> \"key2\" btree (keycol)\n\nI find it odd that you have a column \"keycol\" which is not the PK and\nyour PK column is named \"alt_key\". Is \"keycol\" always the character\nrepresentation of \"alt_key\"? Are they unrelated?\n\nIt would also help to know how the data in this table changes. Do you\nonly ever add data? Is some data removed from time to time (maybe\nbased on the \"removed_date\")?\n\nIf the table grows continually then range partitioning sounds good.\nHowever, I think you should really make \"keycol\" a number type because\notherwise range partitioning will be a pain (you would need to include\nthe length of the string in the criterion if you want your varchar\nranges to mimic number ranges).\n\nHowever, if you are deleting from time to time and hence the table\ndoes not grow in the long run then hash partitioning might be a better\nidea because then you do not need to create new partitions all the\ntime. Example on alt_key\n\ncreate table problemchild (\n keycol varchar(100),\n blob_data bytea,\n removed_date timestamp with time zone,\n alt_key bigint primary key\n);\ncreate table problemchild_00 (\n check ( alt_key % 16 = 0 )\n) inherits (problemchild);\ncreate table problemchild_01 (\n check ( alt_key % 16 = 1 )\n) inherits (problemchild);\ncreate table problemchild_02 (\n check ( alt_key % 16 = 2 )\n) inherits (problemchild);\n...\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Sun, 10 Jun 2012 13:16:40 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning performance question"
}
] |
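For comparison with Robert's hash layout, a sketch of the range-partitioning option the original poster mentions, keyed on the serial column (9.1-era inheritance partitioning; the boundaries are invented):

CREATE TABLE problemchild_p00 (
    CHECK ( alt_key >= 0 AND alt_key < 50000000 )
) INHERITS (problemchild);

CREATE TABLE problemchild_p01 (
    CHECK ( alt_key >= 50000000 AND alt_key < 100000000 )
) INHERITS (problemchild);

-- constraint_exclusion = partition (the default) lets the planner skip children,
-- but only for queries filtering on alt_key; lookups on the varchar keycol would
-- still visit every child unless keycol is folded into the partitioning scheme.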
[
{
"msg_contents": "What is the expected performance of cluster and what tuning parameters \nhave most effect?\n\nI have a set of 5 tables with identical structure (all inherit a common \ntable). The sizes given are total relation size. The clustering index is \na gist index on a (non null) geographic(linestring) column\n\n1. 327600 rows, 105MB, 15.8s\n2. 770165 rows, 232MB, 59.5s\n3. 1437041 rows, 424MB, 140s\n4. 3980922 rows, 1167MB, 276s\n5. 31843368 rows, 9709MB, ~ 10 hours\n\nServer is version 9.1. with postgis 1.5.4.\n\nRegards,\nMark Thornton\n\n",
"msg_date": "Sun, 10 Jun 2012 09:20:26 +0100",
"msg_from": "Mark Thornton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of CLUSTER"
},
{
"msg_contents": "On 06/10/2012 03:20 AM, Mark Thornton wrote:\n\n> 4. 3980922 rows, 1167MB, 276s\n> 5. 31843368 rows, 9709MB, ~ 10 hours\n\nJust judging based on the difference between these two, it would appear \nto be from a lot of temp space thrashing. An order of magnitude more \nrows shouldn't take over 100x longer to cluster, even with GIST. What's \nyour maintenance_work_mem setting?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 11 Jun 2012 08:42:52 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of CLUSTER"
},
{
"msg_contents": "On 11/06/12 14:52, Shaun Thomas wrote:\n> On 06/11/2012 08:46 AM, Mark Thornton wrote:\n>\n>> 500m --- though isn't clear if cluster uses maintenance memory or the\n>> regular work memory. I could readily use a higher value for\n>> maintenance_work_mem.\n>\n> For an operation like that, having a full GB wouldn't hurt. Though if \n> you haven't already, you might think about pointing \nI didn't think the process was using even the 500m it ought to have had \navailable, whereas creating an index did appear to use that much. Note \nthough that I didn't stay up all night watching it!\n\n> your pgsql_tmp to /dev/shm for a while, even for just this operation.\n>\n> Then again, is your CPU at 100% during the entire operation?\nNo the CPU utilization is quite low. Most of the time is waiting for IO.\n\n> If it's not fetching anything from disk or writing out much, reducing \n> IO won't help. :) One deficiency we've had with CLUSTER is that it's a \n> synchronous operation. It does each thing one after the other. So \n> it'll organize the table contents, then it'll reindex each index \n> (including the primary key) one after the other. If you have a lot of \n> those, that can take a while, especially if any composite or complex \n> indexes exist.\nIn this case there are only two indexes, the gist one and a primary key \n(on a bigint value).\n\n>\n> You might actually be better off running parallel REINDEX commands on \n> the table (I wrote a script for this because we have a 250M row table \n> that each index takes 1-2.5 hours to build). You might also consider \n> pg_reorg, which seems to handle some parts of a table rebuild a little \n> better.\n>\n> That should give you an escalation pattern, though. :)\n\nThanks for your help,\nMark Thornton\n\n\n",
"msg_date": "Mon, 11 Jun 2012 15:02:23 +0100",
"msg_from": "Mark Thornton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of CLUSTER"
},
{
"msg_contents": "On 06/11/2012 09:02 AM, Mark Thornton wrote:\n\n> I didn't think the process was using even the 500m it ought to have\n> had available, whereas creating an index did appear to use that much.\n> Note though that I didn't stay up all night watching it!\n\nYou'd be surprised. If you look in your base/pgsql_tmp directory during \na cluster of that table (make a copy of it if you don't want to \ninterfere with a running system) you should see that directory fill with \ntemporary structures, mostly during the index rebuild portions.\n\nIt also wouldn't hurt to bootstrap system cache with the contents of \nthat table. Do an explain analyze on SELECT * with no where clause and \nall of that table should be in memory.\n\nOh, actually that reminds me... does your 10GB table fit into memory? If \nnot, that might explain it right there.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 11 Jun 2012 09:14:52 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of CLUSTER"
},
{
"msg_contents": "On 06/11/2012 09:25 AM, Mark Thornton wrote:\n\n> Certainly not --- the server only has 5GB of memory. Nevertheless I\n> don't expect quadratic behaviour for CLUSTER (n log n would be my\n> expected time).\n\nAnd there it is. :)\n\nSince that's the case, *DO NOT* create the symlink from pgsql_tmp to \n/dev/shm like I suggested before. You don't have enough memory for that, \nand it will likely cause problems. I need to stop assuming everyone has \nhuge servers. I know low-end laptops have 4GB of RAM these days, but \nservers have longer shelf-lives, and VMs can play larger roles.\n\nSo here's the thing, and I should have honestly realized it the second I \nnoted the >100x jump in execution time. All of your previous tables fit \nin memory. Nice, speedy, >100x faster than disk, memory. It's not that \nthe table is only 10x larger than other tables in your examples, it's \nthat the entire thing doesn't fit in memory.\n\nSince it can't just read the table and assume it's in memory, reads have \na chance to fetch from disk. Since it's also maintaining several \ntemporary files for the new index and replacement table structures, it's \nfighting for random reads and writes during the whole process. That's in \naddition to any transaction log traffic and checkpoints since the \nprocess will span several.\n\nActually, your case is a good illustration of how memory and \nhigh-performance IO devices can reduce maintenance costs. If you played \naround with steadily increasing table sizes, I bet you could even find \nthe exact row count and table size where the table no longer fits in \nPostgreSQL or OS cache, and suddenly takes 100x longer to process. That \nkind of steady table growth is often seen in databases, and admins \nsometimes see this without understanding why it happens.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 11 Jun 2012 09:44:23 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of CLUSTER"
},
{
"msg_contents": "On Mon, 2012-06-11 at 08:42 -0500, Shaun Thomas wrote:\n> On 06/10/2012 03:20 AM, Mark Thornton wrote:\n> \n> > 4. 3980922 rows, 1167MB, 276s\n> > 5. 31843368 rows, 9709MB, ~ 10 hours\n> \n> Just judging based on the difference between these two, it would appear \n> to be from a lot of temp space thrashing. An order of magnitude more \n> rows shouldn't take over 100x longer to cluster, even with GIST. What's \n> your maintenance_work_mem setting?\n\nGiST can have a large impact depending on all kinds of factors,\nincluding data distribution.\n\n9.2 contains some important improvements in GiST build times for cases\nwhere the index doesn't fit in memory. Mark, can you please try your\nexperiments on the 9.2beta and tell us whether that helps you?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 11 Jun 2012 16:53:57 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of CLUSTER"
}
] |
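A sketch of the session-level maintenance_work_mem bump discussed above before running CLUSTER. The table and index names are hypothetical, and the 1GB value only makes sense if the machine has RAM to spare (the server in this thread has 5GB in total).

SET maintenance_work_mem = '1GB';
CLUSTER bigtable USING bigtable_geom_gist;   -- hypothetical table and GiST index
RESET maintenance_work_mem;

-- CLUSTER rewrites the heap and then rebuilds every index on the table, so once
-- the table no longer fits in cache the elapsed time is dominated by I/O, which
-- matches the jump seen between the 1GB and 10GB tables above.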
[
{
"msg_contents": "Rural Hunter wrote:\n> 于 2012/6/9 22:39, Kevin Grittner 写道:\n \n>> You neglected to mention the LIMIT clause in your earlier\n>> presentation of the problem. A LIMIT can have a big impact on plan\n>> choice. Is the LIMIT 10 part of the actual query you want to\n>> optimize? Either way it would be helpful to see the EXPLAIN\n>> ANALYZE output for the the query without the LIMIT clause.\n> Yes, sorry for that. I do need the limit clause in the query to\n> show only part of the results to the user(common multi-pages view).\n> Without the limit clause, I got the plan as I wanted:\n> http://explain.depesz.com/s/Qdu\n>\n> So looks either I remove the order-by or limit clause, I can get\n> what I wanted. But I do need the both in the query...\n \nWell, we're still doing diagnostic steps. What this one shows is\nthat your statistics are leading the planner to believe that there\nwill be 20846 rows with lid = 3072, while there are really only 62.\nIf it knew the actual number I doubt it would choose the slower plan.\n \nThe next thing I would try is:\n \nALTER TABLE article_label ALTER COLUMN lid SET STATISTICS = 5000;\nANALYZE article_label;\n \nThen try the query without LIMIT and see if you get something on the\nright order of magnitude comparing the estimated rows to actual on\nthat index scan. You can try different STATISTICS values until you\nget the lowest value that puts the estimate in the right\nneighborhood. Higher settings will increase plan time; lower\nsettings may lead to bad plans.\n \nOnce you've got a decent estimate, try with the ORDER BY and LIMIT\nagain.\n \nIf you have a hard time getting a good estimate even with a high\nstatistics target, you should investigate whether you have extreme\ntable bloat.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jun 2012 07:07:51 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to change the index chosen in plan?"
},
{
"msg_contents": "于 2012/6/11 20:07, Kevin Grittner 写道:\n> Rural Hunter wrote:\n>> 于 2012/6/9 22:39, Kevin Grittner 写道:\n> \n>>> You neglected to mention the LIMIT clause in your earlier\n>>> presentation of the problem. A LIMIT can have a big impact on plan\n>>> choice. Is the LIMIT 10 part of the actual query you want to\n>>> optimize? Either way it would be helpful to see the EXPLAIN\n>>> ANALYZE output for the the query without the LIMIT clause.\n>> Yes, sorry for that. I do need the limit clause in the query to\n>> show only part of the results to the user(common multi-pages view).\n>> Without the limit clause, I got the plan as I wanted:\n>> http://explain.depesz.com/s/Qdu\n>>\n>> So looks either I remove the order-by or limit clause, I can get\n>> what I wanted. But I do need the both in the query...\n> \n> Well, we're still doing diagnostic steps. What this one shows is\n> that your statistics are leading the planner to believe that there\n> will be 20846 rows with lid = 3072, while there are really only 62.\n> If it knew the actual number I doubt it would choose the slower plan.\n> \n> The next thing I would try is:\n> \n> ALTER TABLE article_label ALTER COLUMN lid SET STATISTICS = 5000;\n> ANALYZE article_label;\n> \n> Then try the query without LIMIT and see if you get something on the\n> right order of magnitude comparing the estimated rows to actual on\n> that index scan. You can try different STATISTICS values until you\n> get the lowest value that puts the estimate in the right\n> neighborhood. Higher settings will increase plan time; lower\n> settings may lead to bad plans.\n> \n> Once you've got a decent estimate, try with the ORDER BY and LIMIT\n> again.\nI set statistics to 5000 and got estimated row count 559. Set statistics \nto 8000 and got estimated row count 393. At this step, I run the query \nwith both order-by and limit clause and got the expected result.\nKevin, Thank you very much for your patience and step-by-step guidance! \nI learnt a lot from this case!\n> \n> If you have a hard time getting a good estimate even with a high\n> statistics target, you should investigate whether you have extreme\n> table bloat.\n> \n> -Kevin\n>\n\n\n",
"msg_date": "Mon, 11 Jun 2012 20:55:14 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to change the index chosen in plan?"
}
] |
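To check what the larger statistics target actually captured, pg_stats can be inspected for the same table and column used in the thread:

SELECT n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
 WHERE tablename = 'article_label'
   AND attname   = 'lid';

-- A larger target makes ANALYZE sample more rows, so n_distinct and the
-- most-common-values list track reality more closely; in the thread that moved
-- the estimate for lid = 3072 from ~20000 rows down into the hundreds.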
[
{
"msg_contents": "Hi, I have a table that I am clustering on an index.\nI am then dumping that table via pg_dump -Fc and loading it into another database via pg_restore.\nIt is unclear to me though if the clustering I did in the original database is preserved during the dump & restore or if I would still need to perform a CLUSTER again once the data was loaded into the new database.\n\nCan anyone confirm this?\n\nCheers,\n\nBritt\n\n\nHi, I have a table that I am clustering on an index. I am then dumping that table via pg_dump –Fc and loading it into another database via pg_restore. It is unclear to me though if the clustering I did in the original database is preserved during the dump & restore or if I would still need to perform a CLUSTER again once the data was loaded into the new database. Can anyone confirm this? Cheers, Britt",
"msg_date": "Mon, 11 Jun 2012 06:55:29 -0700",
"msg_from": "\"Fitch, Britt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres clustering interactions with pg_dump"
},
{
"msg_contents": "On Mon, Jun 11, 2012 at 9:55 AM, Fitch, Britt <[email protected]> wrote:\n> Hi, I have a table that I am clustering on an index.\n>\n> I am then dumping that table via pg_dump –Fc and loading it into another\n> database via pg_restore.\n>\n> It is unclear to me though if the clustering I did in the original database\n> is preserved during the dump & restore or if I would still need to perform a\n> CLUSTER again once the data was loaded into the new database.\n>\n> Can anyone confirm this?\n\nThe rows will end up in the new table in the same physical order that\nthey were stored in the dump file.\n\nYou might want to look at pg_stats.correlation for the clustered\ncolumn - that's often a good way to know whether things are ordered\nthe way you expect, and it's updated every time the table is analyzed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 23 Jul 2012 15:07:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres clustering interactions with pg_dump"
}
] |
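The check Robert suggests, sketched with hypothetical table and column names; a correlation close to 1.0 (or -1.0) after ANALYZE indicates the physical row order still follows the previously clustered column.

ANALYZE bigtable;

SELECT tablename, attname, correlation
  FROM pg_stats
 WHERE tablename = 'bigtable'
   AND attname   = 'clustered_col';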