[
{
"msg_contents": "I have a system with around 330 databases running PostgreSQL 8.4.2\n\nWhat would the expected behavior be with AutoVacuum_NapTime set to the\ndefault of 1m and autovacuum_workers set to 3?\n\nWhat I'm observing is that the system is continuously vacuuming databases.\nWould these settings mean the autovacuum worker would try to vacuum all 330\ndatabases once per minute?\n\nGeorge Sexton\n\n",
"msg_date": "Sat, 20 Feb 2010 18:03:19 -0700",
"msg_from": "\"George Sexton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "AutoVacuum_NapTime"
},
{
"msg_contents": "\"George Sexton\" <[email protected]> writes:\n> I have a system with around 330 databases running PostgreSQL 8.4.2\n> What would the expected behavior be with AutoVacuum_NapTime set to the\n> default of 1m and autovacuum_workers set to 3?\n\nautovacuum_naptime is the cycle time for any one database, so you'd\nget an autovac worker launched every 60/330 seconds ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Feb 2010 20:15:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AutoVacuum_NapTime "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Saturday, February 20, 2010 6:15 PM\n> To: George Sexton\n> Cc: [email protected]\n> Subject: Re: [PERFORM] AutoVacuum_NapTime\n> \n> \"George Sexton\" <[email protected]> writes:\n> > I have a system with around 330 databases running PostgreSQL 8.4.2\n> > What would the expected behavior be with AutoVacuum_NapTime set to\n> the\n> > default of 1m and autovacuum_workers set to 3?\n> \n> autovacuum_naptime is the cycle time for any one database, so you'd\n> get an autovac worker launched every 60/330 seconds ...\n> \n> \t\t\tregards, tom lane\n\nThanks. That's non-optimal for my usage. I'll change it.\n\nAnother question then. Say I set it to 720 minutes, which if I understand\nthings would see each db done twice per day.\n\nIf I'm cold starting the system, would it vacuum all 330 databases and then\nwait 720 minutes and then do them all again, or would it distribute the\ndatabases more or less evenly over the time period?\n\nGeorge Sexton\nMH Software, Inc.\nhttp://www.mhsoftware.com/\nVoice: 303 438 9585\n\n",
"msg_date": "Sat, 20 Feb 2010 18:29:51 -0700",
"msg_from": "\"George Sexton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AutoVacuum_NapTime "
},
{
"msg_contents": "George Sexton wrote:\n\n> If I'm cold starting the system, would it vacuum all 330 databases and then\n> wait 720 minutes and then do them all again, or would it distribute the\n> databases more or less evenly over the time period?\n\nthe latter\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 22 Feb 2010 09:43:45 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AutoVacuum_NapTime"
}
] |
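A rough sketch of the scheduling arithmetic discussed in the thread above, using the numbers given there (330 databases, the default one-minute naptime, and the proposed 720-minute naptime); the SHOW statements and figures are illustrative, not taken from the thread:

    -- autovacuum_naptime is the per-database cycle time, so the launcher starts
    -- a new worker roughly every naptime / number_of_databases:
    --   default:   60 s / 330 databases   = about 0.18 s between worker launches
    --   proposed:  720 min * 60 s / 330   = about 131 s between worker launches
    SHOW autovacuum_naptime;       -- '1min' by default
    SHOW autovacuum_max_workers;   -- 3 by default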
[
{
"msg_contents": "hi,\r\nSTACK_DEPTH_SLOP stands for Required daylight between max_stack_depth and the kernel limit, in bytes. \r\nWhy we need so much memory? MySql need only no more than 100K. Where these memory allocated for? \r\nCan we do something to decrease this variable? Thanks.\nhi,STACK_DEPTH_SLOP stands for Required daylight between max_stack_depth and the kernel limit, in bytes. Why we need so much memory? MySql need only no more than 100K. Where these memory allocated for? Can we do something to decrease this variable? Thanks.",
"msg_date": "Sun, 21 Feb 2010 10:53:09 +0800",
"msg_from": "\"=?ISO-8859-1?B?dGVycnk=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "can we optimize STACK_DEPTH_SLOP"
},
{
"msg_contents": "\"=?ISO-8859-1?B?dGVycnk=?=\" <[email protected]> writes:\n> STACK_DEPTH_SLOP stands for Required daylight between max_stack_depth and the kernel limit, in bytes. \n> Why we need so much memory? MySql need only no more than 100K. Where these memory allocated for? \n\nThat's not memory, that's just address space. Cutting it will not\nreally buy you anything; it'll just increase your risk of stack-overflow\ncrashes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Feb 2010 22:25:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: can we optimize STACK_DEPTH_SLOP "
}
] |
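A small illustrative check of the headroom Tom describes, assuming a kernel stack limit of 8 MB and the 512 kB STACK_DEPTH_SLOP figure mentioned in the follow-up message below; exact numbers depend on the system:

    -- STACK_DEPTH_SLOP is address space reserved between max_stack_depth and the
    -- kernel stack limit (ulimit -s); the server rejects settings that leave
    -- less than that headroom.
    SHOW max_stack_depth;              -- '2MB' by default
    SET max_stack_depth = '7680kB';    -- accepted only when the kernel limit is at least 8 MB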
[
{
"msg_contents": "Thanks for your help!\r\nBut why we set STACK_DEPTH_SLOP to 512K, not 128K?\r\nWhat it according to?\nThanks for your help!But why we set STACK_DEPTH_SLOP to 512K, not 128K?What it according to?",
"msg_date": "Sun, 21 Feb 2010 13:44:57 +0800",
"msg_from": "\"=?ISO-8859-1?B?dGVycnk=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: can we optimize STACK_DEPTH_SLOP"
}
] |
[
{
"msg_contents": "I'm reading the docs for 8.4.2, section 18.4.1 Memory. I'm trying to figure\nout what reasonable values for my usage would be.\n\nThe doc says:\n\nshared_buffers (integer)\n\n Sets the amount of memory the database server uses for shared memory\nbuffers.\n\nWhile circular definitions are always right, I'm kind of looking for some\ninformation like:\n\n\"PostgreSQL uses shared memory buffers for ....\"\n\nCould someone please explain what the role of shared buffers is?\n\nGeorge Sexton\nMH Software, Inc. - Home of Connect Daily Web Calendar\nhttp://www.mhsoftware.com/\nVoice: 303 438 9585\n \n\n\n\n",
"msg_date": "Mon, 22 Feb 2010 08:04:59 -0700",
"msg_from": "\"George Sexton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffers"
},
{
"msg_contents": "\"George Sexton\" <[email protected]> wrote:\n \n> Could someone please explain what the role of shared buffers is?\n \nThis Wiki page might be useful to you:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nThe short answer (from that page) is:\n \n\"The shared_buffers configuration parameter determines how much\nmemory is dedicated to PostgreSQL use for caching data.\"\n \n-Kevin\n",
"msg_date": "Mon, 22 Feb 2010 09:23:00 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers"
}
] |
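A minimal, illustrative follow-up to Kevin's answer; the 25%-of-RAM starting point comes from the wiki page he links, and the concrete value below is only an example for an 8 GB machine:

    SHOW shared_buffers;    -- size of PostgreSQL's own shared data cache
    -- example postgresql.conf entry following the wiki's usual starting point
    -- of roughly 25% of system RAM (changing it requires a server restart):
    --   shared_buffers = 2GB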
[
{
"msg_contents": "Hi,\n\nI am trying to make a select query in my plpgsql function to use an \nindex allowing an index scan instead of a seq scan.\n\nWhen running the query in the sql prompt, it works fine, but \napparently the index is not used for the same query in the plpgsql \nfunction.\n\nThe problem is not the data types of the parameters to the function or \nthe query, they are identical.\n\nWhen I tried using EXECUTE in the plpgsql function, the index is being \nused.\n\nI thought the query planner must have made a bad decision when I \ncreated the function the first time.\n\nI therefore tried to drop the function, disconnect from the sql \nclient, reconnect (to get a new session), create the function again.\nThe function still runs slow though.\n\nI cannot understand why the index is not being used when in the \nplpgsql function?\nI even tried to make a test function containing nothing more than the \nsingle query. Still the index is not being used.\nWhen running the same query in the sql prompt, the index is in use \nthough.\n\nIs there a way to someone clear the entire query cache or even better \nfor a particular plpgsql function?\n\nI'm greatful for any ideas.\n\nBest regards,\n\nJoel Jacobson\n\n",
"msg_date": "Mon, 22 Feb 2010 20:26:49 +0100",
"msg_from": "Joel Jacobson <[email protected]>",
"msg_from_op": true,
"msg_subject": "plpgsql plan cache"
},
{
"msg_contents": "\n> I cannot understand why the index is not being used when in the plpgsql \n> function?\n> I even tried to make a test function containing nothing more than the \n> single query. Still the index is not being used.\n> When running the same query in the sql prompt, the index is in use \n> though.\n\nPlease post the following :\n\n- EXPLAIN ANALYZE your query directly in psql\n- PREPARE testq AS your query\n- EXPLAIN ANALYZE EXECUTE testq( your parameters )\n",
"msg_date": "Mon, 22 Feb 2010 20:42:44 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql plan cache"
},
{
"msg_contents": "db=# \\d FlagValueAccountingTransactions\n Table \n\"public.flagvalueaccountingtransactions\"\n Column | Type \n| Modifiers\n---------------------+-------------------------- \n+ \n--------------------------------------------------------------------------\n flagvalueid | integer | not null\n eventid | integer | not null\n transactionid | integer | not null\n recorddate | timestamp with time zone | not null\n debitaccountnumber | integer | not null\n creditaccountnumber | integer | not null\n debitaccountname | character varying | not null\n creditaccountname | character varying | not null\n amount | numeric | not null\n currency | character(3) | not null\n seqid | integer | not null default \nnextval('seqflagvalueaccountingtransactions'::regclass)\n undone | smallint |\n undoneseqid | integer |\nIndexes:\n \"flagvalueaccountingtransactions_pkey\" PRIMARY KEY, btree (seqid)\n \"index_flagvalueaccountingtransactions_eventid\" btree (eventid)\n \"index_flagvalueaccountingtransactions_flagvalueid\" btree \n(flagvalueid)\n \"index_flagvalueaccountingtransactions_recorddate\" btree \n(recorddate)\n\ndb=# EXPLAIN ANALYZE SELECT SUM(Amount) FROM \nFlagValueAccountingTransactions WHERE FlagValueID = 182903 AND \n(RecordDate >= '2008-10-21' AND RecordDate < '2008-10-22') AND \nCreditAccountName = 'CLIENT_BALANCES' AND Currency = 'SEK';\n\n QUERY \n PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1291.74..1291.75 rows=1 width=7) (actual \ntime=1.812..1.812 rows=1 loops=1)\n -> Index Scan using \nindex_flagvalueaccountingtransactions_recorddate on \nflagvalueaccountingtransactions (cost=0.00..1291.68 rows=25 width=7) \n(actual time=1.055..1.807 rows=1 loops=1)\n Index Cond: ((recorddate >= '2008-10-21 \n00:00:00+02'::timestamp with time zone) AND (recorddate < '2008-10-22 \n00:00:00+02'::timestamp with time zone))\n Filter: ((flagvalueid = 182903) AND \n((creditaccountname)::text = 'CLIENT_BALANCES'::text) AND (currency = \n'SEK'::bpchar))\n Total runtime: 1.847 ms\n(5 rows)\n\ndb=# PREPARE myplan (integer,date,date,varchar,char(3)) AS SELECT \nSUM(Amount) FROM FlagValueAccountingTransactions WHERE FlagValueID = \n$1 AND RecordDate >= $2 AND RecordDate < $3 AND DebitAccountName = $4 \nAND Currency = $5;PREPARE\nPREPARE\n\ndb=# EXPLAIN ANALYZE EXECUTE \nmyplan(182903,'2008-10-21','2008-10-22','CLIENT_BALANCES','SEK');\n\n QUERY \n PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3932.75..3932.76 rows=1 width=7) (actual \ntime=175.792..175.792 rows=1 loops=1)\n -> Bitmap Heap Scan on flagvalueaccountingtransactions \n(cost=2283.91..3932.74 rows=1 width=7) (actual time=175.747..175.767 \nrows=4 loops=1)\n Recheck Cond: ((recorddate >= $2) AND (recorddate < $3) AND \n(flagvalueid = $1))\n Filter: (((debitaccountname)::text = ($4)::text) AND \n(currency = $5))\n -> BitmapAnd (cost=2283.91..2283.91 rows=582 width=0) \n(actual time=175.714..175.714 rows=0 loops=1)\n -> Bitmap Index Scan on \nindex_flagvalueaccountingtransactions_recorddate (cost=0.00..395.97 \nrows=21536 width=0) (actual time=1.158..1.158 rows=3432 loops=1)\n Index Cond: ((recorddate >= $2) AND (recorddate \n< $3))\n -> Bitmap Index Scan on \nindex_flagvalueaccountingtransactions_flagvalueid 
(cost=0.00..1887.69 \nrows=116409 width=0) (actual time=174.132..174.132 rows=1338824 \nloops=1) Index Cond: (flagvalueid = $1)\n Total runtime: 175.879 ms\n(10 rows)\n\n\n\nHm, it is strange the query planner is using two different strategies \nfor the same query?\n\n\n\nOn Feb 22, 2010, at 8:42 PM, Pierre C wrote:\n\n>\n>> I cannot understand why the index is not being used when in the \n>> plpgsql function?\n>> I even tried to make a test function containing nothing more than \n>> the single query. Still the index is not being used.\n>> When running the same query in the sql prompt, the index is in use \n>> though.\n>\n> Please post the following :\n>\n> - EXPLAIN ANALYZE your query directly in psql\n> - PREPARE testq AS your query\n> - EXPLAIN ANALYZE EXECUTE testq( your parameters )",
"msg_date": "Mon, 22 Feb 2010 20:58:10 +0100",
"msg_from": "Joel Jacobson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql plan cache"
},
{
"msg_contents": "Joel Jacobson <[email protected]> writes:\n> Hm, it is strange the query planner is using two different strategies \n> for the same query?\n\nThey're not the same query. One plan is generic for any value of the\nparameters, the other is chosen for specific values of those parameters.\nIn particular, the unparameterized query depends very strongly on the\nknowledge that not many rows will meet the RecordDate range constraint.\nIf you picked dates that were further apart you'd probably get something\nthat looked more like the other plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Feb 2010 16:20:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql plan cache "
},
{
"msg_contents": "The planner knows that that particular date range is quite selective so it\ndoesn't have to BitmapAnd two indexes together.\n\nThe problem is that a prepared statement asks the db to plan the query\nwithout knowing anything about the parameters. I think functions behave in\nexactly the same way. Its kind of a pain but you can do your query with\ndynamic sql like on here:\nhttp://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n\nOn Mon, Feb 22, 2010 at 2:58 PM, Joel Jacobson <[email protected]> wrote:\n\n> db=# \\d FlagValueAccountingTransactions\n> Table\n> \"public.flagvalueaccountingtransactions\"\n> Column | Type |\n> Modifiers\n>\n> ---------------------+--------------------------+--------------------------------------------------------------------------\n> flagvalueid | integer | not null\n> eventid | integer | not null\n> transactionid | integer | not null\n> recorddate | timestamp with time zone | not null\n> debitaccountnumber | integer | not null\n> creditaccountnumber | integer | not null\n> debitaccountname | character varying | not null\n> creditaccountname | character varying | not null\n> amount | numeric | not null\n> currency | character(3) | not null\n> seqid | integer | not null default\n> nextval('seqflagvalueaccountingtransactions'::regclass)\n> undone | smallint |\n> undoneseqid | integer |\n> Indexes:\n> \"flagvalueaccountingtransactions_pkey\" PRIMARY KEY, btree (seqid)\n> \"index_flagvalueaccountingtransactions_eventid\" btree (eventid)\n> \"index_flagvalueaccountingtransactions_flagvalueid\" btree (flagvalueid)\n> \"index_flagvalueaccountingtransactions_recorddate\" btree (recorddate)\n>\n> db=# EXPLAIN ANALYZE SELECT SUM(Amount) FROM\n> FlagValueAccountingTransactions WHERE FlagValueID = 182903 AND (RecordDate\n> >= '2008-10-21' AND RecordDate < '2008-10-22') AND CreditAccountName =\n> 'CLIENT_BALANCES' AND Currency = 'SEK';\n>\n>\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1291.74..1291.75 rows=1 width=7) (actual\n> time=1.812..1.812 rows=1 loops=1)\n> -> Index Scan using index_flagvalueaccountingtransactions_recorddate on\n> flagvalueaccountingtransactions (cost=0.00..1291.68 rows=25 width=7)\n> (actual time=1.055..1.807 rows=1 loops=1)\n> Index Cond: ((recorddate >= '2008-10-21 00:00:00+02'::timestamp\n> with time zone) AND (recorddate < '2008-10-22 00:00:00+02'::timestamp with\n> time zone))\n> Filter: ((flagvalueid = 182903) AND ((creditaccountname)::text =\n> 'CLIENT_BALANCES'::text) AND (currency = 'SEK'::bpchar))\n> Total runtime: 1.847 ms\n> (5 rows)\n>\n> db=# PREPARE myplan (integer,date,date,varchar,char(3)) AS SELECT\n> SUM(Amount) FROM FlagValueAccountingTransactions WHERE FlagValueID = $1 AND\n> RecordDate >= $2 AND RecordDate < $3 AND DebitAccountName = $4 AND Currency\n> = $5;PREPARE\n> PREPARE\n>\n> db=# EXPLAIN ANALYZE EXECUTE\n> myplan(182903,'2008-10-21','2008-10-22','CLIENT_BALANCES','SEK');\n>\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=3932.75..3932.76 rows=1 width=7) (actual\n> time=175.792..175.792 rows=1 loops=1)\n> -> Bitmap Heap Scan on flagvalueaccountingtransactions\n> (cost=2283.91..3932.74 rows=1 width=7) 
(actual time=175.747..175.767 rows=4\n> loops=1)\n> Recheck Cond: ((recorddate >= $2) AND (recorddate < $3) AND\n> (flagvalueid = $1))\n> Filter: (((debitaccountname)::text = ($4)::text) AND (currency =\n> $5))\n> -> BitmapAnd (cost=2283.91..2283.91 rows=582 width=0) (actual\n> time=175.714..175.714 rows=0 loops=1)\n> -> Bitmap Index Scan on\n> index_flagvalueaccountingtransactions_recorddate (cost=0.00..395.97\n> rows=21536 width=0) (actual time=1.158..1.158 rows=3432 loops=1)\n> Index Cond: ((recorddate >= $2) AND (recorddate < $3))\n> -> Bitmap Index Scan on\n> index_flagvalueaccountingtransactions_flagvalueid (cost=0.00..1887.69\n> rows=116409 width=0) (actual time=174.132..174.132 rows=1338824 loops=1)\n> Index Cond: (flagvalueid = $1)\n> Total runtime: 175.879 ms\n> (10 rows)\n>\n>\n>\n> Hm, it is strange the query planner is using two different strategies for\n> the same query?\n>\n>\n>\n> On Feb 22, 2010, at 8:42 PM, Pierre C wrote:\n>\n>\n> I cannot understand why the index is not being used when in the plpgsql\n> function?\n>\n> I even tried to make a test function containing nothing more than the\n> single query. Still the index is not being used.\n>\n> When running the same query in the sql prompt, the index is in use though.\n>\n>\n> Please post the following :\n>\n> - EXPLAIN ANALYZE your query directly in psql\n> - PREPARE testq AS your query\n> - EXPLAIN ANALYZE EXECUTE testq( your parameters )\n>\n>\n",
"msg_date": "Mon, 22 Feb 2010 16:23:56 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql plan cache"
},
{
"msg_contents": "Thank you for explaining!\n\nNow I understand, makes perfect sense! :-)\n\n2010/2/22 Nikolas Everett <[email protected]>:\n> The planner knows that that particular date range is quite selective so it\n> doesn't have to BitmapAnd two indexes together.\n> The problem is that a prepared statement asks the db to plan the query\n> without knowing anything about the parameters. I think functions behave in\n> exactly the same way. Its kind of a pain but you can do your query with\n> dynamic sql like on here:\n> http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n>\n> On Mon, Feb 22, 2010 at 2:58 PM, Joel Jacobson <[email protected]> wrote:\n>>\n>> db=# \\d FlagValueAccountingTransactions\n>> Table\n>> \"public.flagvalueaccountingtransactions\"\n>> Column | Type |\n>> Modifiers\n>>\n>> ---------------------+--------------------------+--------------------------------------------------------------------------\n>> flagvalueid | integer | not null\n>> eventid | integer | not null\n>> transactionid | integer | not null\n>> recorddate | timestamp with time zone | not null\n>> debitaccountnumber | integer | not null\n>> creditaccountnumber | integer | not null\n>> debitaccountname | character varying | not null\n>> creditaccountname | character varying | not null\n>> amount | numeric | not null\n>> currency | character(3) | not null\n>> seqid | integer | not null default\n>> nextval('seqflagvalueaccountingtransactions'::regclass)\n>> undone | smallint |\n>> undoneseqid | integer |\n>> Indexes:\n>> \"flagvalueaccountingtransactions_pkey\" PRIMARY KEY, btree (seqid)\n>> \"index_flagvalueaccountingtransactions_eventid\" btree (eventid)\n>> \"index_flagvalueaccountingtransactions_flagvalueid\" btree\n>> (flagvalueid)\n>> \"index_flagvalueaccountingtransactions_recorddate\" btree (recorddate)\n>> db=# EXPLAIN ANALYZE SELECT SUM(Amount) FROM\n>> FlagValueAccountingTransactions WHERE FlagValueID = 182903 AND (RecordDate\n>> >= '2008-10-21' AND RecordDate < '2008-10-22') AND CreditAccountName =\n>> 'CLIENT_BALANCES' AND Currency = 'SEK';\n>>\n>> QUERY PLAN\n>>\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=1291.74..1291.75 rows=1 width=7) (actual\n>> time=1.812..1.812 rows=1 loops=1)\n>> -> Index Scan using index_flagvalueaccountingtransactions_recorddate\n>> on flagvalueaccountingtransactions (cost=0.00..1291.68 rows=25 width=7)\n>> (actual time=1.055..1.807 rows=1 loops=1)\n>> Index Cond: ((recorddate >= '2008-10-21 00:00:00+02'::timestamp\n>> with time zone) AND (recorddate < '2008-10-22 00:00:00+02'::timestamp with\n>> time zone))\n>> Filter: ((flagvalueid = 182903) AND ((creditaccountname)::text =\n>> 'CLIENT_BALANCES'::text) AND (currency = 'SEK'::bpchar))\n>> Total runtime: 1.847 ms\n>> (5 rows)\n>> db=# PREPARE myplan (integer,date,date,varchar,char(3)) AS SELECT\n>> SUM(Amount) FROM FlagValueAccountingTransactions WHERE FlagValueID = $1 AND\n>> RecordDate >= $2 AND RecordDate < $3 AND DebitAccountName = $4 AND Currency\n>> = $5;PREPARE\n>> PREPARE\n>> db=# EXPLAIN ANALYZE EXECUTE\n>> myplan(182903,'2008-10-21','2008-10-22','CLIENT_BALANCES','SEK');\n>>\n>> QUERY PLAN\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> 
Aggregate (cost=3932.75..3932.76 rows=1 width=7) (actual\n>> time=175.792..175.792 rows=1 loops=1)\n>> -> Bitmap Heap Scan on flagvalueaccountingtransactions\n>> (cost=2283.91..3932.74 rows=1 width=7) (actual time=175.747..175.767 rows=4\n>> loops=1)\n>> Recheck Cond: ((recorddate >= $2) AND (recorddate < $3) AND\n>> (flagvalueid = $1))\n>> Filter: (((debitaccountname)::text = ($4)::text) AND (currency =\n>> $5))\n>> -> BitmapAnd (cost=2283.91..2283.91 rows=582 width=0) (actual\n>> time=175.714..175.714 rows=0 loops=1)\n>> -> Bitmap Index Scan on\n>> index_flagvalueaccountingtransactions_recorddate (cost=0.00..395.97\n>> rows=21536 width=0) (actual time=1.158..1.158 rows=3432 loops=1)\n>> Index Cond: ((recorddate >= $2) AND (recorddate <\n>> $3))\n>> -> Bitmap Index Scan on\n>> index_flagvalueaccountingtransactions_flagvalueid (cost=0.00..1887.69\n>> rows=116409 width=0) (actual time=174.132..174.132 rows=1338824 loops=1)\n>> Index Cond: (flagvalueid = $1)\n>> Total runtime: 175.879 ms\n>> (10 rows)\n>>\n>>\n>> Hm, it is strange the query planner is using two different strategies for\n>> the same query?\n>>\n>>\n>> On Feb 22, 2010, at 8:42 PM, Pierre C wrote:\n>>\n>> I cannot understand why the index is not being used when in the plpgsql\n>> function?\n>>\n>> I even tried to make a test function containing nothing more than the\n>> single query. Still the index is not being used.\n>>\n>> When running the same query in the sql prompt, the index is in use though.\n>>\n>> Please post the following :\n>>\n>> - EXPLAIN ANALYZE your query directly in psql\n>> - PREPARE testq AS your query\n>> - EXPLAIN ANALYZE EXECUTE testq( your parameters )\n>>\n>\n>\n\n\n\n-- \nBest regards,\n\nJoel Jacobson\nGlue Finance\n\nE: [email protected]\nT: +46 70 360 38 01\n\nPostal address:\nGlue Finance AB\nBox 549\n114 11 Stockholm\nSweden\n\nVisiting address:\nGlue Finance AB\nBirger Jarlsgatan 14\n114 34 Stockholm\nSweden\n",
"msg_date": "Mon, 22 Feb 2010 22:47:15 +0100",
"msg_from": "Joel Jacobson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql plan cache"
},
{
"msg_contents": "\nActually, planner was smart in using a bitmap index scan in the prepared \nquery. Suppose you later EXECUTE that canned plan with a date range which \ncovers say half of the table : the indexscan would be a pretty bad choice \nsince it would have to access half the rows in the table in index order, \nwhich is potentially random disk IO. Bitmap Index Scan is slower in your \nhigh-selectivity case, but it can withstand much more abuse on the \nparameters.\n\nPG supports the quite clever syntax of EXECUTE 'blah' USING params, you \ndon't even need to mess with quoting.\n",
"msg_date": "Mon, 22 Feb 2010 23:15:30 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql plan cache"
}
] |
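A minimal sketch of the EXECUTE ... USING approach Pierre mentions, reusing the table and column names from the thread; the function name, parameter names and parameter types are assumed for illustration and do not appear in the thread:

    -- Dynamic SQL is planned at execution time with the actual parameter values,
    -- so a narrow date range can still get the fast index-scan plan.
    CREATE OR REPLACE FUNCTION sum_amount(p_flagvalueid integer,
                                          p_from        timestamptz,
                                          p_to          timestamptz,
                                          p_account     varchar,
                                          p_currency    char(3))
    RETURNS numeric AS $$
    DECLARE
        result numeric;
    BEGIN
        EXECUTE 'SELECT SUM(Amount) FROM FlagValueAccountingTransactions
                  WHERE FlagValueID = $1
                    AND RecordDate >= $2 AND RecordDate < $3
                    AND CreditAccountName = $4
                    AND Currency = $5'
           INTO result
          USING p_flagvalueid, p_from, p_to, p_account, p_currency;
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;

    -- e.g. SELECT sum_amount(182903, '2008-10-21', '2008-10-22', 'CLIENT_BALANCES', 'SEK');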
[
{
"msg_contents": "Hi folks\n\nI have an application which collects performance stats at time intervals, to\nwhich I am retro-fitting a table partitioning scheme in order to improve\nscalability.\n\nThe original data is keyed by a 3-ary tuple of strings .... to keep the row\nsize down, in the new data model I'm actually storing 32-bit int's in\nPostgres. The new schema for each table looks like this:\n\n(a integer,\n b integer,\n c integer,\n ts timestamp without timezone,\n value double precision)\n\nwith two indexes: (a, b, ts) and (b, ts)\n\nIf the table had a primary key, it would be (a, b, c, ts) but I don't need\none, and I don't want to carry column \"c\" in the index as it isn't necessary\n... the number of distinct values of c for any (a,b) pair is small (less\nthan 10). Queries that read the data for graphing are of the form:\n\n..... where a=<constant> and b=<constant> and ts between <x> and <y>\n\nOne of the things I need to do periodically is roll up this data, e.g.\ncalculate min / max / average values of a given datum over a period for use\nin graphing at larger time scales. The rollup tables use the same schema as\nthe raw data ones.\n\nThere are about 60 different values of b, and for each such value there is a\nexactly one type of rollup. The old code is doing the rollups in Postgres\nwith 60 bulk \"insert into .... select\" statements, hence the need for the\nsecond index.\n\nFor rollups which are instrinsic SQL functions, and only depend on the value\ncolumn, this is straightforward, e.g. to take the averages:\n\ninsert into rollup_table\nselect a, 17, c, '2009-02-22 19:00', avg(value)\nfrom data_table\nwhere b=17\nand ts between '2009-02-22 18:00' and '2009-02-22 19:00'\ngroup by a, c;\n\nSo far so good.\n\nThis gets slightly more tricky if the rollup you want to do isn't a simple\nSQL function .... I have two such cases, (a) the most recent value and (b)\nthe modal (most common) value. In both cases, I've been doing this using a\ndependent subquery in the select clause, e.g. for the most recent value:\n\ninsert into rollup_table\nselect a, 23, c, '2009-02-23 00:00',\n (select y.value from\n (select z.value, z.ts from data_table z where z.a=.a and z.b=x.b and\nz.c=x.c) y\n order by y.ts desc limit 1)\nfrom data_table x\nwhere b=23\ngroup by a, c;\n\nDue to the second index, this actually performs quite well, but it of course\ndepends on doing a small index scan on the same data_table for each row of\nthe outer select. I have a few questions:\n\n*Q1.* Is this the right place for the subquery, or would it be better to\nmove it to the where clause, e.g.\n\ninsert into rollup_table\nselect a, 23, c, '2009-02-23 00:00', value\nfrom data_table x\nwhere b=23 and\n ts=(select max(y.ts) from data_table y where y.a=x.a and y.b=23 and\ny.c=x.c)\ngroup by a, c;\n\nor does the answer depend on row stats? Is there a rule of thumb on whether\nto prefer \"limit 1\" vs. \"max\"? There will be between 20 and 300 rows with\ndifferent timestamps for each (a,b,c) tuple.\n\n\nFor better scalability, I am partitioning these tables by time .... I am not\nusing PG's table inheritance and triggers to do the partitioning, but\ninstead dynamically generating the SQL and table names in the application\ncode (Java). 
In most cases, the rollups will still happen from a single\nsource \"data_table\" and I plan to continue using the existing SQL, but I\nhave a few cases where the source \"data_table\" rows may actually come from\ntwo adjacent tables.\n\nFor intrinsic SQL rollup functions like avg / min / max this is pretty easy:\n\ninsert into rollup_table\nselect a, 17, c, '... date ...', max(value) from (\n select ........ from data_table_1 where b=17 and .......\nunion all\n select ........ from data_table_2 where b=17 and .......\n)\ngroup by a, c;\n\n\n*Q2.* Is there any benefit to doing rollups (max() and group by) in the\ninner queries inside the UNION ALL, or is it a wash? For now I'm expecting\neach inner select to produce 20-200 rows per unqiue (a,b,c) combination, and\nto be unioning no more than 2 tables at a time. (I'm aware that max() and\nmin() are composable, but that doing this with avg() and getting correct\nresults involves using sum's and count's' in the subqueries and doing a\ndivision in the outer select, but AFAICT that's orthogonal to the\nperformance considerations of whether to do the inner rollups or not).\n\n\n*Q3.* (You saw it coming) What really has me wrapped around the axle is how\nto do the \"most recent\" and \"mode\" rollups efficiently in this scenario ....\nin the examples in Q1 above, in both cases the SQL references the\ndata_tabletwice, which is fine if it's a real table, but in the\npartiioned scenario\nthe data has to be pulled together at run time by the UNION ALL subquery,\nand I'd prefer not to execute it multiple times - I'm not much of a SQL\nwordsmith but I can't see way to write this in plain SQL without repeating\nthe subquery, e.g.\n\ninsert into rollup_table\nselect a, 23, c, '2009-02-23 00:00',\n (select y.value from\n (select z.value, z.ts from *(... union all ...)* z where z.a=x.a and\nz.b=23 and z.c=x.c) y\n order by y.ts desc limit 1)\nfrom *(... union all ...)* x\ngroup by a, c;\n\n*a. *Is there a way to write this in plain SQL without syntactically\nrepeating the \"union all\" subquery, or in a way that will ensure PG only\nruns that subquery once?\n\n*b.* Is the PG query planner smart enough to figure out that the union all\nsubquery will return the same result every time (assuming\nISOLATION_SERIALIZABLE) and so only run it once?\n\n*c.* Other ideas I had were\n (i) temporarily create a VIEW which bridges the two tables, and use one\nof the query structures from Q1 above; or\n (ii) write the results of some kind of subquery to a temporary table, run\na query against that, then drop the temporary table; or\n (iii) pull the data back to the application layer and deal with it there;\nor\n (iv) stored procedure maybe????\nIs there something better?\n**\n\n*Q4.* When partitioning tables for speed / performance reasons, what are\ngood rules of thumb to use to determine how big to make the partitions?\n\n\nP.S. I realized while drafting this email that instead of running a\nbulk insert\n... select statement for each of the 60 values of b, I could group them\naccording to the rollup algorithm being used, and switch the second index to\nbeing just (ts). 
Since the data is being laid down in time series order, the\nI/O pattern this creates on the table pages should look like a sequential\nscan even though it's index driven.\n\n\nAll advice would be most gratefully appreciated!\n\nCheers\nDave",
"msg_date": "Mon, 22 Feb 2010 21:01:33 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice requested on structuring aggregation queries"
},
{
"msg_contents": "On 02/22/2010 07:01 PM, Dave Crooke wrote:\n> The original data is keyed by a 3-ary tuple of strings .... to keep the\n> row size down, in the new data model I'm actually storing 32-bit int's\n> in Postgres. The new schema for each table looks like this:\n> \n> (a integer,\n> b integer,\n> c integer,\n> ts timestamp without timezone,\n> value double precision)\n> \n> with two indexes: (a, b, ts) and (b, ts)\n\n[...snip...]\n\n> There are about 60 different values of b, and for each such value there\n> is a exactly one type of rollup. The old code is doing the rollups in\n> Postgres with 60 bulk \"insert into .... select\" statements, hence the\n> need for the second index.\n\n[...snip...]\n\n> For better scalability, I am partitioning these tables by time .... I am\n> not using PG's table inheritance and triggers to do the partitioning,\n> but instead dynamically generating the SQL and table names in the\n> application code (Java). In most cases, the rollups will still happen\n> from a single source \"data_table\" and I plan to continue using the\n> existing SQL, but I have a few cases where the source \"data_table\" rows\n> may actually come from two adjacent tables.\n\nWithout going through your very long set of questions in detail, it\nstrikes me that you might be better off if you:\n\n1) use PostgreSQL partitioning (constraint exclusion)\n2) partition by ts range\n3) consider also including b in your partitioning scheme\n4) create one index as (ts, a)\n5) use dynamically generated SQL and table names in the application\n code to create (conditionally) and load the tables\n\nBut of course test both this and your proposed method and compare ;-)\n\nAlso you might consider PL/R for some of your analysis (e.g. mode would\nbe simple, but perhaps not as fast):\n http://www.joeconway.com/web/guest/pl/r\n\nHTH,\n\nJoe",
"msg_date": "Mon, 22 Feb 2010 21:23:25 -0800",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice requested on structuring aggregation queries"
}
] |
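One plain-SQL way to express the "most recent value" rollup over a UNION ALL source without repeating the subquery (Q3a above); this DISTINCT ON sketch uses the thread's table names but is not a technique proposed in the thread itself:

    -- The UNION ALL subquery is written and scanned only once; DISTINCT ON keeps
    -- the newest row per (a, c) group because ts is sorted descending.
    INSERT INTO rollup_table
    SELECT DISTINCT ON (a, c)
           a, 23, c, TIMESTAMP '2009-02-23 00:00', value
    FROM (
        SELECT a, c, ts, value FROM data_table_1 WHERE b = 23
        UNION ALL
        SELECT a, c, ts, value FROM data_table_2 WHERE b = 23
    ) AS x
    ORDER BY a, c, ts DESC;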
[
{
"msg_contents": "Hello:\n\nI'm an ignorant in what refers to performance analysis of PostgreSQL.\nI've a doubt about how the PostgreSQL planner makes a hash join. I've\ntried to \"dig\" into the archive of this mailing list but I haven't found\nwhat I'm looking for. So I'm explaining my doubt with an example to see\nif anyone can help me.\n\nLet's suppose that I've 2 tables, one of students and the other one of\nparents in a many-to-one relation. I want to do something like this:\n\n SELECT s.complete_name, f.complete_name\n FROM students AS s\n JOIN fathers AS f ON f.id_father = s.id_father;\n\nUsing the ANALYZE command, I've checked that the planner firstly scans\nand extracts the required information from \"fathers\", builds a temporary\nhash table from it, then scans \"students\", and finally joins the\ninformation from this table and the temporary one employing the relation\n\"f.id_father = s.id_father\".\n\nMy doubt is about this last step. When the planner checks the temporary\ntable looking for the parent of a student:\n\nA) Does it run through the temporary father's table one time per\nstudent? This means that if there are 500 students, it's doing 500 loops\non the temporary table.\nB) Or does it try to internally group students with the same father ID\nto avoid doing \"absurd\" loops on the temporary one?\n\nThat's all. Thank you very much for your kindness :) .\n\n\n\n",
"msg_date": "Tue, 23 Feb 2010 15:40:05 +0100",
"msg_from": "negora <[email protected]>",
"msg_from_op": true,
"msg_subject": "Internal operations when the planner makes a hash join."
},
{
"msg_contents": "negora <[email protected]> wrote:\n \n> I've a doubt about how the PostgreSQL planner makes a hash join.\n \n> Let's suppose that I've 2 tables, one of students and the other\n> one of parents in a many-to-one relation. I want to do something\n> like this:\n> \n> SELECT s.complete_name, f.complete_name\n> FROM students AS s\n> JOIN fathers AS f ON f.id_father = s.id_father;\n> \n> Using the ANALYZE command, I've checked that the planner firstly\n> scans and extracts the required information from \"fathers\", builds\n> a temporary hash table from it, then scans \"students\", and finally\n> joins the information from this table and the temporary one\n> employing the relation \"f.id_father = s.id_father\".\n \nThis sort of plan is sometimes used when the optimizer expects the\nhash table to fit into RAM, based on statistics and your work_mem\nsetting. If it does fit, that's one sequential scan of the father\ntable's heap, and a hashed lookup into RAM to find the father to\nmatch each student. For the sort of query you're showing, that's\ntypically a very good plan.\n \n-Kevin\n",
"msg_date": "Tue, 23 Feb 2010 09:30:38 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Internal operations when the planner makes a\n\t hash join."
},
{
"msg_contents": "\n\n\n\n\nFirst of all, thank you for your fast answer,\nKevin :) .\n\nHowever I still wonder if on the search into the hashed\ntable (stored in the RAM, as you're pointing out), it checks for\nfathers as\nmany times as students are selected, or if the engine uses some kind of\nintelligent heuristic to avoid searching for the same father more than\nonce.\n\nFor example:\n\nstudents\n----------------------------------------\nid_student | name | id_father\n----------------------------------------\n1 | James | 1\n2 | Laura | 2\n3 | Anthony | 1\n\n\nfathers (hashed table into RAM)\n----------------------------------------\nid_father | name\n----------------------------------------\n1 | John\n2 | Michael\n\n\nAccording to how I understood the process, the engine would get the\nname from the student with ID 1 and would look for the name of the\nfather with ID 1 in the hashed table. It'd do exactly the same with the\nstudent #2 and father #2. But my big doubt is about the 3rd one\n(Anthony). Would the engine \"know\" that it already had retrieved the\nfather's name for the student 1 and would avoid searching for it into\nthe hashed table (using some kind of internal mechanism which allows to\n\"re-utilize\" the name)? Or would it search into the hashed table again?\n\nThanks a lot for your patience :) .\n\n\nKevin Grittner wrote:\n\nnegora <[email protected]> wrote:\n \n \n\nI've a doubt about how the PostgreSQL planner makes a hash join.\n \n\n \n \n\nLet's suppose that I've 2 tables, one of students and the other\none of parents in a many-to-one relation. I want to do something\nlike this:\n\n SELECT s.complete_name, f.complete_name\n FROM students AS s\n JOIN fathers AS f ON f.id_father = s.id_father;\n\nUsing the ANALYZE command, I've checked that the planner firstly\nscans and extracts the required information from \"fathers\", builds\na temporary hash table from it, then scans \"students\", and finally\njoins the information from this table and the temporary one\nemploying the relation \"f.id_father = s.id_father\".\n \n\n \nThis sort of plan is sometimes used when the optimizer expects the\nhash table to fit into RAM, based on statistics and your work_mem\nsetting. If it does fit, that's one sequential scan of the father\ntable's heap, and a hashed lookup into RAM to find the father to\nmatch each student. For the sort of query you're showing, that's\ntypically a very good plan.\n \n-Kevin\n\n \n\n\n\n",
"msg_date": "Tue, 23 Feb 2010 17:39:40 +0100",
"msg_from": "negora <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Internal operations when the planner makes a\t hash\n join."
},
{
"msg_contents": "negora wrote:\n\n> According to how I understood the process, the engine would get the\n> name from the student with ID 1 and would look for the name of the\n> father with ID 1 in the hashed table. It'd do exactly the same with the\n> student #2 and father #2. But my big doubt is about the 3rd one\n> (Anthony). Would the engine \"know\" that it already had retrieved the\n> father's name for the student 1 and would avoid searching for it into\n> the hashed table (using some kind of internal mechanism which allows to\n> \"re-utilize\" the name)? Or would it search into the hashed table again?<br>\n\nThe hash table is searched again. But that's fast, because it's a hash\ntable.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 23 Feb 2010 13:53:39 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Internal operations when the planner makes a hash\n join."
},
{
"msg_contents": "\nOn Feb 23, 2010, at 8:53 AM, Alvaro Herrera wrote:\n\n> negora wrote:\n> \n>> According to how I understood the process, the engine would get the\n>> name from the student with ID 1 and would look for the name of the\n>> father with ID 1 in the hashed table. It'd do exactly the same with the\n>> student #2 and father #2. But my big doubt is about the 3rd one\n>> (Anthony). Would the engine \"know\" that it already had retrieved the\n>> father's name for the student 1 and would avoid searching for it into\n>> the hashed table (using some kind of internal mechanism which allows to\n>> \"re-utilize\" the name)? Or would it search into the hashed table again?<br>\n> \n> The hash table is searched again. But that's fast, because it's a hash\n> table.\n> \n\nTo answer the question another way, \"remembering\" that it has already seen father A once and tracking that would use a hash table to remember that fact. \n\nThe hash table created by the first scan IS the \"remember you have seen this father\" data structure, optimized for fast lookup. So before even looking at the first student, the hash table is built so that it is fast to find out if a father has been seen before, and if so where that father's data is located. Looking this data up is often referred to as a \"probe\" and not a \"scan\" because it takes just as long to do if the hash table has 100 entries or 10000 entries. The drawback is that the whole thing has to fit in RAM.\n\n\n> -- \n> Alvaro Herrera http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 23 Feb 2010 10:49:36 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Internal operations when the planner makes a hash\n join."
},
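As a practical footnote to the explanation above, the behaviour is easy to observe directly. A minimal sketch, reusing the students/fathers tables from earlier in the thread (the work_mem value is only illustrative, not a recommendation):

-- Set work_mem for this session; the planner prefers a single in-memory
-- hash when it expects the hashed input to fit within this budget.
SET work_mem = '64MB';

-- EXPLAIN ANALYZE shows whether a Hash Join was chosen and that the
-- fathers table is scanned once to build the in-memory hash table.
EXPLAIN ANALYZE
SELECT s.complete_name, f.complete_name
FROM students AS s
JOIN fathers AS f ON f.id_father = s.id_father;

Lowering work_mem far enough will generally push the plan toward a multi-batch hash join or a different join method, which is the simplest way to see the "must fit in RAM" point in action.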
{
"msg_contents": "Thank you for explaining me the internal behaviour of the PostgreSQL \nengine. I'll try to look for more information about that hash tables. It \nsounds really really interesting. Your information was very useful.\n\nThe origin of my doubt resides in the fact that I need to do a joint \nbetween 3 HUGE tables (millions of registries) and do certain operations \nwith the retrieved information. I was deciding whether to use one SELECT \nwith 3 JOINs, as I've been doing since the beginning, or build a \nPL/PgSQL function based on 3 nested \"FOR ... IN SELECT ... LOOP\" \nstructures which tried to minimize the subsequent table searches storing \nintermediate useful data in arrays (curiously, these would act as the \nhash tables which you mention, but in a very very rudimentary way). In a \ncase like this one (possibly unable to fit in RAM), Is also JOIN the \nbest solution?\n\nSince I've to retrieve such a big amount of columns and crossed \nregistries I had started to think that using 1 SELECT with 3 JOINs would \nincrease the number of table searches a LOT and also \"duplicate\" the \ninformation too much. I mean \"duplicate\" as in this case, where the \nFactor 1 appears millions of times for every Element:\n\nElement 1 | Sub-factor 1 | Factor 1\nElement 2 | Subf-actor 1 | Factor 1\n...\nElement 12639747465586 | Sub-factor 1 | Factor 1\nElement 1 | Sub-factor 2 | Factor 1\n\nI hope not to robber you much time but... What do you think about it? Is \nit better either 1 SELECT with 3 JOINs or build nested \"FOR ... IN \nSELECT ... LOOP\" structures? Could it be one of that cases in which I've \nto choose between either higher speed but higher memory consume (3 \nJOINs) or lower speed but less memory expense (3 FORs)?\n\nThanks again and apologizes for extending this topic too much.\n\n\nScott Carey wrote:\n> On Feb 23, 2010, at 8:53 AM, Alvaro Herrera wrote:\n>\n> \n>> negora wrote:\n>>\n>> \n>>> According to how I understood the process, the engine would get the\n>>> name from the student with ID 1 and would look for the name of the\n>>> father with ID 1 in the hashed table. It'd do exactly the same with the\n>>> student #2 and father #2. But my big doubt is about the 3rd one\n>>> (Anthony). Would the engine \"know\" that it already had retrieved the\n>>> father's name for the student 1 and would avoid searching for it into\n>>> the hashed table (using some kind of internal mechanism which allows to\n>>> \"re-utilize\" the name)? Or would it search into the hashed table again?<br>\n>>> \n>> The hash table is searched again. But that's fast, because it's a hash\n>> table.\n>>\n>> \n>\n> To answer the question another way, \"remembering\" that it has already seen father A once and tracking that would use a hash table to remember that fact. \n>\n> The hash table created by the first scan IS the \"remember you have seen this father\" data structure, optimized for fast lookup. So before even looking at the first student, the hash table is built so that it is fast to find out if a father has been seen before, and if so where that father's data is located. Looking this data up is often referred to as a \"probe\" and not a \"scan\" because it takes just as long to do if the hash table has 100 entries or 10000 entries. 
The drawback is that the whole thing has to fit in RAM.\n>\n>\n> \n>> -- \n>> Alvaro Herrera http://www.CommandPrompt.com/\n>> The PostgreSQL Company - Command Prompt, Inc.\n>>\n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n>\n>\n> \n",
"msg_date": "Tue, 23 Feb 2010 22:33:24 +0100",
"msg_from": "negora <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Internal operations when the planner makes a hash join."
},
{
"msg_contents": "negora <[email protected]> wrote:\n \n> The origin of my doubt resides in the fact that I need to do a\n> joint between 3 HUGE tables (millions of registries) and do\n> certain operations with the retrieved information. I was deciding\n> whether to use one SELECT with 3 JOINs, as I've been doing since\n> the beginning, or build a PL/PgSQL function based on 3 nested \"FOR\n> ... IN SELECT ... LOOP\" structures which tried to minimize the\n> subsequent table searches storing intermediate useful data in\n> arrays\n \nIt's almost always faster (and less error prone) to write one SELECT\nstatement declaring what you want than to try to do better by\nnavigating individual rows procedurally. I would *strongly*\nrecommend you write it with the JOINs and then post here if you have\nany concerns about the performance. In general, try to *declare*\nwhat you want, and let the PostgreSQL planner sort out the best way\nto navigate the tables to produce what you want. If you hit some\nparticular weakness in the planner, you many need to coerce it, but\ncertainly you should not *start* with that.\n \n-Kevin\n",
"msg_date": "Tue, 23 Feb 2010 15:45:21 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Internal operations when the planner makes a\n\t hash join."
},
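To make the "declare what you want" advice concrete, the three-level case sketched earlier in the thread would normally be written as a single join. The table and column names below (elements, subfactors, factors and their id columns) are hypothetical stand-ins, since the real schema is not shown in the thread:

-- One declarative statement; the planner is free to pick hash joins,
-- merge joins or nested loops for each step based on statistics and work_mem.
SELECT e.name AS element,
       s.name AS subfactor,
       f.name AS factor
FROM elements   AS e
JOIN subfactors AS s ON s.id_subfactor = e.id_subfactor
JOIN factors    AS f ON f.id_factor    = s.id_factor;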
{
"msg_contents": "\n\n\n\n\nHello Kevin. I'm going to take and apply your\nadvices, certainly. No more \"crazy\" PL/PgSQLs then. I was worried\nbecause of the possibility that repetition of fields caused some kind\nof memory saturation. But I guess that PostgreSQL takes care of that\nfact properly. I even might return the entire result to my external\nJava application (I was using a similar approach on it too). I just\nhope that the speed of that single SQL compensates the transfer of such\na big mass of data between PostgreSQL and Java in terms of delay.\nThanks ;) .\n\n\n\nKevin Grittner wrote:\n\nnegora <[email protected]> wrote:\n \n \n\nThe origin of my doubt resides in the fact that I need to do a\njoint between 3 HUGE tables (millions of registries) and do\ncertain operations with the retrieved information. I was deciding\nwhether to use one SELECT with 3 JOINs, as I've been doing since\nthe beginning, or build a PL/PgSQL function based on 3 nested \"FOR\n... IN SELECT ... LOOP\" structures which tried to minimize the\nsubsequent table searches storing intermediate useful data in\narrays\n \n\n \nIt's almost always faster (and less error prone) to write one SELECT\nstatement declaring what you want than to try to do better by\nnavigating individual rows procedurally. I would *strongly*\nrecommend you write it with the JOINs and then post here if you have\nany concerns about the performance. In general, try to *declare*\nwhat you want, and let the PostgreSQL planner sort out the best way\nto navigate the tables to produce what you want. If you hit some\nparticular weakness in the planner, you many need to coerce it, but\ncertainly you should not *start* with that.\n \n-Kevin\n\n \n\n\n\n",
"msg_date": "Tue, 23 Feb 2010 23:52:56 +0100",
"msg_from": "negora <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Internal operations when the planner makes a\t hash\n join."
},
{
"msg_contents": ">negora <[email protected]> wrote: \n \n> I even might return the entire result to my external Java\n> application\n \nYou are probably going to want to configure it to use a cursor, at\nleast if the result set is large (i.e., too big to cache the entire\nresult set in memory before you read the first row). Read this over\ncarefully:\n \nhttp://jdbc.postgresql.org/documentation/84/query.html#query-with-cursor\n \nYou don't have to use a Java cursor or do anything procedurally\nthere, but a PostgreSQL cursor is the only way to stream the data to\nthe application on demand (ResultSet.next), rather than pushing it\nall there during the Statement.execute().\n \n-Kevin\n",
"msg_date": "Tue, 23 Feb 2010 17:03:23 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Internal operations when the planner makes a\t\n\t hash join."
},
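For anyone who wants to see the same streaming behaviour from psql rather than through JDBC, a rough hand-written equivalent is an explicit cursor; the cursor name and the fetch size of 1000 are arbitrary, and the query reuses the earlier example tables:

BEGIN;
DECLARE big_cur CURSOR FOR
    SELECT s.complete_name, f.complete_name
    FROM students AS s
    JOIN fathers AS f ON f.id_father = s.id_father;
FETCH 1000 FROM big_cur;   -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;

The JDBC driver arranges something equivalent under the hood when a fetch size is set and autocommit is off, as described in the linked documentation.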
{
"msg_contents": " \n\n> -----Original Message-----\n> From: negora [mailto:[email protected]] \n> Sent: Tuesday, February 23, 2010 4:33 PM\n> To: Scott Carey\n> Cc: Alvaro Herrera; [email protected]\n> Subject: Re: Internal operations when the planner makes a hash join.\n> \n> Thank you for explaining me the internal behaviour of the \n> PostgreSQL engine. I'll try to look for more information \n> about that hash tables. It sounds really really interesting. \n> Your information was very useful.\n> \n> The origin of my doubt resides in the fact that I need to do \n> a joint between 3 HUGE tables (millions of registries) and do \n> certain operations with the retrieved information. I was \n> deciding whether to use one SELECT with 3 JOINs, as I've been \n> doing since the beginning, or build a PL/PgSQL function based \n> on 3 nested \"FOR ... IN SELECT ... LOOP\" \n> structures which tried to minimize the subsequent table \n> searches storing intermediate useful data in arrays \n> (curiously, these would act as the hash tables which you \n> mention, but in a very very rudimentary way). In a case like \n> this one (possibly unable to fit in RAM), Is also JOIN the \n> best solution?\n> \n> Since I've to retrieve such a big amount of columns and \n> crossed registries I had started to think that using 1 SELECT \n> with 3 JOINs would increase the number of table searches a \n> LOT and also \"duplicate\" the information too much. I mean \n> \"duplicate\" as in this case, where the Factor 1 appears \n> millions of times for every Element:\n> \n> Element 1 | Sub-factor 1 | Factor 1\n> Element 2 | Subf-actor 1 | Factor 1\n> ...\n> Element 12639747465586 | Sub-factor 1 | Factor 1 Element 1 | \n> Sub-factor 2 | Factor 1\n> \n> I hope not to robber you much time but... What do you think \n> about it? Is it better either 1 SELECT with 3 JOINs or build \n> nested \"FOR ... IN SELECT ... LOOP\" structures? Could it be \n> one of that cases in which I've to choose between either \n> higher speed but higher memory consume (3\n> JOINs) or lower speed but less memory expense (3 FORs)?\n> \n> Thanks again and apologizes for extending this topic too much.\n> \n> \n> Scott Carey wrote:\n> > On Feb 23, 2010, at 8:53 AM, Alvaro Herrera wrote:\n> >\n> > \n> >> negora wrote:\n> >>\n> >> \n> >>> According to how I understood the process, the engine \n> would get the \n> >>> name from the student with ID 1 and would look for the \n> name of the \n> >>> father with ID 1 in the hashed table. It'd do exactly the \n> same with \n> >>> the student #2 and father #2. But my big doubt is about \n> the 3rd one \n> >>> (Anthony). Would the engine \"know\" that it already had \n> retrieved the \n> >>> father's name for the student 1 and would avoid searching for it \n> >>> into the hashed table (using some kind of internal \n> mechanism which \n> >>> allows to \"re-utilize\" the name)? Or would it search into \n> the hashed \n> >>> table again?<br>\n> >>> \n> >> The hash table is searched again. But that's fast, because it's a \n> >> hash table.\n> >>\n> >> \n> >\n> > To answer the question another way, \"remembering\" that it \n> has already seen father A once and tracking that would use a \n> hash table to remember that fact. \n> >\n> > The hash table created by the first scan IS the \"remember \n> you have seen this father\" data structure, optimized for fast \n> lookup. 
So before even looking at the first student, the \n> hash table is built so that it is fast to find out if a \n> father has been seen before, and if so where that father's \n> data is located. Looking this data up is often referred to \n> as a \"probe\" and not a \"scan\" because it takes just as long \n> to do if the hash table has 100 entries or 10000 entries. \n> The drawback is that the whole thing has to fit in RAM.\n> >\n> >\n> > \n> >> -- \n> >> Alvaro Herrera \n> http://www.CommandPrompt.com/\n> >> The PostgreSQL Company - Command Prompt, Inc.\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list \n> >> ([email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >> \n> >\n> >\n> > \n> \n\nSo, you are trying to do \"nested loop\" in PL/PgSQL.\nWhy not let optimizer decide between \"nested loop\" and \"hash join\" based\non your memory settings and statistics collected for objects involved?\nI'm pretty sure, it'll be faster than PL/PgSQL 3 nested loops.\n\nIgor Neyman\n",
"msg_date": "Wed, 24 Feb 2010 09:46:31 -0500",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Internal operations when the planner makes a hash join."
}
] |
[
{
"msg_contents": "Thanks Joe.\n\n1. In my case, I'm erring on the side of not using the limited partitioning\nsupport in PG 8.3, which we're using .... because I'm generating new tables\nall the time, I need to dynamically generate the DML anyway, and it's\nactually less code to just do my own calculation on the application side to\nsee which table name I need to use (as opposed to dynamically creating all\nthe constraints and triggers needed to get PG to do it), and doing almost\neverything in vanilla SQL makes it much easier to port (I'm going to need to\nsupport Oracle within the next 12 months). I have the luxury of not having\nany application code which directly hits the database, it all goes through\none persistence manager class which encapsulates both CRUD operations and\nbulk stats queries, so I don't have to fake the existence of the old table\nwith views or such.\n\n\n2. The idea of partitioning by \"b\", the performance counter type column, is\nan interesting one .... I am definitely going to consider it for a future\nrelease. For now, my new schema is going to end up turning one 300GB table\n(incl indexes) into about 100 tables of sizes ranging from about 0.1GB to\n3.5GB each (with indexes), which feels like an OK size range for both\nmanageability and performance (though I'd still be interested to see what\npeople on this list think). When I get to storing multiple terabytes, having\n6,000 tables is going to look more attractive :-)\n\n\n3. If you were suggesting the (ts, a) index as an alternative to (a, b, ts)\nand (ts) that's an interesting and cunning plan ... I'd need to see how it\nperforms on the queries which extract data series for graphing, which are of\nthe form \" .... where a=2617 and b=4 and ts between '2010-02-22 00:00' and\n'2010-02-25 00:00'\" and which are interactive.\n<testing>\nI tried this and it took quite a bit longer than the version with the\n\"natural\" index (102ms vs 0.5ms both from buffer cache) ... 
and this is a\ntest data set with only 1,000 pieces of equipment (values of a) and my\ndesign target is 25,000 pieces.\n\n# explain analyse select time_stamp, value from foo where a=1 and b=14 and\ntime_stamp between '2010-02-23 09:00' and '2010-02-23 12:00';\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using foo1 on foo (cost=0.00..21196.67 rows=17 width=16)\n(actual time=0.133..102.571 rows=72 loops=1)\n Index Cond: ((time_stamp >= '2010-02-23 09:00:00'::timestamp without time\nzone) AND (time_stamp <= '2010-02-23 12:00:00'::timestamp without time zone)\nAND (a = 1) AND (b = 14))\n Total runtime: 102.720 ms\n(3 rows)\n\nTime: 103.844 ms\n\n\n\n# explain analyse select time_stamp, value from perf_raw_2010_02_23 where\na=1 and b=14 and time_stamp between '2010-02-23 09:00' and '2010-02-23\n12:00';\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on perf_raw_2010_02_23 (cost=5.37..68.55 rows=16\nwidth=16) (actual time=0.107..0.319 rows=72 loops=1)\n Recheck Cond: ((a = 1) AND (b = 14) AND (time_stamp >= '2010-02-23\n09:00:00'::timestamp without time zone) AND (time_stamp <= '2010-02-23\n12:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on perf_raw_2010_02_23_a (cost=0.00..5.36 rows=16\nwidth=0) (actual time=0.082..0.082 rows=72 loops=1)\n Index Cond: ((a = 1) AND (b = 14) AND (time_stamp >= '2010-02-23\n09:00:00'::timestamp without time zone) AND (time_stamp <= '2010-02-23\n12:00:00'::timestamp without time zone))\n Total runtime: 0.456 ms\n(5 rows)\n\nTime: 1.811 ms\n\n\n\"foo\" is just a copy of \"perf_raw_2010_02_23\" with the different indexing\nthat Joe suggested.\n\n\nI don't know how sophisticated the index scan in PG 8.3 is, i.e. whether it\nwill do a sparse scan skipping btree nodes that can be constraint-excluded\nusing the 2nd and subsequent fields of the index tuple. By the looks of the\nperformance, it looks as if it is just doing a simple scan over all the\nindex \"leaf node\" entries in the range and testing them individually.\n\n*Tom* - is an optimization for this in / in plan for a future release?\n\n\nFor my purposes, I only need the ts-first index for doing data rollups,\nwhich are done at the end of each rollup period ... once I have started a\nnew data table, I don't need that index anymore and I can drop it from all\nthe older tables ... right now I am using 1 table per day for raw data, but\nkeeping 30 days history, so I can drop the secondary index from 29 out of 30\nwhich makes the disk space cost of it modest, and the insertion overhead\nseems to be covered.\n\nLooks like the a-first index is definitely necessary for acceptable\ninteractive performance ... it's powering a Flex UI that graphs multiple\ndata series concurrently.\n\n\n\n4. Apart from query performance, the big benefit I wanted from the sharding\nscheme is to get out of the DELETE and VACUUM business .... 
now all my data\naging is table drops, which are much, much faster in PG :-)\n\n\n\n\n\nOn Mon, Feb 22, 2010 at 11:23 PM, Joe Conway <[email protected]> wrote:\n\n>\n> Without going through your very long set of questions in detail, it\n> strikes me that you might be better off if you:\n>\n> 1) use PostgreSQL partitioning (constraint exclusion)\n> 2) partition by ts range\n> 3) consider also including b in your partitioning scheme\n> 4) create one index as (ts, a)\n> 5) use dynamically generated SQL and table names in the application\n> code to create (conditionally) and load the tables\n>\n> But of course test both this and your proposed method and compare ;-)\n>\n> Also you might consider PL/R for some of your analysis (e.g. mode would\n> be simple, but perhaps not as fast):\n> http://www.joeconway.com/web/guest/pl/r\n>\n> HTH,\n>\n> Joe\n>\n>\n\nThanks Joe.1. In my case, I'm erring on the side of not using the limited partitioning support in PG 8.3, which we're using .... because I'm generating new tables all the time, I need to dynamically generate the DML anyway, and it's actually less code to just do my own calculation on the application side to see which table name I need to use (as opposed to dynamically creating all the constraints and triggers needed to get PG to do it), and doing almost everything in vanilla SQL makes it much easier to port (I'm going to need to support Oracle within the next 12 months). I have the luxury of not having any application code which directly hits the database, it all goes through one persistence manager class which encapsulates both CRUD operations and bulk stats queries, so I don't have to fake the existence of the old table with views or such.\n2. The idea of partitioning by \"b\", the performance counter type column, is an interesting one .... I am definitely going to consider it for a future release. For now, my new schema is going to end up turning one 300GB table (incl indexes) into about 100 tables of sizes ranging from about 0.1GB to 3.5GB each (with indexes), which feels like an OK size range for both manageability and performance (though I'd still be interested to see what people on this list think). When I get to storing multiple terabytes, having 6,000 tables is going to look more attractive :-)\n3. If you were suggesting the (ts, a) index as an alternative to (a, b, ts) and (ts) that's an interesting and cunning plan ... I'd need to see how it performs on the queries which extract data series for graphing, which are of the form \" .... where a=2617 and b=4 and ts between '2010-02-22 00:00' and '2010-02-25 00:00'\" and which are interactive.\n<testing>I tried this and it took quite a bit longer than the version with the \"natural\" index (102ms vs 0.5ms both from buffer cache) ... and this is a test data set with only 1,000 pieces of equipment (values of a) and my design target is 25,000 pieces. 
\n# explain analyse select time_stamp, value from foo where a=1 and b=14 and time_stamp between '2010-02-23 09:00' and '2010-02-23 12:00';\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using foo1 on foo (cost=0.00..21196.67 rows=17 width=16) (actual time=0.133..102.571 rows=72 loops=1)\n Index Cond: ((time_stamp >= '2010-02-23 09:00:00'::timestamp without time zone) AND (time_stamp <= '2010-02-23 12:00:00'::timestamp without time zone) AND (a = 1) AND (b = 14))\n Total runtime: 102.720 ms(3 rows)\nTime: 103.844 ms\n# explain analyse select time_stamp, value from perf_raw_2010_02_23 where a=1 and b=14 and time_stamp between '2010-02-23 09:00' and '2010-02-23 12:00';\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on perf_raw_2010_02_23 (cost=5.37..68.55 rows=16 width=16) (actual time=0.107..0.319 rows=72 loops=1) Recheck Cond: ((a = 1) AND (b = 14) AND (time_stamp >= '2010-02-23 09:00:00'::timestamp without time zone) AND (time_stamp <= '2010-02-23 12:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on perf_raw_2010_02_23_a (cost=0.00..5.36 rows=16 width=0) (actual time=0.082..0.082 rows=72 loops=1) Index Cond: ((a = 1) AND (b = 14) AND (time_stamp >= '2010-02-23 09:00:00'::timestamp without time zone) AND (time_stamp <= '2010-02-23 12:00:00'::timestamp without time zone))\n Total runtime: 0.456 ms(5 rows)\nTime: 1.811 ms\"foo\" is just a copy of \"perf_raw_2010_02_23\" with the different indexing that Joe suggested.I don't know how sophisticated the index scan in PG 8.3 is, i.e. whether it will do a sparse scan skipping btree nodes that can be constraint-excluded using the 2nd and subsequent fields of the index tuple. By the looks of the performance, it looks as if it is just doing a simple scan over all the index \"leaf node\" entries in the range and testing them individually. \nTom - is an optimization for this in / in plan for a future release?For my purposes, I only need the ts-first index for doing data rollups, which are done at the end of each rollup period ... once I have started a new data table, I don't need that index anymore and I can drop it from all the older tables ... right now I am using 1 table per day for raw data, but keeping 30 days history, so I can drop the secondary index from 29 out of 30 which makes the disk space cost of it modest, and the insertion overhead seems to be covered.\nLooks like the a-first index is definitely necessary for acceptable interactive performance ... it's powering a Flex UI that graphs multiple data series concurrently.4. Apart from query performance, the big benefit I wanted from the sharding scheme is to get out of the DELETE and VACUUM business .... 
now all my data aging is table drops, which are much, much faster in PG :-)\nOn Mon, Feb 22, 2010 at 11:23 PM, Joe Conway <[email protected]> wrote:\n\nWithout going through your very long set of questions in detail, it\nstrikes me that you might be better off if you:\n\n1) use PostgreSQL partitioning (constraint exclusion)\n2) partition by ts range\n3) consider also including b in your partitioning scheme\n4) create one index as (ts, a)\n5) use dynamically generated SQL and table names in the application\n code to create (conditionally) and load the tables\n\nBut of course test both this and your proposed method and compare ;-)\n\nAlso you might consider PL/R for some of your analysis (e.g. mode would\nbe simple, but perhaps not as fast):\n http://www.joeconway.com/web/guest/pl/r\n\nHTH,\n\nJoe",
"msg_date": "Tue, 23 Feb 2010 17:37:04 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thx and additional Q's ....."
}
] |
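For reference, the per-day layout discussed in the thread above boils down to roughly the DDL below. The column types are an assumption (the full table definition is not shown); the table name and the _a index name match the EXPLAIN output quoted earlier, while the _ts index name is made up here:

-- one raw-data table per day, created on the fly by the application
CREATE TABLE perf_raw_2010_02_23 (
    a          integer,                        -- equipment id
    b          integer,                        -- performance counter type
    time_stamp timestamp without time zone,
    value      bigint
);

-- "natural" index that drives the interactive per-series queries
CREATE INDEX perf_raw_2010_02_23_a
    ON perf_raw_2010_02_23 (a, b, time_stamp);

-- ts-first index used only for the end-of-period rollup, and dropped
-- from older days once the rollup has run
CREATE INDEX perf_raw_2010_02_23_ts
    ON perf_raw_2010_02_23 (time_stamp);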
[
{
"msg_contents": "Hi All;\n\nI have a table that has daily partitions. \n\nThe check constraints look like this:\nCHECK (timezone('EST'::text, insert_dt) >= '2010-01-01'::date \nAND timezone('EST'::text, insert_dt) < '2010-01-02'::date)\n\neach partition has this index:\n \"fact_idx1_20100101_on_cust_id\" btree (cust_id)\n\nIf I run an explain hitting an individual partition I get an index scan:\n\nexplain select distinct cust_id from children.fact_20100101;\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..136891.18 rows=70296 width=38)\n -> Index Scan using fact_idx1_20100101_on_cust_id on fact_20100101 \n(cost=0.00..133112.0\n\n\n\n\n\nHowever the same query against the base table when specifying the check \nconstraint key in the where clause produces sequential scans:\n\n\nexplain select distinct cust_id from fact \nwhere timezone('EST'::text, insert_dt) between '2010-01-01'::date \nand '2010-01-02'::date;\n\n QUERY PLAN \n--------------------------------------------------------------------------------------\n HashAggregate (cost=97671.06..97673.06 rows=200 width=38)\n -> Result (cost=0.00..97638.26 rows=13120 width=38)\n -> Append (cost=0.00..97638.26 rows=13120 width=38)\n -> Seq Scan on fact (cost=0.00..10.60 rows=1 width=98)\n Filter: ((timezone('EST'::text, insert_dt) >= \n'2010-01-01'::date) AND (timezone('EST'::text, insert_dt) <= \n'2010-01-02'::date))\n -> Seq Scan on fact_20100101 fact (cost=0.00..56236.00 \nrows=7558 width=38)\n Filter: ((timezone('EST'::text, insert_dt) >= \n'2010-01-01'::date) AND (timezone('EST'::text, insert_dt) <= \n'2010-01-02'::date))\n -> Seq Scan on fact_20100102 fact (cost=0.00..41391.66 \nrows=5561 width=38)\n Filter: ((timezone('EST'::text, insert_dt) >= \n'2010-01-01'::date) AND (timezone('EST'::text, insert_dt) <= \n'2010-01-02'::date))\n\n\nThoughts?\n\n\nThanks in advance\n\n\n",
"msg_date": "Wed, 24 Feb 2010 07:36:36 -0700",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "partitioned tables query not using indexes"
},
{
"msg_contents": "In response to Kevin Kempter :\n> Hi All;\n> \n> I have a table that has daily partitions. \n> \n> The check constraints look like this:\n> CHECK (timezone('EST'::text, insert_dt) >= '2010-01-01'::date \n> AND timezone('EST'::text, insert_dt) < '2010-01-02'::date)\n> \n> each partition has this index:\n> \"fact_idx1_20100101_on_cust_id\" btree (cust_id)\n> \n> If I run an explain hitting an individual partition I get an index scan:\n> \n> explain select distinct cust_id from children.fact_20100101;\n> \n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------\n> Unique (cost=0.00..136891.18 rows=70296 width=38)\n> -> Index Scan using fact_idx1_20100101_on_cust_id on fact_20100101 \n> (cost=0.00..133112.0\n> \n> \n> \n> \n> \n> However the same query against the base table when specifying the check \n> constraint key in the where clause produces sequential scans:\n\nHave you set constraint_exclusion = on?\n\n\n\n> \n> \n> explain select distinct cust_id from fact \n> where timezone('EST'::text, insert_dt) between '2010-01-01'::date \n> and '2010-01-02'::date;\n\nCan you show the table definition? I'm not sure about the\ntimezone()-function and index...\n\nMaybe you should try to rewrite your code to:\n\nbetween '2010-01-01 00:00'::timestamp and ...\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Wed, 24 Feb 2010 15:55:36 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned tables query not using indexes"
},
{
"msg_contents": "On Wednesday 24 February 2010 07:55:36 A. Kretschmer wrote:\n> In response to Kevin Kempter :\n> > Hi All;\n> >\n> > I have a table that has daily partitions.\n> >\n> > The check constraints look like this:\n> > CHECK (timezone('EST'::text, insert_dt) >= '2010-01-01'::date\n> > AND timezone('EST'::text, insert_dt) < '2010-01-02'::date)\n> >\n> > each partition has this index:\n> > \"fact_idx1_20100101_on_cust_id\" btree (cust_id)\n> >\n> > If I run an explain hitting an individual partition I get an index scan:\n> >\n> > explain select distinct cust_id from children.fact_20100101;\n> >\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >------------------------------------- Unique (cost=0.00..136891.18\n> > rows=70296 width=38)\n> > -> Index Scan using fact_idx1_20100101_on_cust_id on fact_20100101\n> > (cost=0.00..133112.0\n> >\n> >\n> >\n> >\n> >\n> > However the same query against the base table when specifying the check\n> > constraint key in the where clause produces sequential scans:\n> \n> Have you set constraint_exclusion = on?\n\nYes.\n\n> \n> > explain select distinct cust_id from fact\n> > where timezone('EST'::text, insert_dt) between '2010-01-01'::date\n> > and '2010-01-02'::date;\n> \n> Can you show the table definition? I'm not sure about the\n> timezone()-function and index...\n\n Table \"fact_20100101\"\n Column | Type | Modifiers\n-----------------------------+---------------------------------------+-----------\n insert_dt | timestamp with time zone |\n cust_order_id | integer | \n user_row_id | integer |\n cust_id | character varying(40) |\n order_items | integer | \n catalog_id | integer | \n online_order_id_num | character varying(255) | \n order_id | integer | \n promotion_key | integer | \n sales_region_id | integer | \n country_id | integer |\nIndexes:\n index_fact_20100101_on_insert_dt btree (insert_dt)\n index_fact_20100101_on_catalog_id btree (catalog_id)\n index_fact_20100101_on_promotion_key btree (promotion_key)\n index_fact_20100101_on_order_id btree (order_id)\n index_fact_20100101_on_cust_order_id btree (cust_order_id)\n index_fact_20100101_on_user_row_id btree (user_row_id)\n index_fact_20100101_on_cust_id btree (cust_id)\nCheck constraints:\n fact_20100101_insert_dt_check CHECK (timezone('EST'::text, insert_dt) >= \n'2010-01-01'::date \n AND timezone('EST'::text, insert_dt) < '2010-01-02'::date)\nForeign-key constraints:\n fk_country_id\" FOREIGN KEY (country_id) REFERENCES country_dim(id)\n fk_catalog_id\" FOREIGN KEY (catalog_id) REFERENCES catalog_dim(id)\n fk_promotion_key\" FOREIGN KEY (promotion_key) REFERENCES promotion_dim(id)\n fk_order_id\" FOREIGN KEY (order_id) REFERENCES order_dim(id)\nInherits: fact\n\n\n\n\n\n\n> \n> Maybe you should try to rewrite your code to:\n> \n> between '2010-01-01 00:00'::timestamp and ...\nThis (and other date variations gives me index scans however each time I get \nthe planner to do an index scan it also refuses to do partition exclusion. The \noriginal query above gives me partition exclusion but table scans (no index \nscans)\n\n\n\n> \n> \n> Andreas\n> \n",
"msg_date": "Wed, 24 Feb 2010 08:18:41 -0700",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioned tables query not using indexes"
},
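One thing worth noting, offered only as a sketch rather than something tested in this thread: both the CHECK constraints and the WHERE clause are written against the expression timezone('EST'::text, insert_dt), while the existing index is on the bare insert_dt column. An index on that same expression should let each child satisfy the predicate with an index scan while constraint exclusion still prunes the other partitions (this assumes the timezone(text, timestamp with time zone) form is immutable, which it has to be for an expression index to be accepted):

-- hypothetical index name, following the existing naming convention
CREATE INDEX index_fact_20100101_on_insert_dt_est
    ON fact_20100101 (timezone('EST'::text, insert_dt));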
{
"msg_contents": "\n> However the same query against the base table when specifying the check \n> constraint key in the where clause produces sequential scans:\n\nDoes the \"master\" table have the same indexes as the slave partitions?\n\n--Josh Berkus\n",
"msg_date": "Sun, 28 Feb 2010 12:29:14 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned tables query not using indexes"
},
{
"msg_contents": "On Sun, Feb 28, 2010 at 12:29:14PM -0800, Josh Berkus wrote:\n> \n> > However the same query against the base table when specifying the check \n> > constraint key in the where clause produces sequential scans:\n> \n> Does the \"master\" table have the same indexes as the slave partitions?\n> \n> --Josh Berkus\n> \nDoes this help? I have an empty base table without indexes and partitions\nunderneath that do have the index. I did not think that an index on the\nparent table did anything.\n\nCheers,\nKen\n",
"msg_date": "Sun, 28 Feb 2010 15:51:57 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned tables query not using indexes"
},
{
"msg_contents": "On 2/28/10 1:51 PM, Kenneth Marshall wrote:\n> On Sun, Feb 28, 2010 at 12:29:14PM -0800, Josh Berkus wrote:\n>>> However the same query against the base table when specifying the check \n>>> constraint key in the where clause produces sequential scans:\n>> Does the \"master\" table have the same indexes as the slave partitions?\n>>\n>> --Josh Berkus\n>>\n> Does this help? I have an empty base table without indexes and partitions\n> underneath that do have the index. I did not think that an index on the\n> parent table did anything.\n\nI'm not sure that it does, but \"try it and see\" is easier than reading\nthe planner code.\n\n--Josh Berkus\n",
"msg_date": "Sun, 28 Feb 2010 14:46:52 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned tables query not using indexes"
}
] |
[
{
"msg_contents": "This is a generic SQL issue and not PG specific, but I'd like to get\nan opinion from this list.\n\nConsider the following data:\n\n# \\d bar\n Table \"public.bar\"\n Column | Type | Modifiers\n--------+-----------------------------+-----------\n city | character varying(255) |\n temp | integer |\n date | timestamp without time zone |\n\n# select * from bar order by city, date;\n city | temp | date\n-----------+------+---------------------\n Austin | 75 | 2010-02-21 15:00:00\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 56 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(5 rows)\n\nIf you want the highest recorded temperature for a city, that's easy\nto do, since the selection criteria works on the same column that we\nare extracing:\n\n# select city, max(temp) from bar group by city order by 1;\n city | max\n-----------+-----\n Austin | 75\n Edinburgh | 42\n New York | 78\n(3 rows)\n\n\nHowever there is (AFAIK) no simple way in plain SQL to write a query\nthat performs such an aggregation where the aggregation criteria is on\none column and you want to return another, e.g. adding the the *date\nof* that highest temperature to the output above, or doing a query to\nget the most recent temperature reading for each city.\n\nWhat I'd like to do is something like the below (and I'm inventing\nmock syntax here, the following is not valid SQL):\n\n-- Ugly implicit syntax but no worse than an Oracle outer join ;-)\nselect city, temp, date from bar where date=max(date) group by city,\ntemp order by city;\n\nor perhaps\n\n-- More explicit\nselect aggregate_using(max(date), city, temp, date) from bar group by\ncity, temp order by city;\n\nBoth of the above, if they existed, would be a single data access\nfollowed by and sort-merge.\n\nThe only way I know how to do it involves doing two accesses to the data, e.g.\n\n# select city, temp, date from bar a where date=(select max(b.date)\nfrom bar b where a.city=b.city) order by 1;\n city | temp | date\n-----------+------+---------------------\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(3 rows)\n\n\n# explain select * from bar a where date=(select max(b.date) from bar\nb where a.city=b.city) order by 1;\n QUERY PLAN\n--------------------------------------------------------------------------\n Sort (cost=1658.86..1658.87 rows=1 width=528)\n Sort Key: a.city\n -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n Filter: (date = (subplan))\n SubPlan\n -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n -> Seq Scan on bar b (cost=0.00..11.75 rows=1\nwidth=8) -- would be an index lookup in a real scenario\n Filter: (($0)::text = (city)::text)\n(8 rows)\n",
"msg_date": "Wed, 24 Feb 2010 15:31:16 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "This looks to be a perfect use for SELECT DISTINCT ON:\n\nSELECT DISTINCT ON (city)\n* FROM bar\nORDER BY city, temp desc\n\nOr am I misunderstanding the issue?\n\nGarrett Murphy\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Dave Crooke\nSent: Wednesday, February 24, 2010 2:31 PM\nTo: pgsql-performance\nSubject: [PERFORM] Extracting superlatives - SQL design philosophy\n\nThis is a generic SQL issue and not PG specific, but I'd like to get\nan opinion from this list.\n\nConsider the following data:\n\n# \\d bar\n Table \"public.bar\"\n Column | Type | Modifiers\n--------+-----------------------------+-----------\n city | character varying(255) |\n temp | integer |\n date | timestamp without time zone |\n\n# select * from bar order by city, date;\n city | temp | date\n-----------+------+---------------------\n Austin | 75 | 2010-02-21 15:00:00\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 56 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(5 rows)\n\nIf you want the highest recorded temperature for a city, that's easy\nto do, since the selection criteria works on the same column that we\nare extracing:\n\n# select city, max(temp) from bar group by city order by 1;\n city | max\n-----------+-----\n Austin | 75\n Edinburgh | 42\n New York | 78\n(3 rows)\n\n\nHowever there is (AFAIK) no simple way in plain SQL to write a query\nthat performs such an aggregation where the aggregation criteria is on\none column and you want to return another, e.g. adding the the *date\nof* that highest temperature to the output above, or doing a query to\nget the most recent temperature reading for each city.\n\nWhat I'd like to do is something like the below (and I'm inventing\nmock syntax here, the following is not valid SQL):\n\n-- Ugly implicit syntax but no worse than an Oracle outer join ;-)\nselect city, temp, date from bar where date=max(date) group by city,\ntemp order by city;\n\nor perhaps\n\n-- More explicit\nselect aggregate_using(max(date), city, temp, date) from bar group by\ncity, temp order by city;\n\nBoth of the above, if they existed, would be a single data access\nfollowed by and sort-merge.\n\nThe only way I know how to do it involves doing two accesses to the data, e.g.\n\n# select city, temp, date from bar a where date=(select max(b.date)\nfrom bar b where a.city=b.city) order by 1;\n city | temp | date\n-----------+------+---------------------\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(3 rows)\n\n\n# explain select * from bar a where date=(select max(b.date) from bar\nb where a.city=b.city) order by 1;\n QUERY PLAN\n--------------------------------------------------------------------------\n Sort (cost=1658.86..1658.87 rows=1 width=528)\n Sort Key: a.city\n -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n Filter: (date = (subplan))\n SubPlan\n -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n -> Seq Scan on bar b (cost=0.00..11.75 rows=1\nwidth=8) -- would be an index lookup in a real scenario\n Filter: (($0)::text = (city)::text)\n(8 rows)\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Feb 2010 16:43:20 -0500",
"msg_from": "\"Garrett Murphy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
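A small refinement on the DISTINCT ON approach, sketched against the same bar table: because the whole row is returned, the date of the maximum temperature comes along for free, and adding a secondary sort key makes the choice among tied temperatures deterministic:

SELECT DISTINCT ON (city)
       city, temp, date
FROM bar
ORDER BY city, temp DESC, date;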
{
"msg_contents": "Dave Crooke wrote:\n> This is a generic SQL issue and not PG specific, but I'd like to get\n> an opinion from this list.\n> \n> Consider the following data:\n> \n> # \\d bar\n> Table \"public.bar\"\n> Column | Type | Modifiers\n> --------+-----------------------------+-----------\n> city | character varying(255) |\n> temp | integer |\n> date | timestamp without time zone |\n> \n> # select * from bar order by city, date;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 75 | 2010-02-21 15:00:00\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 56 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (5 rows)\n> \n> If you want the highest recorded temperature for a city, that's easy\n> to do, since the selection criteria works on the same column that we\n> are extracing:\n> \n> # select city, max(temp) from bar group by city order by 1;\n> city | max\n> -----------+-----\n> Austin | 75\n> Edinburgh | 42\n> New York | 78\n> (3 rows)\n> \n> \n> However there is (AFAIK) no simple way in plain SQL to write a query\n> that performs such an aggregation where the aggregation criteria is on\n> one column and you want to return another, e.g. adding the the *date\n> of* that highest temperature to the output above, or doing a query to\n> get the most recent temperature reading for each city.\n\nIf you add a unique-id column to your table that's filled in from a sequence, it becomes easy:\n\n select city, temp, date from bar where id in\n (select id from bar where ... whatever you like ...);\n\nCraig\n",
"msg_date": "Wed, 24 Feb 2010 13:48:21 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "Can you try using window functions?\n\nSomething like this:\n\nselect distinct\n city,\n first_value(temp) over w as max_temp,\n first_value(date) over w as max_temp_date\n from\n cities\n window w as (partition by city order by temp desc)\n\nhttp://www.postgresql.org/docs/current/static/tutorial-window.html\n\n- Mose\n\nOn Wed, Feb 24, 2010 at 1:31 PM, Dave Crooke <[email protected]> wrote:\n\n> This is a generic SQL issue and not PG specific, but I'd like to get\n> an opinion from this list.\n>\n> Consider the following data:\n>\n> # \\d bar\n> Table \"public.bar\"\n> Column | Type | Modifiers\n> --------+-----------------------------+-----------\n> city | character varying(255) |\n> temp | integer |\n> date | timestamp without time zone |\n>\n> # select * from bar order by city, date;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 75 | 2010-02-21 15:00:00\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 56 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (5 rows)\n>\n> If you want the highest recorded temperature for a city, that's easy\n> to do, since the selection criteria works on the same column that we\n> are extracing:\n>\n> # select city, max(temp) from bar group by city order by 1;\n> city | max\n> -----------+-----\n> Austin | 75\n> Edinburgh | 42\n> New York | 78\n> (3 rows)\n>\n>\n> However there is (AFAIK) no simple way in plain SQL to write a query\n> that performs such an aggregation where the aggregation criteria is on\n> one column and you want to return another, e.g. adding the the *date\n> of* that highest temperature to the output above, or doing a query to\n> get the most recent temperature reading for each city.\n>\n> What I'd like to do is something like the below (and I'm inventing\n> mock syntax here, the following is not valid SQL):\n>\n> -- Ugly implicit syntax but no worse than an Oracle outer join ;-)\n> select city, temp, date from bar where date=max(date) group by city,\n> temp order by city;\n>\n> or perhaps\n>\n> -- More explicit\n> select aggregate_using(max(date), city, temp, date) from bar group by\n> city, temp order by city;\n>\n> Both of the above, if they existed, would be a single data access\n> followed by and sort-merge.\n>\n> The only way I know how to do it involves doing two accesses to the data,\n> e.g.\n>\n> # select city, temp, date from bar a where date=(select max(b.date)\n> from bar b where a.city=b.city) order by 1;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (3 rows)\n>\n>\n> # explain select * from bar a where date=(select max(b.date) from bar\n> b where a.city=b.city) order by 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Sort (cost=1658.86..1658.87 rows=1 width=528)\n> Sort Key: a.city\n> -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n> Filter: (date = (subplan))\n> SubPlan\n> -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n> -> Seq Scan on bar b (cost=0.00..11.75 rows=1\n> width=8) -- would be an index lookup in a real scenario\n> Filter: (($0)::text = (city)::text)\n> (8 rows)\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nCan you try using window functions?Something like this:select distinct city, 
first_value(temp) over w as max_temp, first_value(date) over w as max_temp_date\n from cities window w as (partition by city order by temp desc)http://www.postgresql.org/docs/current/static/tutorial-window.html\n- MoseOn Wed, Feb 24, 2010 at 1:31 PM, Dave Crooke <[email protected]> wrote:\nThis is a generic SQL issue and not PG specific, but I'd like to get\nan opinion from this list.\n\nConsider the following data:\n\n# \\d bar\n Table \"public.bar\"\n Column | Type | Modifiers\n--------+-----------------------------+-----------\n city | character varying(255) |\n temp | integer |\n date | timestamp without time zone |\n\n# select * from bar order by city, date;\n city | temp | date\n-----------+------+---------------------\n Austin | 75 | 2010-02-21 15:00:00\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 56 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(5 rows)\n\nIf you want the highest recorded temperature for a city, that's easy\nto do, since the selection criteria works on the same column that we\nare extracing:\n\n# select city, max(temp) from bar group by city order by 1;\n city | max\n-----------+-----\n Austin | 75\n Edinburgh | 42\n New York | 78\n(3 rows)\n\n\nHowever there is (AFAIK) no simple way in plain SQL to write a query\nthat performs such an aggregation where the aggregation criteria is on\none column and you want to return another, e.g. adding the the *date\nof* that highest temperature to the output above, or doing a query to\nget the most recent temperature reading for each city.\n\nWhat I'd like to do is something like the below (and I'm inventing\nmock syntax here, the following is not valid SQL):\n\n-- Ugly implicit syntax but no worse than an Oracle outer join ;-)\nselect city, temp, date from bar where date=max(date) group by city,\ntemp order by city;\n\nor perhaps\n\n-- More explicit\nselect aggregate_using(max(date), city, temp, date) from bar group by\ncity, temp order by city;\n\nBoth of the above, if they existed, would be a single data access\nfollowed by and sort-merge.\n\nThe only way I know how to do it involves doing two accesses to the data, e.g.\n\n# select city, temp, date from bar a where date=(select max(b.date)\nfrom bar b where a.city=b.city) order by 1;\n city | temp | date\n-----------+------+---------------------\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(3 rows)\n\n\n# explain select * from bar a where date=(select max(b.date) from bar\nb where a.city=b.city) order by 1;\n QUERY PLAN\n--------------------------------------------------------------------------\n Sort (cost=1658.86..1658.87 rows=1 width=528)\n Sort Key: a.city\n -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n Filter: (date = (subplan))\n SubPlan\n -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n -> Seq Scan on bar b (cost=0.00..11.75 rows=1\nwidth=8) -- would be an index lookup in a real scenario\n Filter: (($0)::text = (city)::text)\n(8 rows)\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 24 Feb 2010 13:50:24 -0800",
"msg_from": "Mose <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
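Two caveats on the window-function version, for anyone trying it: window functions require PostgreSQL 8.4 or later, and the query above selects from a cities table where the thread's example table is named bar. A version against bar would look roughly like this:

SELECT DISTINCT
       city,
       first_value(temp) OVER w AS max_temp,
       first_value(date) OVER w AS max_temp_date
FROM bar
WINDOW w AS (PARTITION BY city ORDER BY temp DESC);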
{
"msg_contents": "You could do:\n\nselect \n\tB.City,\n\tMaxCityTemp.Temp,\n\tmin(B.Date) as FirstMaxDate\nfrom bar b \n\tINNER JOIN (select city,max(temp) as Temp from Bar group by City) as\nMaxCityTemp\n\tON B.City=MaxCityTemp.City\nGroup by\n\tB.City,\n\tMaxCityTemp.Temp\n\t\nGeorge Sexton\nMH Software, Inc.\nhttp://www.mhsoftware.com/\nVoice: 303 438 9585\n \n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Dave Crooke\n> Sent: Wednesday, February 24, 2010 2:31 PM\n> To: pgsql-performance\n> Subject: [PERFORM] Extracting superlatives - SQL design philosophy\n> \n> This is a generic SQL issue and not PG specific, but I'd like to get\n> an opinion from this list.\n> \n> Consider the following data:\n> \n> # \\d bar\n> Table \"public.bar\"\n> Column | Type | Modifiers\n> --------+-----------------------------+-----------\n> city | character varying(255) |\n> temp | integer |\n> date | timestamp without time zone |\n> \n> # select * from bar order by city, date;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 75 | 2010-02-21 15:00:00\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 56 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (5 rows)\n> \n> If you want the highest recorded temperature for a city, that's easy\n> to do, since the selection criteria works on the same column that we\n> are extracing:\n> \n> # select city, max(temp) from bar group by city order by 1;\n> city | max\n> -----------+-----\n> Austin | 75\n> Edinburgh | 42\n> New York | 78\n> (3 rows)\n> \n> \n> However there is (AFAIK) no simple way in plain SQL to write a query\n> that performs such an aggregation where the aggregation criteria is on\n> one column and you want to return another, e.g. 
adding the the *date\n> of* that highest temperature to the output above, or doing a query to\n> get the most recent temperature reading for each city.\n> \n> What I'd like to do is something like the below (and I'm inventing\n> mock syntax here, the following is not valid SQL):\n> \n> -- Ugly implicit syntax but no worse than an Oracle outer join ;-)\n> select city, temp, date from bar where date=max(date) group by city,\n> temp order by city;\n> \n> or perhaps\n> \n> -- More explicit\n> select aggregate_using(max(date), city, temp, date) from bar group by\n> city, temp order by city;\n> \n> Both of the above, if they existed, would be a single data access\n> followed by and sort-merge.\n> \n> The only way I know how to do it involves doing two accesses to the\n> data, e.g.\n> \n> # select city, temp, date from bar a where date=(select max(b.date)\n> from bar b where a.city=b.city) order by 1;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (3 rows)\n> \n> \n> # explain select * from bar a where date=(select max(b.date) from bar\n> b where a.city=b.city) order by 1;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> ---\n> Sort (cost=1658.86..1658.87 rows=1 width=528)\n> Sort Key: a.city\n> -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n> Filter: (date = (subplan))\n> SubPlan\n> -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n> -> Seq Scan on bar b (cost=0.00..11.75 rows=1\n> width=8) -- would be an index lookup in a real scenario\n> Filter: (($0)::text = (city)::text)\n> (8 rows)\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Wed, 24 Feb 2010 14:57:47 -0700",
"msg_from": "\"George Sexton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "I missed something:\n\nselect \n\tB.City,\n\tMaxCityTemp.Temp,\n\tmin(B.Date) as FirstMaxDate\nfrom bar b \n\tINNER JOIN (select city,max(temp) as Temp from Bar group by City) as\nMaxCityTemp\n\tON B.City=MaxCityTemp.City AND B.Temp=MaxCityTemp.Temp\nGroup by\n\tB.City,\n\tMaxCityTemp.Temp\n\nGeorge Sexton\nMH Software, Inc.\nhttp://www.mhsoftware.com/\nVoice: 303 438 9585\n \n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of George Sexton\n> Sent: Wednesday, February 24, 2010 2:58 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Extracting superlatives - SQL design philosophy\n> \n> You could do:\n> \n> select\n> \tB.City,\n> \tMaxCityTemp.Temp,\n> \tmin(B.Date) as FirstMaxDate\n> from bar b\n> \tINNER JOIN (select city,max(temp) as Temp from Bar group by City)\n> as\n> MaxCityTemp\n> \tON B.City=MaxCityTemp.City\n> Group by\n> \tB.City,\n> \tMaxCityTemp.Temp\n> \n> George Sexton\n> MH Software, Inc.\n> http://www.mhsoftware.com/\n> Voice: 303 438 9585\n> \n> \n> > -----Original Message-----\n> > From: [email protected] [mailto:pgsql-\n> performance-\n> > [email protected]] On Behalf Of Dave Crooke\n> > Sent: Wednesday, February 24, 2010 2:31 PM\n> > To: pgsql-performance\n> > Subject: [PERFORM] Extracting superlatives - SQL design philosophy\n> >\n> > This is a generic SQL issue and not PG specific, but I'd like to get\n> > an opinion from this list.\n> >\n> > Consider the following data:\n> >\n> > # \\d bar\n> > Table \"public.bar\"\n> > Column | Type | Modifiers\n> > --------+-----------------------------+-----------\n> > city | character varying(255) |\n> > temp | integer |\n> > date | timestamp without time zone |\n> >\n> > # select * from bar order by city, date;\n> > city | temp | date\n> > -----------+------+---------------------\n> > Austin | 75 | 2010-02-21 15:00:00\n> > Austin | 35 | 2010-02-23 15:00:00\n> > Edinburgh | 42 | 2010-02-23 15:00:00\n> > New York | 56 | 2010-02-23 15:00:00\n> > New York | 78 | 2010-06-23 15:00:00\n> > (5 rows)\n> >\n> > If you want the highest recorded temperature for a city, that's easy\n> > to do, since the selection criteria works on the same column that we\n> > are extracing:\n> >\n> > # select city, max(temp) from bar group by city order by 1;\n> > city | max\n> > -----------+-----\n> > Austin | 75\n> > Edinburgh | 42\n> > New York | 78\n> > (3 rows)\n> >\n> >\n> > However there is (AFAIK) no simple way in plain SQL to write a query\n> > that performs such an aggregation where the aggregation criteria is\n> on\n> > one column and you want to return another, e.g. 
adding the the *date\n> > of* that highest temperature to the output above, or doing a query to\n> > get the most recent temperature reading for each city.\n> >\n> > What I'd like to do is something like the below (and I'm inventing\n> > mock syntax here, the following is not valid SQL):\n> >\n> > -- Ugly implicit syntax but no worse than an Oracle outer join ;-)\n> > select city, temp, date from bar where date=max(date) group by city,\n> > temp order by city;\n> >\n> > or perhaps\n> >\n> > -- More explicit\n> > select aggregate_using(max(date), city, temp, date) from bar group by\n> > city, temp order by city;\n> >\n> > Both of the above, if they existed, would be a single data access\n> > followed by and sort-merge.\n> >\n> > The only way I know how to do it involves doing two accesses to the\n> > data, e.g.\n> >\n> > # select city, temp, date from bar a where date=(select max(b.date)\n> > from bar b where a.city=b.city) order by 1;\n> > city | temp | date\n> > -----------+------+---------------------\n> > Austin | 35 | 2010-02-23 15:00:00\n> > Edinburgh | 42 | 2010-02-23 15:00:00\n> > New York | 78 | 2010-06-23 15:00:00\n> > (3 rows)\n> >\n> >\n> > # explain select * from bar a where date=(select max(b.date) from bar\n> > b where a.city=b.city) order by 1;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------\n> --\n> > ---\n> > Sort (cost=1658.86..1658.87 rows=1 width=528)\n> > Sort Key: a.city\n> > -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n> > Filter: (date = (subplan))\n> > SubPlan\n> > -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n> > -> Seq Scan on bar b (cost=0.00..11.75 rows=1\n> > width=8) -- would be an index lookup in a real scenario\n> > Filter: (($0)::text = (city)::text)\n> > (8 rows)\n> >\n> > --\n> > Sent via pgsql-performance mailing list (pgsql-\n> > [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Wed, 24 Feb 2010 15:00:09 -0700",
"msg_from": "\"George Sexton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "Garrett's is the best answer from the list .... the only fly in the\nointment here is that it performs a full sort of the records, which\nisn't strictly necessary to the required output. This is functionally\nequivalent to what I came up with for a MODE (most common value)\naggregation, but the syntax is a bit neater.\n\nCraig / Geroge - there are lots of ways to do this with a subquery or\njoin back against the bar table. My goal was actually to avoid this\ndual lookup and join, not for this contrived example but for my\nreal-world case where the \"bar\" table is not actually a table but is a\nrecord set generated on the fly with a non-trivial subquery which I\ndon't want to repeat.\n\nMose - I think I get what you're aiming at here, but your query as\nstated returns a syntax error.\n\nWhat I'd like to have is a way in SQL to get an execution plan which\nmatches Java algorithm below, which just does one \"table scan\" down\nthe data and aggregates in place, i.e. it's O(n) with row lookups:\n\n HashMap<String,Row> winners = new HashMap<String,Row>();\n\n for (Row r : rows) {\n Row oldrow = winners.get(r.city);\n if (oldrow == null || r.temp > oldrow.temp) winnders.put(r.city, r);\n };\n for (String city : winners.keySet()) System.out.println(winners.get(city));\n\nI'd imagine it would be possible to have a query planner optimization\nthat would convert Garrett's DISTINCT ON syntax to do what I was\ntrying to, by realizing that DISTINCT ON X ... ORDER BY Y DESC is\ngoing to return the the one row for each X which has the highest value\nof Y, and so use a MAX-structured accumulation instead of a sort.\n\nCheers\nDave\n\nOn Wed, Feb 24, 2010 at 3:43 PM, Garrett Murphy <[email protected]> wrote:\n> This looks to be a perfect use for SELECT DISTINCT ON:\n>\n> SELECT DISTINCT ON (city)\n> * FROM bar\n> ORDER BY city, temp desc\n>\n> Or am I misunderstanding the issue?\n>\n> Garrett Murphy\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Dave Crooke\n> Sent: Wednesday, February 24, 2010 2:31 PM\n> To: pgsql-performance\n> Subject: [PERFORM] Extracting superlatives - SQL design philosophy\n>\n> This is a generic SQL issue and not PG specific, but I'd like to get\n> an opinion from this list.\n>\n> Consider the following data:\n>\n> # \\d bar\n> Table \"public.bar\"\n> Column | Type | Modifiers\n> --------+-----------------------------+-----------\n> city | character varying(255) |\n> temp | integer |\n> date | timestamp without time zone |\n>\n> # select * from bar order by city, date;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 75 | 2010-02-21 15:00:00\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 56 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (5 rows)\n>\n> If you want the highest recorded temperature for a city, that's easy\n> to do, since the selection criteria works on the same column that we\n> are extracing:\n>\n> # select city, max(temp) from bar group by city order by 1;\n> city | max\n> -----------+-----\n> Austin | 75\n> Edinburgh | 42\n> New York | 78\n> (3 rows)\n>\n>\n> However there is (AFAIK) no simple way in plain SQL to write a query\n> that performs such an aggregation where the aggregation criteria is on\n> one column and you want to return another, e.g. 
adding the the *date\n> of* that highest temperature to the output above, or doing a query to\n> get the most recent temperature reading for each city.\n>\n> What I'd like to do is something like the below (and I'm inventing\n> mock syntax here, the following is not valid SQL):\n>\n> -- Ugly implicit syntax but no worse than an Oracle outer join ;-)\n> select city, temp, date from bar where date=max(date) group by city,\n> temp order by city;\n>\n> or perhaps\n>\n> -- More explicit\n> select aggregate_using(max(date), city, temp, date) from bar group by\n> city, temp order by city;\n>\n> Both of the above, if they existed, would be a single data access\n> followed by and sort-merge.\n>\n> The only way I know how to do it involves doing two accesses to the data, e.g.\n>\n> # select city, temp, date from bar a where date=(select max(b.date)\n> from bar b where a.city=b.city) order by 1;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (3 rows)\n>\n>\n> # explain select * from bar a where date=(select max(b.date) from bar\n> b where a.city=b.city) order by 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Sort (cost=1658.86..1658.87 rows=1 width=528)\n> Sort Key: a.city\n> -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n> Filter: (date = (subplan))\n> SubPlan\n> -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n> -> Seq Scan on bar b (cost=0.00..11.75 rows=1\n> width=8) -- would be an index lookup in a real scenario\n> Filter: (($0)::text = (city)::text)\n> (8 rows)\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 24 Feb 2010 16:47:54 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "On 24/02/10 22:47, Dave Crooke wrote:\n> I'd imagine it would be possible to have a query planner optimization\n> that would convert Garrett's DISTINCT ON syntax to do what I was\n> trying to, by realizing that DISTINCT ON X ... ORDER BY Y DESC is\n> going to return the the one row for each X which has the highest value\n> of Y, and so use a MAX-structured accumulation instead of a sort.\n\nWhy is there only one row? For city temperatures, that seems unlikely.\n\nIn the event of more than one row does your algorithm give repeatable \nresults?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 24 Feb 2010 23:12:25 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "1. The city temps table is a toy example, not meant to be realistic :-)\n\n2. Yes, my (Java) algorithm is deterministic ... it will return\nexactly one row per city, and that will be the row (or strictly, *a*\nrow) containing the highest temp. Temp value ties will break in favour\nof earlier rows in Guinness Book of Records tradition :-) It's\nequivalent to a HashAggregate implementation.\n\n\nThe following two query plans (from my real schema) illustrate the\nitch I am trying to scratch .... I want the functionality of the 2nd\none, but with the execution plan structure of the first:\n\n# explain analyse select a, max(b) from perf_raw_2010_02_23 group by a;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=117953.09..117961.07 rows=639 width=8) (actual\ntime=10861.845..10863.008 rows=1023 loops=1)\n -> Seq Scan on perf_raw_2010_02_23 (cost=0.00..91572.39\nrows=5276139 width=8) (actual time=0.038..4459.222 rows=5276139\nloops=1)\n Total runtime: 10863.856 ms\n(3 rows)\n\nTime: 10864.817 ms\n# explain analyse select distinct on (a) * from perf_raw_2010_02_23\norder by a, b desc ;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1059395.04..1085775.73 rows=639 width=28) (actual\ntime=46011.204..58428.210 rows=1023 loops=1)\n -> Sort (cost=1059395.04..1072585.39 rows=5276139 width=28)\n(actual time=46011.200..53561.112 rows=5276139 loops=1)\n Sort Key: a, b\n Sort Method: external merge Disk: 247584kB\n-- actually OS RAM buffers\n -> Seq Scan on perf_raw_2010_02_23 (cost=0.00..91572.39\nrows=5276139 width=28) (actual time=0.047..6491.036 rows=5276139\nloops=1)\n Total runtime: 58516.185 ms\n(6 rows)\n\nTime: 58517.233 ms\n\nThe only difference between these two is that the second query returns\nthe whole row. The *ratio* in cost between these two plans increases\nin proportion to log(n) of the table size ... at 5.5m rows its\nlivable, at 500m it's probably not :-!\n\nCheers\nDave\n\nOn Wed, Feb 24, 2010 at 5:12 PM, Richard Huxton <[email protected]> wrote:\n> On 24/02/10 22:47, Dave Crooke wrote:\n>>\n>> I'd imagine it would be possible to have a query planner optimization\n>> that would convert Garrett's DISTINCT ON syntax to do what I was\n>> trying to, by realizing that DISTINCT ON X ... ORDER BY Y DESC is\n>> going to return the the one row for each X which has the highest value\n>> of Y, and so use a MAX-structured accumulation instead of a sort.\n>\n> Why is there only one row? For city temperatures, that seems unlikely.\n>\n> In the event of more than one row does your algorithm give repeatable\n> results?\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n",
"msg_date": "Wed, 24 Feb 2010 17:37:11 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "On 24/02/10 23:37, Dave Crooke wrote:\n> 1. The city temps table is a toy example, not meant to be realistic :-)\n\nYou knew that and I guessed it, but it's worth stating these things for \npeople who read the archives a year from now.\n\n> 2. Yes, my (Java) algorithm is deterministic ... it will return\n> exactly one row per city, and that will be the row (or strictly, *a*\n> row) containing the highest temp. Temp value ties will break in favour\n> of earlier rows in Guinness Book of Records tradition :-) It's\n> equivalent to a HashAggregate implementation.\n\nBut not when you add in other columns (which is what you're trying to do).\n\n> The following two query plans (from my real schema) illustrate the\n> itch I am trying to scratch .... I want the functionality of the 2nd\n> one, but with the execution plan structure of the first:\n>\n> # explain analyse select a, max(b) from perf_raw_2010_02_23 group by a;\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=117953.09..117961.07 rows=639 width=8) (actual\n> time=10861.845..10863.008 rows=1023 loops=1)\n> -> Seq Scan on perf_raw_2010_02_23 (cost=0.00..91572.39\n> rows=5276139 width=8) (actual time=0.038..4459.222 rows=5276139\n> loops=1)\n> Total runtime: 10863.856 ms\n> (3 rows)\n>\n> Time: 10864.817 ms\n> # explain analyse select distinct on (a) * from perf_raw_2010_02_23\n> order by a, b desc ;\n\nOne big bit of the cost difference is going to be the ordering you need \nto get a repeatable result.\n\n> QUERY\n> PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=1059395.04..1085775.73 rows=639 width=28) (actual\n> time=46011.204..58428.210 rows=1023 loops=1)\n> -> Sort (cost=1059395.04..1072585.39 rows=5276139 width=28)\n> (actual time=46011.200..53561.112 rows=5276139 loops=1)\n> Sort Key: a, b\n> Sort Method: external merge Disk: 247584kB\n> -- actually OS RAM buffers\n\nEven if the sort never actually reaches a physical disk, you should \nstill see an increase by increasing sort_mem for the duration of the one \nquery. It's not going to be the magnitude you want, but probably worth \ndoing.\n\n> -> Seq Scan on perf_raw_2010_02_23 (cost=0.00..91572.39\n> rows=5276139 width=28) (actual time=0.047..6491.036 rows=5276139\n> loops=1)\n> Total runtime: 58516.185 ms\n> (6 rows)\n>\n> Time: 58517.233 ms\n>\n> The only difference between these two is that the second query returns\n> the whole row. The *ratio* in cost between these two plans increases\n> in proportion to log(n) of the table size ... at 5.5m rows its\n> livable, at 500m it's probably not :-!\n\nIf performance on this query is vital to you, and the table doesn't \nchange after initial population (which I'm guessing is true) then try an \nindex on (a asc, b desc) and CLUSTER that index. Depending on the ratio \nof distinct a:b values that could be what you're after.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 25 Feb 2010 08:10:37 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "\n> -- More explicit\n> select aggregate_using(max(date), city, temp, date) from bar group by\n> city, temp order by city;\n\nselect city, max(ROW(temp, date)) from bar group by city;\n\nDoes not work (alas) for lack of a default comparison for record type.\n\nAnother solution, which works wonders if you've got the list of cities in \na separate table, and an index on (city, temp) is this :\n\nSELECT c.city, (SELECT ROW( t.date, t.temp ) FROM cities_temp t WHERE \nt.city=c.city ORDER BY temp DESC LIMIT 1) FROM cities;\n\nThis will do a nested loop index scan and it is the fastest way, except if \nyou have very few rows per city.\nThe syntax is ugly and you have to extract the stuff from the ROW() \nafterwards, though.\n\nUnfortunately, this does not work :\n\nSELECT c.city, (SELECT t.date, t.temp FROM cities_temp t WHERE \nt.city=c.city ORDER BY temp DESC LIMIT 1) AS m FROM cities;\n\nbecause the subselect isn't allowed to return more than 1 column.\n\nNote that you can also get the usually annoying top-N by category to use \nthe index by doing something like :\n\nSELECT c.city, (SELECT array_agg(date) FROM (SELECT t.date FROM \ncities_temp t WHERE t.city=c.city ORDER BY temp DESC LIMIT 5)) AS m FROM \ncities;\n\nThe results aren't in a very usable form either, but :\n\nCREATE INDEX ti ON annonces( type_id, price ) WHERE price IS NOT NULL;\n\nEXPLAIN ANALYZE SELECT\nt.id, (SELECT ROW(a.id, a.price, a.date_annonce)\n\tFROM annonces a\n\tWHERE a.type_id = t.id AND price IS NOT NULL\n\tORDER BY price DESC LIMIT 1)\n FROM types_bien t;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on types_bien t (cost=0.00..196.09 rows=57 width=4) (actual \ntime=0.025..0.511 rows=57 loops=1)\n SubPlan 1\n -> Limit (cost=0.00..3.41 rows=1 width=16) (actual \ntime=0.008..0.008 rows=1 loops=57)\n -> Index Scan Backward using ti on annonces a \n(cost=0.00..8845.65 rows=2592 width=16) (actual time=0.007..0.007 rows=1 \nloops=57)\n Index Cond: (type_id = $0)\n Total runtime: 0.551 ms\n\nexplain analyze\nselect distinct type_id, first_value(price) over w as max_price\n from annonces where price is not null\nwindow w as (partition by type_id order by price desc);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=30515.41..30626.87 rows=11146 width=10) (actual \ntime=320.927..320.971 rows=46 loops=1)\n -> WindowAgg (cost=27729.14..29958.16 rows=111451 width=10) (actual \ntime=195.289..282.150 rows=111289 loops=1)\n -> Sort (cost=27729.14..28007.76 rows=111451 width=10) (actual \ntime=195.278..210.762 rows=111289 loops=1)\n Sort Key: type_id, price\n Sort Method: quicksort Memory: 8289kB\n -> Seq Scan on annonces (cost=0.00..18386.17 rows=111451 \nwidth=10) (actual time=0.009..72.589 rows=111289 loops=1)\n Filter: (price IS NOT NULL)\n Total runtime: 322.382 ms\n\nHere using the index is 600x faster... 
worth a bit of ugly SQL, you decide.\n\nBy disabling seq_scan and bitmapscan, you can corecr this plan :\n\nEXPLAIN ANALYZE SELECT DISTINCT ON (type_id) type_id, date_annonce, price \n FROM annonces WHERE price IS NOT NULL ORDER BY type_id, price LIMIT 40;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..78757.61 rows=33 width=14) (actual time=0.021..145.509 \nrows=40 loops=1)\n -> Unique (cost=0.00..78757.61 rows=33 width=14) (actual \ntime=0.021..145.498 rows=40 loops=1)\n -> Index Scan using ti on annonces (cost=0.00..78478.99 \nrows=111451 width=14) (actual time=0.018..132.671 rows=110796 loops=1)\n Total runtime: 145.549 ms\n\nThis plan would be very bad (unless the whole table is in RAM) because I \nguess the index scan isn't aware of the DISTINCT ON, so it scans all rows \nin the index and in the table.\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 25 Feb 2010 09:35:27 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "Hi all,\n\n \n\nJust for your information, and this is not related to PG directly:\n\nTeradata provides a qualify syntax which works as a filtering condition on\na windowed function result. This is the only DB allowing this direct\nfiltering on windowed functions, from what I know.\n\nSo, as an example, the query you ask for becomes very easy on this database:\n\nselect \n\ncity, temp, date \n\nfrom bar \n\nqualify row_number() over (partition by city order by temp desc)=1\n\n \n\nThis is very practical indeed (you can mix it also with classical\nwhere/having/group by syntaxes).\n\nOn postgres, you may get the same result using an inner query (sorry, I\ncant test it for now) such as:\n\nselect \n\ncity, temp, date \n\nfrom\n\n(select city, temp, date, row_number() over (partition by city order by temp\ndesc) as nr \n\nfrom bar ) a1\n\nwhere nr=1\n\n \n\nJulien Theulier\n\n \n\nDe : [email protected]\n[mailto:[email protected]] De la part de Mose\nEnvoyé : mercredi 24 février 2010 22:50\nÀ : Dave Crooke\nCc : pgsql-performance\nObjet : Re: [PERFORM] Extracting superlatives - SQL design philosophy\n\n \n\nCan you try using window functions?\n\n \n\nSomething like this:\n\n \n\nselect distinct\n\n city,\n\n first_value(temp) over w as max_temp,\n\n first_value(date) over w as max_temp_date\n\n from\n\n cities\n\n window w as (partition by city order by temp desc)\n\n \n\n <http://www.postgresql.org/docs/current/static/tutorial-window.html>\nhttp://www.postgresql.org/docs/current/static/tutorial-window.html\n\n \n\n- Mose\n\n \n\nOn Wed, Feb 24, 2010 at 1:31 PM, Dave Crooke < <mailto:[email protected]>\[email protected]> wrote:\n\nThis is a generic SQL issue and not PG specific, but I'd like to get\nan opinion from this list.\n\nConsider the following data:\n\n# \\d bar\n Table \"public.bar\"\n Column | Type | Modifiers\n--------+-----------------------------+-----------\n city | character varying(255) |\n temp | integer |\n date | timestamp without time zone |\n\n# select * from bar order by city, date;\n city | temp | date\n-----------+------+---------------------\n Austin | 75 | 2010-02-21 15:00:00\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 56 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(5 rows)\n\nIf you want the highest recorded temperature for a city, that's easy\nto do, since the selection criteria works on the same column that we\nare extracing:\n\n# select city, max(temp) from bar group by city order by 1;\n city | max\n-----------+-----\n Austin | 75\n Edinburgh | 42\n New York | 78\n(3 rows)\n\n\nHowever there is (AFAIK) no simple way in plain SQL to write a query\nthat performs such an aggregation where the aggregation criteria is on\none column and you want to return another, e.g. 
adding the the *date\nof* that highest temperature to the output above, or doing a query to\nget the most recent temperature reading for each city.\n\nWhat I'd like to do is something like the below (and I'm inventing\nmock syntax here, the following is not valid SQL):\n\n-- Ugly implicit syntax but no worse than an Oracle outer join ;-)\nselect city, temp, date from bar where date=max(date) group by city,\ntemp order by city;\n\nor perhaps\n\n-- More explicit\nselect aggregate_using(max(date), city, temp, date) from bar group by\ncity, temp order by city;\n\nBoth of the above, if they existed, would be a single data access\nfollowed by and sort-merge.\n\nThe only way I know how to do it involves doing two accesses to the data,\ne.g.\n\n# select city, temp, date from bar a where date=(select max(b.date)\nfrom bar b where a.city=b.city) order by 1;\n city | temp | date\n-----------+------+---------------------\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(3 rows)\n\n\n# explain select * from bar a where date=(select max(b.date) from bar\nb where a.city=b.city) order by 1;\n QUERY PLAN\n--------------------------------------------------------------------------\n Sort (cost=1658.86..1658.87 rows=1 width=528)\n Sort Key: a.city\n -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n Filter: (date = (subplan))\n SubPlan\n -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n -> Seq Scan on bar b (cost=0.00..11.75 rows=1\nwidth=8) -- would be an index lookup in a real scenario\n Filter: (($0)::text = (city)::text)\n(8 rows)\n\n--\nSent via pgsql-performance mailing list (\n<mailto:[email protected]> [email protected])\nTo make changes to your subscription:\n <http://www.postgresql.org/mailpref/pgsql-performance>\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nJust for your information, and this is not related to PG\ndirectly:\nTeradata provides a “qualify” syntax which works as\na filtering condition on a windowed function result. 
This is the only DB\nallowing this direct filtering on windowed functions, from what I know.\nSo, as an example, the query you ask for becomes very easy on\nthis database:\nselect \ncity,\ntemp, date \nfrom bar \nqualify\nrow_number() over (partition by city order by temp desc)=1\n \nThis is very practical indeed (you can mix it also with\nclassical where/having/group by syntaxes).\nOn postgres, you may get the same result using an inner query\n(sorry, I can’t test it for now) such as:\nselect \ncity,\ntemp, date \nfrom\n(select\ncity, temp, date, row_number() over (partition by city order by temp desc) as\nnr \nfrom\nbar ) a1\nwhere nr=1\n \nJulien Theulier\n \n\nDe :\[email protected]\n[mailto:[email protected]] De la part de Mose\nEnvoyé : mercredi 24 février 2010 22:50\nÀ : Dave Crooke\nCc : pgsql-performance\nObjet : Re: [PERFORM] Extracting superlatives - SQL design\nphilosophy\n\n \nCan you try using window functions?\n\n \n\n\n\nSomething like this:\n\n\n \n\n\nselect distinct\n\n\n city,\n\n\n first_value(temp) over w as\nmax_temp,\n\n\n first_value(date) over w as\nmax_temp_date\n\n\n from\n\n\n cities\n\n\n window w as (partition by city\norder by temp desc)\n\n\n \n\n\nhttp://www.postgresql.org/docs/current/static/tutorial-window.html\n\n\n \n\n\n- Mose\n\n \n\nOn Wed, Feb 24, 2010 at 1:31 PM, Dave\nCrooke <[email protected]> wrote:\nThis is a generic SQL issue and not PG\nspecific, but I'd like to get\nan opinion from this list.\n\nConsider the following data:\n\n# \\d bar\n \nTable \"public.bar\"\n Column\n| \nType |\nModifiers\n--------+-----------------------------+-----------\n city | character varying(255) |\n temp |\ninteger \n|\n date | timestamp without time zone |\n\n# select * from bar order by city, date;\n city | temp | date\n-----------+------+---------------------\n Austin | 75 | 2010-02-21 15:00:00\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 56 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(5 rows)\n\nIf you want the highest recorded temperature for a city, that's easy\nto do, since the selection criteria works on the same column that we\nare extracing:\n\n# select city, max(temp) from bar group by city order by 1;\n city | max\n-----------+-----\n Austin | 75\n Edinburgh | 42\n New York | 78\n(3 rows)\n\n\nHowever there is (AFAIK) no simple way in plain SQL to write a query\nthat performs such an aggregation where the aggregation criteria is on\none column and you want to return another, e.g. 
adding the the *date\nof* that highest temperature to the output above, or doing a query to\nget the most recent temperature reading for each city.\n\nWhat I'd like to do is something like the below (and I'm inventing\nmock syntax here, the following is not valid SQL):\n\n-- Ugly implicit syntax but no worse than an Oracle outer join ;-)\nselect city, temp, date from bar where date=max(date) group by city,\ntemp order by city;\n\nor perhaps\n\n-- More explicit\nselect aggregate_using(max(date), city, temp, date) from bar group by\ncity, temp order by city;\n\nBoth of the above, if they existed, would be a single data access\nfollowed by and sort-merge.\n\nThe only way I know how to do it involves doing two accesses to the data, e.g.\n\n# select city, temp, date from bar a where date=(select max(b.date)\nfrom bar b where a.city=b.city) order by 1;\n city | temp | date\n-----------+------+---------------------\n Austin | 35 | 2010-02-23 15:00:00\n Edinburgh | 42 | 2010-02-23 15:00:00\n New York | 78 | 2010-06-23 15:00:00\n(3 rows)\n\n\n# explain select * from bar a where date=(select max(b.date) from bar\nb where a.city=b.city) order by 1;\n \n QUERY PLAN\n--------------------------------------------------------------------------\n Sort (cost=1658.86..1658.87 rows=1 width=528)\n Sort Key: a.city\n -> Seq Scan on bar a (cost=0.00..1658.85 rows=1\nwidth=528)\n Filter: (date = (subplan))\n SubPlan\n -> Aggregate\n (cost=11.76..11.77 rows=1 width=8)\n -> Seq Scan on\nbar b (cost=0.00..11.75 rows=1\nwidth=8) -- would be an index lookup in a real scenario\n \nFilter: (($0)::text = (city)::text)\n(8 rows)\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 25 Feb 2010 10:20:45 +0100",
"msg_from": "\"Julien Theulier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "\"Julien Theulier\" <[email protected]> writes:\n> Teradata provides a �qualify� syntax which works as a filtering condition on\n> a windowed function result. This is the only DB allowing this direct\n> filtering on windowed functions, from what I know.\n\nSeems like you could easily translate that into SQL-standard syntax by\nadding a level of sub-select:\n\n\tselect ... from (select *, window_function wf from ...) ss\n\twhere wf=1;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 25 Feb 2010 09:42:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy "
},
{
"msg_contents": "On Wed, Feb 24, 2010 at 4:31 PM, Dave Crooke <[email protected]> wrote:\n> This is a generic SQL issue and not PG specific, but I'd like to get\n> an opinion from this list.\n>\n> Consider the following data:\n>\n> # \\d bar\n> Table \"public.bar\"\n> Column | Type | Modifiers\n> --------+-----------------------------+-----------\n> city | character varying(255) |\n> temp | integer |\n> date | timestamp without time zone |\n>\n> # select * from bar order by city, date;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 75 | 2010-02-21 15:00:00\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 56 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (5 rows)\n>\n> If you want the highest recorded temperature for a city, that's easy\n> to do, since the selection criteria works on the same column that we\n> are extracing:\n>\n> # select city, max(temp) from bar group by city order by 1;\n> city | max\n> -----------+-----\n> Austin | 75\n> Edinburgh | 42\n> New York | 78\n> (3 rows)\n>\n>\n> However there is (AFAIK) no simple way in plain SQL to write a query\n> that performs such an aggregation where the aggregation criteria is on\n> one column and you want to return another, e.g. adding the the *date\n> of* that highest temperature to the output above, or doing a query to\n> get the most recent temperature reading for each city.\n>\n> What I'd like to do is something like the below (and I'm inventing\n> mock syntax here, the following is not valid SQL):\n>\n> -- Ugly implicit syntax but no worse than an Oracle outer join ;-)\n> select city, temp, date from bar where date=max(date) group by city,\n> temp order by city;\n>\n> or perhaps\n>\n> -- More explicit\n> select aggregate_using(max(date), city, temp, date) from bar group by\n> city, temp order by city;\n>\n> Both of the above, if they existed, would be a single data access\n> followed by and sort-merge.\n>\n> The only way I know how to do it involves doing two accesses to the data, e.g.\n>\n> # select city, temp, date from bar a where date=(select max(b.date)\n> from bar b where a.city=b.city) order by 1;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (3 rows)\n>\n>\n> # explain select * from bar a where date=(select max(b.date) from bar\n> b where a.city=b.city) order by 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Sort (cost=1658.86..1658.87 rows=1 width=528)\n> Sort Key: a.city\n> -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n> Filter: (date = (subplan))\n> SubPlan\n> -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n> -> Seq Scan on bar b (cost=0.00..11.75 rows=1\n> width=8) -- would be an index lookup in a real scenario\n> Filter: (($0)::text = (city)::text)\n\nAnother cool way to do this is via a custom aggregate:\ncreate function maxdata(data, data) returns data as\n$$\n select case when ($1).date > ($2).date then $1 else $2 end;\n$$ language sql;\n\ncreate aggregate maxdata(data)\n(\n sfunc=maxdata,\n stype=data\n);\n\nselect (d).* from\n(\n select maxdata(data) as d from data group by city\n);\n\nIt does it in a single pass. Where this approach can pay dividends is\nwhen you have a very complicated 'max'-ing criteria to justify the\nverbosity of creating the aggregate. 
If you are not doing the whole\ntable, the self join is often faster. I'm surprised custom aggregates\naren't used more...they seem very clean and neat to me.\n\nmerlin\n",
"msg_date": "Tue, 9 Mar 2010 07:46:27 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
},
{
"msg_contents": "Cool trick .... I didn't realise you could do this at the SQL level without\na custom max() written in C.\n\n\nWhat I ended up doing for my app is just going with straight SQL that\ngenerates the \"key\" tuples with a SELECT DISTINCT, and then has a dependent\nsubquery that does a very small index scan to pull the data for each row (I\ncare somewhat about portability). In order to make this perform, I created a\nsecond index on the raw data table that has the columns tupled in the order\nI need for this rollup, which allows PG to do a fairly efficient index range\nscan.\n\nI had been trying to avoid using the disk space to carry this 2nd index,\nsince it is only needed for the bulk rollup, and I then reliased I only have\nto keep it on the current day's partition, and I can drop it once that\npartition's data has been aggregated (the insert overhead of the index isn't\nas much of a concern).\n\nAlternatively, I could have lived without the index by sharding the raw data\nright down to the rollup intervals, which would mean that rollups are\neffective as a full table scan anyway, but I didn't want to do that as it\nwould make real-time data extration queries slower if they had to go across\n10-20 tables.\n\n\nThanks everyone for the insights\n\nCheers\nDave\n\nOn Tue, Mar 9, 2010 at 6:46 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Wed, Feb 24, 2010 at 4:31 PM, Dave Crooke <[email protected]> wrote:\n> > This is a generic SQL issue and not PG specific, but I'd like to get\n> > an opinion from this list.\n> >\n> > Consider the following data:\n> >\n> > # \\d bar\n> > Table \"public.bar\"\n> > Column | Type | Modifiers\n> > --------+-----------------------------+-----------\n> > city | character varying(255) |\n> > temp | integer |\n> > date | timestamp without time zone |\n> >\n> > # select * from bar order by city, date;\n> > city | temp | date\n> > -----------+------+---------------------\n> > Austin | 75 | 2010-02-21 15:00:00\n> > Austin | 35 | 2010-02-23 15:00:00\n> > Edinburgh | 42 | 2010-02-23 15:00:00\n> > New York | 56 | 2010-02-23 15:00:00\n> > New York | 78 | 2010-06-23 15:00:00\n> > (5 rows)\n> >\n> > If you want the highest recorded temperature for a city, that's easy\n> > to do, since the selection criteria works on the same column that we\n> > are extracing:\n> >\n> > # select city, max(temp) from bar group by city order by 1;\n> > city | max\n> > -----------+-----\n> > Austin | 75\n> > Edinburgh | 42\n> > New York | 78\n> > (3 rows)\n> >\n> >\n> > However there is (AFAIK) no simple way in plain SQL to write a query\n> > that performs such an aggregation where the aggregation criteria is on\n> > one column and you want to return another, e.g. 
adding the the *date\n> > of* that highest temperature to the output above, or doing a query to\n> > get the most recent temperature reading for each city.\n> >\n> > What I'd like to do is something like the below (and I'm inventing\n> > mock syntax here, the following is not valid SQL):\n> >\n> > -- Ugly implicit syntax but no worse than an Oracle outer join ;-)\n> > select city, temp, date from bar where date=max(date) group by city,\n> > temp order by city;\n> >\n> > or perhaps\n> >\n> > -- More explicit\n> > select aggregate_using(max(date), city, temp, date) from bar group by\n> > city, temp order by city;\n> >\n> > Both of the above, if they existed, would be a single data access\n> > followed by and sort-merge.\n> >\n> > The only way I know how to do it involves doing two accesses to the data,\n> e.g.\n> >\n> > # select city, temp, date from bar a where date=(select max(b.date)\n> > from bar b where a.city=b.city) order by 1;\n> > city | temp | date\n> > -----------+------+---------------------\n> > Austin | 35 | 2010-02-23 15:00:00\n> > Edinburgh | 42 | 2010-02-23 15:00:00\n> > New York | 78 | 2010-06-23 15:00:00\n> > (3 rows)\n> >\n> >\n> > # explain select * from bar a where date=(select max(b.date) from bar\n> > b where a.city=b.city) order by 1;\n> > QUERY PLAN\n> >\n> --------------------------------------------------------------------------\n> > Sort (cost=1658.86..1658.87 rows=1 width=528)\n> > Sort Key: a.city\n> > -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n> > Filter: (date = (subplan))\n> > SubPlan\n> > -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n> > -> Seq Scan on bar b (cost=0.00..11.75 rows=1\n> > width=8) -- would be an index lookup in a real scenario\n> > Filter: (($0)::text = (city)::text)\n>\n> Another cool way to do this is via a custom aggregate:\n> create function maxdata(data, data) returns data as\n> $$\n> select case when ($1).date > ($2).date then $1 else $2 end;\n> $$ language sql;\n>\n> create aggregate maxdata(data)\n> (\n> sfunc=maxdata,\n> stype=data\n> );\n>\n> select (d).* from\n> (\n> select maxdata(data) as d from data group by city\n> );\n>\n> It does it in a single pass. Where this approach can pay dividends is\n> when you have a very complicated 'max'-ing criteria to justify the\n> verbosity of creating the aggregate. If you are not doing the whole\n> table, the self join is often faster. I'm surprised custom aggregates\n> aren't used more...they seem very clean and neat to me.\n>\n> merlin\n>\n\nCool trick .... I didn't realise you could do this at the SQL level without a custom max() written in C.What I ended up doing for my app is just going with straight SQL that generates the \"key\" tuples with a SELECT DISTINCT, and then has a dependent subquery that does a very small index scan to pull the data for each row (I care somewhat about portability). 
In order to make this perform, I created a second index on the raw data table that has the columns tupled in the order I need for this rollup, which allows PG to do a fairly efficient index range scan.\nI had been trying to avoid using the disk space to carry this 2nd index, since it is only needed for the bulk rollup, and I then reliased I only have to keep it on the current day's partition, and I can drop it once that partition's data has been aggregated (the insert overhead of the index isn't as much of a concern).\nAlternatively, I could have lived without the index by sharding the raw data right down to the rollup intervals, which would mean that rollups are effective as a full table scan anyway, but I didn't want to do that as it would make real-time data extration queries slower if they had to go across 10-20 tables.\nThanks everyone for the insightsCheersDaveOn Tue, Mar 9, 2010 at 6:46 AM, Merlin Moncure <[email protected]> wrote:\nOn Wed, Feb 24, 2010 at 4:31 PM, Dave Crooke <[email protected]> wrote:\n> This is a generic SQL issue and not PG specific, but I'd like to get\n> an opinion from this list.\n>\n> Consider the following data:\n>\n> # \\d bar\n> Table \"public.bar\"\n> Column | Type | Modifiers\n> --------+-----------------------------+-----------\n> city | character varying(255) |\n> temp | integer |\n> date | timestamp without time zone |\n>\n> # select * from bar order by city, date;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 75 | 2010-02-21 15:00:00\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 56 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (5 rows)\n>\n> If you want the highest recorded temperature for a city, that's easy\n> to do, since the selection criteria works on the same column that we\n> are extracing:\n>\n> # select city, max(temp) from bar group by city order by 1;\n> city | max\n> -----------+-----\n> Austin | 75\n> Edinburgh | 42\n> New York | 78\n> (3 rows)\n>\n>\n> However there is (AFAIK) no simple way in plain SQL to write a query\n> that performs such an aggregation where the aggregation criteria is on\n> one column and you want to return another, e.g. 
adding the the *date\n> of* that highest temperature to the output above, or doing a query to\n> get the most recent temperature reading for each city.\n>\n> What I'd like to do is something like the below (and I'm inventing\n> mock syntax here, the following is not valid SQL):\n>\n> -- Ugly implicit syntax but no worse than an Oracle outer join ;-)\n> select city, temp, date from bar where date=max(date) group by city,\n> temp order by city;\n>\n> or perhaps\n>\n> -- More explicit\n> select aggregate_using(max(date), city, temp, date) from bar group by\n> city, temp order by city;\n>\n> Both of the above, if they existed, would be a single data access\n> followed by and sort-merge.\n>\n> The only way I know how to do it involves doing two accesses to the data, e.g.\n>\n> # select city, temp, date from bar a where date=(select max(b.date)\n> from bar b where a.city=b.city) order by 1;\n> city | temp | date\n> -----------+------+---------------------\n> Austin | 35 | 2010-02-23 15:00:00\n> Edinburgh | 42 | 2010-02-23 15:00:00\n> New York | 78 | 2010-06-23 15:00:00\n> (3 rows)\n>\n>\n> # explain select * from bar a where date=(select max(b.date) from bar\n> b where a.city=b.city) order by 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Sort (cost=1658.86..1658.87 rows=1 width=528)\n> Sort Key: a.city\n> -> Seq Scan on bar a (cost=0.00..1658.85 rows=1 width=528)\n> Filter: (date = (subplan))\n> SubPlan\n> -> Aggregate (cost=11.76..11.77 rows=1 width=8)\n> -> Seq Scan on bar b (cost=0.00..11.75 rows=1\n> width=8) -- would be an index lookup in a real scenario\n> Filter: (($0)::text = (city)::text)\n\nAnother cool way to do this is via a custom aggregate:\ncreate function maxdata(data, data) returns data as\n$$\n select case when ($1).date > ($2).date then $1 else $2 end;\n$$ language sql;\n\ncreate aggregate maxdata(data)\n(\n sfunc=maxdata,\n stype=data\n);\n\nselect (d).* from\n(\n select maxdata(data) as d from data group by city\n);\n\nIt does it in a single pass. Where this approach can pay dividends is\nwhen you have a very complicated 'max'-ing criteria to justify the\nverbosity of creating the aggregate. If you are not doing the whole\ntable, the self join is often faster. I'm surprised custom aggregates\naren't used more...they seem very clean and neat to me.\n\nmerlin",
"msg_date": "Tue, 9 Mar 2010 10:13:30 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extracting superlatives - SQL design philosophy"
}
] |
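The thread above converges on two workable formulations of the "row carrying the per-group maximum" query. The sketch below simply restates them against the toy bar(city, temp, date) table from the thread; it assumes PostgreSQL 8.4+ for the window-function form and is illustrative rather than anything benchmarked in the thread.

-- 1. DISTINCT ON (PostgreSQL-specific, one sort), as Garrett suggested;
--    the extra ORDER BY key makes ties break deterministically:
SELECT DISTINCT ON (city) city, temp, date
FROM bar
ORDER BY city, temp DESC, date;

-- 2. Windowed row_number() in a subquery (standard SQL, 8.4+), the
--    QUALIFY-style rewrite Tom Lane and Julien describe:
SELECT city, temp, date
FROM (SELECT city, temp, date,
             row_number() OVER (PARTITION BY city ORDER BY temp DESC) AS rn
      FROM bar) s
WHERE rn = 1;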
[
{
"msg_contents": "Hi,\n\nI have a couple of questions about dbt2 performance.\n\n1. I tested dbt2+postgresql 8.4.2 on my server, but the NOTPM is around only\n320~390 with 10 connections and 30 warehouses. Increasing the number of\nconnections did not improve the throughput? The NOPTM number does not seem\nvery high to me. Should I try more configurations to see if it can be\nimproved? Are there any numbers I can compare with (NOPTM and response\ntime)?\n\n2. Moreover, the disk utilization was high and the \"await\" time from iostat\nis around 500 ms. Could disk I/O limit the overall throughput? The server\nhas 2 SATA disks, one for system and postgresql and the other is dedicated\nto logging (pg_xlog). As far as I understand, modern database systems should\nbe CPU-bound rather than I/O-bound, is it because I did not perform adequate\nperformance tuning?\n\n3. From \"vmstat\", the cpus spent around 72% of time idle, 25% waiting for\nI/O, and only 2~3% left doing real work. I was surprised that the cpu\nutilization was so low. Is that normal or could it be due to\nmisconfiguration? In my opinion, even if disk I/O may have been stressed,\n70% of idle time was still too high.\n\n\nBelow are some specs/configurations that I used. Any suggestion is welcome.\nThanks!\n\nserver spec:\n4 cores (2*Dual-Core AMD Opteron, 800MHz), 12GB ram\n2 SATA disks, one for system and postgresql and the other is dedicated to\nlogging (pg_xlog)\n\npostgres configuration:\n30 warehouses\n256MB shared_buffer\n768MB effective_cache_size\ncheckpoint_timeout 1hr (All my tests are within 10 minutes interval, so\ncheckpointing should not interfere the performance)\nI turned off fsync to see whether the performance could be improved.\n\nYu-Ju\n\nHi,I have a couple of questions about dbt2 performance.1.\nI tested dbt2+postgresql 8.4.2 on my server, but the NOTPM is around\nonly 320~390 with 10 connections and 30 warehouses. Increasing the\nnumber of connections did not improve the throughput? The NOPTM number\ndoes not seem very high to me. Should I try more configurations to see\nif it can be improved? Are there any numbers I can compare with (NOPTM\nand response time)?\n2. Moreover, the disk utilization was high and the \"await\" time\nfrom iostat is around 500 ms. Could disk I/O limit the overall\nthroughput? The server has 2 SATA disks, one for system and postgresql\nand the other is dedicated to logging (pg_xlog). As far as I\nunderstand, modern database systems should be CPU-bound rather than\nI/O-bound, is it because I did not perform adequate performance tuning?\n\n3. From \"vmstat\", the cpus spent around 72% of time idle, 25%\nwaiting for I/O, and only 2~3% left doing real work. I was surprised\nthat the cpu utilization was so low. Is that normal or could it be due\nto misconfiguration? In my opinion, even if disk I/O may have been\nstressed, 70% of idle time was still too high.\nBelow are some specs/configurations that I used. Any suggestion is welcome. Thanks!server spec:4 cores (2*Dual-Core AMD Opteron, 800MHz), 12GB ram2 SATA disks, one for system and postgresql and the other is dedicated to logging (pg_xlog)\npostgres configuration:30 warehouses256MB shared_buffer768MB effective_cache_sizecheckpoint_timeout 1hr (All my tests are within 10 minutes interval, so checkpointing should not interfere the performance)\n\nI turned off fsync to see whether the performance could be improved.Yu-Ju",
"msg_date": "Thu, 25 Feb 2010 16:23:53 -0500",
"msg_from": "Yu-Ju Hong <[email protected]>",
"msg_from_op": true,
"msg_subject": "dbt2 performance"
},
{
"msg_contents": "Yu-Ju Hong wrote:\n> 2. Moreover, the disk utilization was high and the \"await\" time from \n> iostat is around 500 ms. Could disk I/O limit the overall throughput? \n> The server has 2 SATA disks, one for system and postgresql and the \n> other is dedicated to logging (pg_xlog). As far as I understand, \n> modern database systems should be CPU-bound rather than I/O-bound, is \n> it because I did not perform adequate performance tuning?\n\ndbt2 is almost exclusively disk I/O bound once the data set gets big \nenough. There are some applications where most of the data fits in RAM \nand therefore CPU performance is the limiter. dbt2 is exactly the \nopposite of such an application though, and the idea that \"modern \ndatabase systems should be CPU bound\" is not really true at all. That's \nonly the case if the data you're operating on fits in RAM. Otherwise, \ndatabases are just as I/O bound as they've always been. Main thing \nthat's changed is there's a lot more RAM in systems nowadays.\n\nBy the way: a large increase in checkpoint_segments is the first thing \nyou should do. If you check the database logs, they're probably filled \nwith complaints about it being too low. 32 would be a useful starting \nvalue, going much higher for a test that's only 10 minutes long is \nprobably cheating.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 25 Feb 2010 17:48:10 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dbt2 performance"
},
{
"msg_contents": "Thanks for the reply.\n\nOn Thu, Feb 25, 2010 at 5:48 PM, Greg Smith <[email protected]> wrote:\n\n> Yu-Ju Hong wrote:\n>\n>> 2. Moreover, the disk utilization was high and the \"await\" time from\n>> iostat is around 500 ms. Could disk I/O limit the overall throughput? The\n>> server has 2 SATA disks, one for system and postgresql and the other is\n>> dedicated to logging (pg_xlog). As far as I understand, modern database\n>> systems should be CPU-bound rather than I/O-bound, is it because I did not\n>> perform adequate performance tuning?\n>>\n>\n> dbt2 is almost exclusively disk I/O bound once the data set gets big\n> enough. There are some applications where most of the data fits in RAM and\n> therefore CPU performance is the limiter. dbt2 is exactly the opposite of\n> such an application though, and the idea that \"modern database systems\n> should be CPU bound\" is not really true at all. That's only the case if the\n> data you're operating on fits in RAM. Otherwise, databases are just as I/O\n> bound as they've always been. Main thing that's changed is there's a lot\n> more RAM in systems nowadays.\n>\n\nIn my test, there was almost no disk reads (mostly disk writes), so I\nassumed the size of the database didn't cause the performance bottleneck.\nMaybe I was wrong. If so, should I increase shared_buffer?\n\nAssuming that dbt2 was limited by disk I/O in my experiments, do you think\nthe numbers I got with my server configuration are reasonable?\n\nAlso, would you mind giving some examples where the applications are CPU\nbound? That could be useful information to me.\n\n>\n> By the way: a large increase in checkpoint_segments is the first thing you\n> should do. If you check the database logs, they're probably filled with\n> complaints about it being too low. 32 would be a useful starting value,\n> going much higher for a test that's only 10 minutes long is probably\n> cheating.\n>\n>\nI increased the checkpoint_segments to 10 when I ran the tests. I'll\ncertainly increase it to 32 and give it a try.\n\n\n\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us <http://www.2ndquadrant.us/>\n>\n>\nThanks,\nYu-Ju\n\nThanks for the reply.On Thu, Feb 25, 2010 at 5:48 PM, Greg Smith <[email protected]> wrote:\n\nYu-Ju Hong wrote:\n\n2. Moreover, the disk utilization was high and the \"await\" time from\niostat is around 500 ms. Could disk I/O limit the overall throughput?\nThe server has 2 SATA disks, one for system and postgresql and the\nother is dedicated to logging (pg_xlog). As far as I understand, modern\ndatabase systems should be CPU-bound rather than I/O-bound, is it\nbecause I did not perform adequate performance tuning?\n\n\ndbt2 is almost exclusively disk I/O bound once the data set gets big\nenough. There are some applications where most of the data fits in RAM\nand therefore CPU performance is the limiter. dbt2 is exactly the\nopposite of such an application though, and the idea that \"modern\ndatabase systems should be CPU bound\" is not really true at all.\n That's only the case if the data you're operating on fits in RAM.\n Otherwise, databases are just as I/O bound as they've always been.\n Main thing that's changed is there's a lot more RAM in systems\nnowadays.\nIn my test, there was almost no disk reads\n(mostly disk writes), so I assumed the size of the database didn't\ncause the performance bottleneck. Maybe I was wrong. 
If so, should I\nincrease shared_buffer?\nAssuming that dbt2 was limited by disk I/O in my experiments, do\nyou think the numbers I got with my server configuration are reasonable?Also, would you mind giving some examples where the applications are CPU bound? That could be useful information to me. \n\n\nBy the way: a large increase in checkpoint_segments is the first thing\nyou should do. If you check the database logs, they're probably filled\nwith complaints about it being too low. 32 would be a useful starting\nvalue, going much higher for a test that's only 10 minutes long is\nprobably cheating.\n\nI increased the checkpoint_segments to 10 when I ran the tests. I'll certainly increase it to 32 and give it a try. \n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\nThanks,Yu-Ju",
"msg_date": "Thu, 25 Feb 2010 18:29:42 -0500",
"msg_from": "Yu-Ju Hong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dbt2 performance"
},
{
"msg_contents": "On Thu, Feb 25, 2010 at 6:29 PM, Yu-Ju Hong <[email protected]> wrote:\n> Thanks for the reply.\n>\n> On Thu, Feb 25, 2010 at 5:48 PM, Greg Smith <[email protected]> wrote:\n>>\n>> Yu-Ju Hong wrote:\n>>>\n>>> 2. Moreover, the disk utilization was high and the \"await\" time from\n>>> iostat is around 500 ms. Could disk I/O limit the overall throughput? The\n>>> server has 2 SATA disks, one for system and postgresql and the other is\n>>> dedicated to logging (pg_xlog). As far as I understand, modern database\n>>> systems should be CPU-bound rather than I/O-bound, is it because I did not\n>>> perform adequate performance tuning?\n>>\n>> dbt2 is almost exclusively disk I/O bound once the data set gets big\n>> enough. There are some applications where most of the data fits in RAM and\n>> therefore CPU performance is the limiter. dbt2 is exactly the opposite of\n>> such an application though, and the idea that \"modern database systems\n>> should be CPU bound\" is not really true at all. That's only the case if the\n>> data you're operating on fits in RAM. Otherwise, databases are just as I/O\n>> bound as they've always been. Main thing that's changed is there's a lot\n>> more RAM in systems nowadays.\n>\n> In my test, there was almost no disk reads (mostly disk writes), so I\n> assumed the size of the database didn't cause the performance bottleneck.\n> Maybe I was wrong. If so, should I increase shared_buffer?\n\nWell if you're writing a lot of stuff to disk you could easily be I/O limited.\n\n> Assuming that dbt2 was limited by disk I/O in my experiments, do you think\n> the numbers I got with my server configuration are reasonable?\n\nSince you've provided no details on your hardware configuration I'm\nnot sure how anyone could express an educated opinion on this\n(personally I wouldn't know anyway, but others here would).\n\n> Also, would you mind giving some examples where the applications are CPU\n> bound? That could be useful information to me.\n\nYou'll typically be CPU bound when you're not I/O bound - i.e. when\nthe data that your accessing is small enough to fit in memory.\n\n...Robert\n",
"msg_date": "Wed, 3 Mar 2010 12:09:38 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dbt2 performance"
}
] |
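Yu-Ju reports almost no disk reads during the DBT-2 run; one way to sanity-check that from the database side is sketched below. This is only a hedged illustration: it counts misses of shared_buffers (not of the OS cache), it relies on the long-standing pg_stat_database columns, and the 'dbt2' database name is a placeholder for whatever the kit actually created.

SELECT datname,
       blks_read,
       blks_hit,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE datname = 'dbt2';   -- placeholder name for the DBT-2 database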
[
{
"msg_contents": "Okay ladies and gents and the rest of you :)\n\nIt's time I dig into another issue, and that's a curious 5 second\ndelay on connect, on occasion. Now, I believe the network to be sound\nand there are zero errors on any servers, no retrans, no runts, nada\nnada nada. However I will continue to run tests with/without dns,\ntcpdumps, tracking communications, handshakes, however.\n\nI've been doing some reading and what did I find. I found \"Checkpoints\nare very disrupting to your database performance and can cause\nconnections to stall for up to a few seconds while they occur.\"\n\nNow I'm reaching here, but wondered.\n\nMy issue is what appears to be a random 5 second connection delay.\n\nConnection times over a 24 hour period from my application servers,\nwhich connect to my query servers (postgres 8.3.4, slony 1.2.15), big\n8 way boxes with 16-32gb ram\n\nConn time in seconds and count. (50 kind of a round even number eh!)\n '1.0-2.0' => 4,\n '5.0-' => 50,\n '0.5-1.0' => 6,\n '0.0-0.5' => 155,632\n\nSo the 5 second makes us instantly think network but I want to hit\nthis from multiple angles, so figured I would reach out to the brain\ntrust here for some ideas.\n\nHere is what may be interesting and may point me into a direction of\nfurther tuning and adjusting.\n\npostgres=# select * from pg_stat_bgwriter;\n checkpoints_timed | checkpoints_req | buffers_checkpoint |\nbuffers_clean | maxwritten_clean | buffers_backend | buffers_alloc\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n 34820 | 207 | 444406118 |\n214634102 | 1274873 | 333375850 | 3370607455\n(1 row)\n\nNow I'll be honest, I have nothing special in my configs for\nbg_writer, in fact other than enabling log_checkpoints today, I have\nnothing but defaults for bg_*. but the fact that checkpoints_timed is\nso high, seams to maybe point to some required tuning?\n\nThings of interest.\n\nMemory 24gb\n8 way processor\npostgres 8.3.4\n\nshared_buffers = 1500MB # min 128kB or max_connections*16kB\nmax_prepared_transactions = 0 # can be 0 or more\nwork_mem = 100MB # min 64kB\nmaintenance_work_mem = 128MB # min 1MB\nmax_fsm_pages = 500000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 225 # min 100, ~70 bytes each\nfsync = off # turns forced\nsynchronization on or off\ncheckpoint_segments = 100 # in logfile segments, min 1, 16MB each\ncheckpoint_warning = 3600s # 0 is off\n\nSo nothing exciting in my configs (300 connections)..\n\nSo my guess is \"IF\" a really big if, the delay is actually due to\ncheckpoints (perfect lil storm, timing) than there should be ways for\nme to tune this ya?\n\nThanks \"again\" for your assistance\n\nTory\n",
"msg_date": "Thu, 25 Feb 2010 22:12:25 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "bgwriter, checkpoints, curious (seeing delays)"
},
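A minimal SQL sketch, not taken from the thread itself, of how the pg_stat_bgwriter numbers quoted above can be read: compare timed against requested checkpoints and see where buffer writes are coming from (column names as in 8.3/8.4).

SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_timed
             / nullif(checkpoints_timed + checkpoints_req, 0), 1) AS pct_timed,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend
FROM pg_stat_bgwriter;

A high share of timed checkpoints (34820 timed vs. 207 requested here) means checkpoints are being triggered by checkpoint_timeout rather than by filling checkpoint_segments, which is generally the calmer of the two cases rather than a sign of trouble by itself.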
{
"msg_contents": "On Thu, Feb 25, 2010 at 10:12 PM, Tory M Blue <[email protected]> wrote:\n> Okay ladies and gents and the rest of you :)\n>\n> It's time I dig into another issue, and that's a curious 5 second\n> delay on connect, on occasion. Now, I believe the network to be sound\n> and there are zero errors on any servers, no retrans, no runts, nada\n> nada nada. However I will continue to run tests with/without dns,\n> tcpdumps, tracking communications, handshakes, however.\n>\n> I've been doing some reading and what did I find. I found \"Checkpoints\n> are very disrupting to your database performance and can cause\n> connections to stall for up to a few seconds while they occur.\"\n\nQuick added note, sorry.\n\n\"checkpoint_completion_target (floating point)\n\n Specifies the target length of checkpoints, as a fraction of the\ncheckpoint interval. The default is 0.5. This parameter can only be\nset in the postgresql.conf file or on the server command line. \"\n\ninteresting that it's a .5 second default setting and I'm seeing\nexactly that .5 second delay.\n\nAgain could be reaching here!\n\nThanks for putting up with me\n\nTory\n",
"msg_date": "Thu, 25 Feb 2010 22:20:38 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "Friday, February 26, 2010, 7:20:38 AM you wrote:\n\n> \"checkpoint_completion_target (floating point)\n\n> interesting that it's a .5 second default setting and I'm seeing\n> exactly that .5 second delay.\n \nIt's not an exact time, but a multiplier to 'checkpoint_timeout'. So a\nsetting of .5 with a timeout of 300 seconds means a checkpoint should be\ncompleted after 300*0.5 = 150 seconds.\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Fri, 26 Feb 2010 07:29:22 +0100",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
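A small illustration of the multiplication Jochen describes, computed from pg_settings instead of by hand (a sketch, assuming checkpoint_timeout is reported in seconds, as it is by default):

SELECT t.setting::int                    AS checkpoint_timeout_s,
       c.setting::float                  AS completion_target,
       t.setting::int * c.setting::float AS target_write_window_s
FROM pg_settings t, pg_settings c
WHERE t.name = 'checkpoint_timeout'
  AND c.name = 'checkpoint_completion_target';

With the defaults discussed in this thread (300 s and 0.5) that works out to 150 seconds, which matches the write= times in the checkpoint log lines posted later in the thread.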
{
"msg_contents": "On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:\n> shared_buffers = 1500MB \n\nSome people tend to increase this to 2.2GB(32-bit) or 4-6 GB (64 bit),\nif needed. Please note that more shared_buffers will lead to more\npressure on bgwriter, but it also has lots of benefits, too.\n\n> work_mem = 100MB\n\nThis is too much. Since you have 300 connections, you will probably swap\nbecause of this setting, since each connection may use this much\nwork_mem. The rule of the thumb is to set this to a lower general value\n(say, 1-2 MB), and set it per-query when needed.\n\n> checkpoint_segments = 100\n> checkpoint_warning = 3600s\n\nWhat about checkpoint_timeout? Please note that even if\ncheckpoint_segments = 100, if timeout value is low (say 5 mins),\nPostgreSQL will probably checkpoint every checkpoint_timeout minutes\n(unless PostgreSQL creates $checkpoint_segments xlogs before\ncheckpoint_timeout value). Depending on your workload, this may not be\nintended, and it may cause spikes -- which will lead to the issues you\ncomplain.\n\nI'll stop here, and suggest you read this:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nfor details about this subject. As noted there, if you are running 8.3+,\npg_stat_bgwriter will help you to tune checkpoint & bgwriter settings.\n\n-HTH.\n \n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz",
"msg_date": "Fri, 26 Feb 2010 08:42:21 +0200",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "2010/2/25 Devrim GÜNDÜZ <[email protected]>:\n> On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:\n>> shared_buffers = 1500MB\n>\n> Some people tend to increase this to 2.2GB(32-bit) or 4-6 GB (64 bit),\n> if needed. Please note that more shared_buffers will lead to more\n> pressure on bgwriter, but it also has lots of benefits, too.\n>\n>> work_mem = 100MB\n>\n> This is too much. Since you have 300 connections, you will probably swap\n> because of this setting, since each connection may use this much\n> work_mem. The rule of the thumb is to set this to a lower general value\n> (say, 1-2 MB), and set it per-query when needed.\n>\n>> checkpoint_segments = 100\n>> checkpoint_warning = 3600s\n>\n> What about checkpoint_timeout? Please note that even if\n> checkpoint_segments = 100, if timeout value is low (say 5 mins),\n> PostgreSQL will probably checkpoint every checkpoint_timeout minutes\n> (unless PostgreSQL creates $checkpoint_segments xlogs before\n> checkpoint_timeout value). Depending on your workload, this may not be\n> intended, and it may cause spikes -- which will lead to the issues you\n> complain.\n>\n> I'll stop here, and suggest you read this:\n>\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n>\n> for details about this subject. As noted there, if you are running 8.3+,\n> pg_stat_bgwriter will help you to tune checkpoint & bgwriter settings.\n>\n> -HTH.\n\nCheckpoint_timeout is the default and that looks like 5 mins (300\nseconds). And is obviously why I have such a discrepancy between time\nreached and requested.\n\nThank you sir, that's actually the page that I've spent much of my\ntime on this eve :) I'll continue to read and check my configuration\nsettings.\n\nTory\n",
"msg_date": "Thu, 25 Feb 2010 23:01:36 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "2010/2/25 Tory M Blue <[email protected]>:\n> 2010/2/25 Devrim GÜNDÜZ <[email protected]>:\n>> On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:\n>>> shared_buffers = 1500MB\n>>\n>> Some people tend to increase this to 2.2GB(32-bit) or 4-6 GB (64 bit),\n>> if needed. Please note that more shared_buffers will lead to more\n>> pressure on bgwriter, but it also has lots of benefits, too.\n>>\n>>> work_mem = 100MB\n>>\n>> This is too much. Since you have 300 connections, you will probably swap\n>> because of this setting, since each connection may use this much\n>> work_mem. The rule of the thumb is to set this to a lower general value\n>> (say, 1-2 MB), and set it per-query when needed.\n>>\n>>> checkpoint_segments = 100\n>>> checkpoint_warning = 3600s\n>>\n>> What about checkpoint_timeout? Please note that even if\n>> checkpoint_segments = 100, if timeout value is low (say 5 mins),\n>> PostgreSQL will probably checkpoint every checkpoint_timeout minutes\n>> (unless PostgreSQL creates $checkpoint_segments xlogs before\n>> checkpoint_timeout value). Depending on your workload, this may not be\n>> intended, and it may cause spikes -- which will lead to the issues you\n>> complain.\n>>\n>> I'll stop here, and suggest you read this:\n>>\n>> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n>>\n>> for details about this subject. As noted there, if you are running 8.3+,\n>> pg_stat_bgwriter will help you to tune checkpoint & bgwriter settings.\n>>\n>> -HTH.\n>\n> Checkpoint_timeout is the default and that looks like 5 mins (300\n> seconds). And is obviously why I have such a discrepancy between time\n> reached and requested.\n>\n> Thank you sir, that's actually the page that I've spent much of my\n> time on this eve :) I'll continue to read and check my configuration\n> settings.\n>\n> Tory\nAlso since I set the log on today I have some log information\nregarding the checkpoints\n2010-02-25 22:08:13 PST LOG: checkpoint starting: time\n2010-02-25 22:10:41 PST LOG: checkpoint complete: wrote 44503\nbuffers (23.2%); 0 transaction log file(s) added, 0 removed, 20\nrecycled; write=148.539 s, sync=0.000 s, total=148.540 s\n2010-02-25 22:13:13 PST LOG: checkpoint starting: time\n2010-02-25 22:15:37 PST LOG: checkpoint complete: wrote 38091\nbuffers (19.8%); 0 transaction log file(s) added, 0 removed, 20\nrecycled; write=144.713 s, sync=0.000 s, total=144.714 s\n2010-02-25 22:18:13 PST LOG: checkpoint starting: time\n2010-02-25 22:20:42 PST LOG: checkpoint complete: wrote 38613\nbuffers (20.1%); 0 transaction log file(s) added, 0 removed, 19\nrecycled; write=149.870 s, sync=0.000 s, total=149.871 s\n2010-02-25 22:23:13 PST LOG: checkpoint starting: time\n2010-02-25 22:25:42 PST LOG: checkpoint complete: wrote 39009\nbuffers (20.3%); 0 transaction log file(s) added, 0 removed, 19\nrecycled; write=149.876 s, sync=0.000 s, total=149.877 s\n2010-02-25 22:28:13 PST LOG: checkpoint starting: time\n2010-02-25 22:30:43 PST LOG: checkpoint complete: wrote 30847\nbuffers (16.1%); 0 transaction log file(s) added, 0 removed, 19\nrecycled; write=150.000 s, sync=0.000 s, total=150.001 s\n2010-02-25 22:33:13 PST LOG: checkpoint starting: time\n2010-02-25 22:35:43 PST LOG: checkpoint complete: wrote 11917\nbuffers (6.2%); 0 transaction log file(s) added, 0 removed, 14\nrecycled; write=150.064 s, sync=0.000 s, total=150.065 s\n2010-02-25 22:38:13 PST LOG: checkpoint starting: time\n2010-02-25 22:40:43 PST LOG: checkpoint complete: wrote 10869\nbuffers (5.7%); 0 transaction log file(s) 
added, 0 removed, 5\nrecycled; write=149.995 s, sync=0.000 s, total=149.996 s\n2010-02-25 22:43:13 PST LOG: checkpoint starting: time\n2010-02-25 22:45:41 PST LOG: checkpoint complete: wrote 31424\nbuffers (16.4%); 0 transaction log file(s) added, 0 removed, 4\nrecycled; write=148.597 s, sync=0.000 s, total=148.598 s\n2010-02-25 22:48:13 PST LOG: checkpoint starting: time\n2010-02-25 22:50:42 PST LOG: checkpoint complete: wrote 33895\nbuffers (17.7%); 0 transaction log file(s) added, 0 removed, 17\nrecycled; write=149.825 s, sync=0.000 s, total=149.826 s\n2010-02-25 22:53:13 PST LOG: checkpoint starting: time\n2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected EOF\non client connection\n2010-02-25 22:55:43 PST LOG: checkpoint complete: wrote 34155\nbuffers (17.8%); 0 transaction log file(s) added, 0 removed, 15\nrecycled; write=150.045 s, sync=0.000 s, total=150.046 s\n2010-02-25 22:58:13 PST LOG: checkpoint starting: time\n2010-02-25 23:00:41 PST LOG: checkpoint complete: wrote 33873\nbuffers (17.6%); 0 transaction log file(s) added, 0 removed, 15\nrecycled; write=148.223 s, sync=0.000 s, total=148.224 s\n2010-02-25 23:03:13 PST LOG: checkpoint starting: time\n",
"msg_date": "Thu, 25 Feb 2010 23:04:13 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "On Thu, 2010-02-25 at 23:01 -0800, Tory M Blue wrote:\n\n> Checkpoint_timeout is the default and that looks like 5 mins (300\n> seconds). And is obviously why I have such a discrepancy between time\n> reached and requested. \n\nIf you have a high load, you may want to start tuning with 15 minutes,\nand bump it to 30 mins if needed. Also you may want to decrease segments\nvalue based on your findings, since increasing only one of them won't\nhelp you a lot.\n\nAs I wrote before, pg_stat_bgwriter is your friend here.\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz",
"msg_date": "Fri, 26 Feb 2010 09:20:16 +0200",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "Tory M Blue wrote:\n> 2010-02-25 22:10:41 PST LOG: checkpoint complete: wrote 44503\n> buffers (23.2%); 0 transaction log file(s) added, 0 removed, 20\n> recycled; write=148.539 s, sync=0.000 s, total=148.540 s\n> \nThis one is typical for your list so I'll only comment on it. This is \nwriting out 350MB spread over 148 seconds, which means your background \ncheckpoint I/O is about a 2.4MB/s stream. That's a moderate amount that \ncould be tough for some systems, but note that your \"sync\" time is \nnothing. Generally, when someone sees a long pause that's caused by a \ncheckpoint, the sync number is really high. Your disks seem to be \nkeeping up with the checkpoint overhead moderately well. (Mind you, the \nzero sync time is because you have 'fsync = off ', which will eventually \nresult in your database being corrupted after a badly timed outage one \nday; you really don't want to do that)\n\nMy guess is your connections are doing some sort of DNS operation that \nperiodically stalls waiting for a 5-second timeout. There's nothing in \nyour checkpoint data suggesting it's a likely cause of a delay that \nlong--and it would be a lot more random if that were the case, too. Bad \ncheckpoint spikes will be seconds sometimes, no time at all others; a \nheavy grouping at 5 seconds doesn't match the pattern they have at all.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 26 Feb 2010 04:24:05 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
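A rough sketch of the arithmetic behind Greg's 2.4MB/s figure, assuming the default 8 kB block size; the buffer count and write time are taken from the checkpoint log line he quotes:

SELECT 44503 * 8192 / (1024.0 * 1024.0)           AS checkpoint_mb,
       44503 * 8192 / (1024.0 * 1024.0) / 148.539 AS mb_per_sec;

That is roughly 348 MB written over the 148.5-second write phase, or a bit under 2.4 MB/s of background checkpoint I/O.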
{
"msg_contents": "2010/2/25 Devrim GÜNDÜZ <[email protected]>:\n> On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:\n>> shared_buffers = 1500MB\n>\n> Some people tend to increase this to 2.2GB(32-bit) or 4-6 GB (64 bit),\n> if needed. Please note that more shared_buffers will lead to more\n> pressure on bgwriter, but it also has lots of benefits, too.\n>\n>> work_mem = 100MB\n>\n> This is too much. Since you have 300 connections, you will probably swap\n> because of this setting, since each connection may use this much\n> work_mem. The rule of the thumb is to set this to a lower general value\n> (say, 1-2 MB), and set it per-query when needed.\n\nI'm slightly confused. Most things I've read, including running\npg_tune for grins puts this around 100MB, 98MB for pgtune. 1-2MB just\nseems really low to me. And Ignore the 300 connections, thats an upper\nlimit, I usually run a max of 40-45 but usually around 20 connections\nper sec.\n\n\nAlso is there a way to log if there are any deadlocks happening (I'm\nnot seeing them in logs)\ndeadlock_timeout = 5s\n\nTory\n",
"msg_date": "Fri, 26 Feb 2010 10:52:07 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "2010/2/25 Devrim GÜNDÜZ <[email protected]>:\n> On Thu, 2010-02-25 at 23:01 -0800, Tory M Blue wrote:\n>\n>> Checkpoint_timeout is the default and that looks like 5 mins (300\n>> seconds). And is obviously why I have such a discrepancy between time\n>> reached and requested.\n>\n> If you have a high load, you may want to start tuning with 15 minutes,\n> and bump it to 30 mins if needed. Also you may want to decrease segments\n> value based on your findings, since increasing only one of them won't\n> help you a lot.\n>\n> As I wrote before, pg_stat_bgwriter is your friend here.\n\nActually these servers have almost no load. Really they run very cool\n\nLoad Average (for the day):\nCur: 0.16 Load\nAvg: 0.22 Load\n\nSo I don't think it's load No network issues (that I've found) and\nwhile the server will eventually eat all the memory it's currently\nsitting with 4 gb free.\n\nMem Total (for the day):\nCur: 25.55 GBytes\nAvg: 25.55 GBytes\nMax: 25.55 GBytes\n\tMem Available (for the day):\nCur: 4.72 GBytes\nAvg: 5.15 GBytes\nMax: 5.71 GBytes\n\tBytes Used (for the day):\nCur: 20.83 GBytes\nAvg: 20.40 GBytes\nMax: 21.20 GBytes\n\n\nThanks for your pointers, I'm continuing to look and will do some\ntests today. I also hear you about fsync and will do some testing here\nto see why this was set (been running for 4-6 years), and that setting\nwas probably set way way back in the day and it's survived each\nupgrade/hardware/storage update.\n\nTory\n",
"msg_date": "Fri, 26 Feb 2010 11:01:55 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
    {
        "msg_contents": "\n\n>>> Tory M Blue <[email protected]> 02/26/10 12:52 PM >>>\n>>\n>> This is too much. Since you have 300 connections, you will probably swap\n>> because of this setting, since each connection may use this much\n>> work_mem. The rule of the thumb is to set this to a lower general value\n>> (say, 1-2 MB), and set it per-query when needed.\n>\n> I'm slightly confused. Most things I've read, including running\n> pg_tune for grins puts this around 100MB, 98MB for pgtune. 1-2MB just\n> seems really low to me. And Ignore the 300 connections, thats an upper\n> limit, I usually run a max of 40-45 but usually around 20 connections\n>per sec.\n\nIt has been said in the list before that pg_tune is extremely aggressive when it comes to work_mem.\n\n100MB is just a whole lot of memory for something that is dedicated mostly to sorting. Some of my relatively heavy duty queries, which end up manipulating hundreds of thousands of rows in subqueries, do just fine with quite a bit less.\n\n1-2MB is good enough for many families of queries, but it's hard to say what the right default should be for you. The right number can be estimated by running explain analyze on your most common queries, with parameters that are representative of regular use, and seeing how much memory they actually claim to use. In my case, for example, most of my queries do just fine with 10 MB, while the reporting queries that accumulate quite a bit of data request up to 60MB.\n\nIf your average query needs 100 MB, it'd still mean that 40 connections take 4 gigs worth of work memory, which might be better spent caching the database.\n\nNow, if your system is so over-specced that wasting a few gigs of RAM doesn't impact your performance one bit, then you might not have to worry about this at all.\n\n\n",
"msg_date": "Fri, 26 Feb 2010 13:49:31 -0600",
"msg_from": "\"Jorge Montero\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
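A hedged sketch of the estimation approach Jorge outlines: run the common queries under EXPLAIN ANALYZE with a deliberately low work_mem and look at the sort nodes, which on 8.3+ report whether the sort stayed in memory or spilled to disk. The table and columns below are invented for illustration.

SET work_mem = '2MB';
EXPLAIN ANALYZE
SELECT clientid, count(*)
FROM impressions                 -- hypothetical table standing in for a common query
GROUP BY clientid
ORDER BY count(*) DESC;
-- In the output, look for lines such as:
--   Sort Method:  quicksort  Memory: 1810kB        (fits within work_mem)
--   Sort Method:  external merge  Disk: 34720kB    (spilled; this query wants more)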
{
"msg_contents": " \n\n> -----Mensaje original-----\n> De: Tory M Blue\n> \n> 2010/2/25 Devrim GÜNDÜZ <[email protected]>:\n> > On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:\n> >> shared_buffers = 1500MB\n> >\n> > Some people tend to increase this to 2.2GB(32-bit) or 4-6 \n> GB (64 bit), \n> > if needed. Please note that more shared_buffers will lead to more \n> > pressure on bgwriter, but it also has lots of benefits, too.\n> >\n> >> work_mem = 100MB\n> >\n> > This is too much. Since you have 300 connections, you will probably \n> > swap because of this setting, since each connection may use \n> this much \n> > work_mem. The rule of the thumb is to set this to a lower general \n> > value (say, 1-2 MB), and set it per-query when needed.\n> \n> I'm slightly confused. Most things I've read, including \n> running pg_tune for grins puts this around 100MB, 98MB for \n> pgtune. 1-2MB just seems really low to me. And Ignore the \n> 300 connections, thats an upper limit, I usually run a max of \n> 40-45 but usually around 20 connections per sec.\n> \n\nIf you have maximum 45 users running simultaneously a rather complex query\nthat requires... say 3 sorts, thats 45 x 100MB x 3 = 13.5 GB of RAM used up\nbecause of this particular work_mem setting. Doesn't mean it will happen\njust that your settings make this scenario possible.\n\nSo, to avoid this scenario, the suggestion is to keep work_mem low and\nadjust it per query if required. I find 1-2 MB too low for my particular\nrequirements so I have it in 8 MB. Anyway, due to your server having a lot\nof RAM your setting might make sense. But all that memory would probably be\nbetter used if it was available for caching.\n\n> \n> Also is there a way to log if there are any deadlocks \n> happening (I'm not seeing them in logs) deadlock_timeout = 5s\n> \n\nIn postgresql.conf:\nlog_lock_waits = on # log lock waits >= deadlock_timeout\n\n\nRegards,\nFernando.\n\n",
"msg_date": "Fri, 26 Feb 2010 17:01:22 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
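A minimal sketch of the per-query adjustment Fernando mentions: keep the global work_mem low and raise it only around the statement that needs the large sort (the value and the query itself are illustrative):

BEGIN;
SET LOCAL work_mem = '100MB';      -- applies only until COMMIT/ROLLBACK
SELECT clientid, count(*)          -- hypothetical heavy reporting query
FROM impressions
GROUP BY clientid;
COMMIT;                            -- work_mem reverts to the low global setting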
{
"msg_contents": "On Fri, Feb 26, 2010 at 11:49 AM, Jorge Montero\n<[email protected]> wrote:\n>\n>\n>>>> Tory M Blue <[email protected]> 02/26/10 12:52 PM >>>\n>>>\n>>> This is too much. Since you have 300 connections, you will probably swap\n>>> because of this setting, since each connection may use this much\n>>> work_mem. The rule of the thumb is to set this to a lower general value\n>>> (say, 1-2 MB), and set it per-query when needed.\n\n>\n> 1-2MB is good enough for many families of queries, but it's hard to say what the right default should be for you. The right number can be estimated by running explain analyze on your most common queries, with parameters that are representative to regular use, and see how much memory they actually claim to use. In my case, for example, most of my queries do just fine with 10 MB, while the reporting queries that accumulate quite a bit of deta request up to 60MB.\n>\n> If your average query needs 100 MB, it'd still mean that 40 connections take 4 gigs worth of work memory, which might be better spent caching the database.\n\nYa my boxes are pretty well stacked, but a question. How does one get\nthe memory usage of a query. You state to look at explain analyze but\nthis gives timing and costs, but is one of the numbers memory or do I\nhave to take values and do some math?\n\n--------------------------------------------------------------------------------------------------------------------------\n Function Scan on listings_search (cost=0.00..260.00 rows=1000\nwidth=108) (actual time=904.374..904.383 rows=10 loops=1)\n Total runtime: 904.411 ms\n\nThanks\nTory\n\nAlso don't think this 5 second thing is the DB.. Sure is not checkpoints.\n",
"msg_date": "Fri, 26 Feb 2010 15:24:42 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "On Fri, Feb 26, 2010 at 6:24 PM, Tory M Blue <[email protected]> wrote:\n> Ya my boxes are pretty well stacked, but a question. How does one get\n> the memory usage of a query. You state to look at explain analyze but\n> this gives timing and costs, but is one of the numbers memory or do I\n> have to take values and do some math?\n\nThere's really not good instrumentation on memory usage, except for\nsorts. 9.0 will add a bit of instrumentation for hashes.\n\nI might try 'top'.\n\n...Robert\n",
"msg_date": "Wed, 3 Mar 2010 14:33:59 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
}
] |
[
  {
    "msg_contents": "Hi,\n\nI'm using postgresql 8.4\n\nI need to install multiple postgresql dbs on one server but I have some\nquestions:\n\n-Is there any problems (performance wise or other) if I have 10 to 15 DBs on\nthe same server?\n\n-Each DB needs 10 tablespaces, so if I create 10 different tablespaces for\neach DB I will have 100 to 150 table space on the same server. So can this\nalso cause any problems?\n\nThanks\n",
"msg_date": "Fri, 26 Feb 2010 11:37:17 +0200",
"msg_from": "\"elias ghanem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multiple data base on same server"
},
{
"msg_contents": "On 26/02/10 09:37, elias ghanem wrote:\n> Hi,\n>\n> I'm using postgresql 8.4\n>\n> I need to install multiple postgresql dbs on one server but I have some\n> questions:\n>\n> -Is there any problems (performance wise or other) if I have 10 to 15 DBs on\n> the same server?\n\nClearly that's going to depend on what they're all doing and how big a \nserver you have. There's no limitation in PostgreSQL that stops you though.\n\n> -Each DB needs 10 tablespaces, so if I create 10 different tablespaces for\n> each DB I will have 100 to 150 table space on the same server. So can this\n> also cause any problems?\n\nDo you have 200-300+ disks to put these tablespaces on? If not, I'm not \nclear what you are trying to do. Why does each DB need 10 tablespaces?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 26 Feb 2010 11:44:09 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple data base on same server"
},
{
"msg_contents": "Hi,\nThanks for your answer,\nConcerning the second point, each db have different table that are logically\nrelated (for ex, tables for configuration, tables for business...) plus I'm\nplanning to put the indexes on their own tablespaces.\nConcerning the disks I will maybe stored on multiple disks (but surely not\n200-300). So I'm just wondering If this big number of tablespaces on a same\ndb server may cause problems,\nThanks again.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Richard Huxton\nSent: Friday, February 26, 2010 1:44 PM\nTo: elias ghanem\nCc: [email protected]\nSubject: Re: [PERFORM] Multiple data base on same server\n\nOn 26/02/10 09:37, elias ghanem wrote:\n> Hi,\n>\n> I'm using postgresql 8.4\n>\n> I need to install multiple postgresql dbs on one server but I have some\n> questions:\n>\n> -Is there any problems (performance wise or other) if I have 10 to 15 DBs\non\n> the same server?\n\nClearly that's going to depend on what they're all doing and how big a \nserver you have. There's no limitation in PostgreSQL that stops you though.\n\n> -Each DB needs 10 tablespaces, so if I create 10 different tablespaces for\n> each DB I will have 100 to 150 table space on the same server. So can this\n> also cause any problems?\n\nDo you have 200-300+ disks to put these tablespaces on? If not, I'm not \nclear what you are trying to do. Why does each DB need 10 tablespaces?\n\n-- \n Richard Huxton\n Archonet Ltd\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 Feb 2010 14:45:50 +0200",
"msg_from": "\"elias ghanem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multiple data base on same server"
},
{
"msg_contents": "On 26/02/10 12:45, elias ghanem wrote:\n> Hi,\n> Thanks for your answer,\n> Concerning the second point, each db have different table that are logically\n> related (for ex, tables for configuration, tables for business...) plus I'm\n> planning to put the indexes on their own tablespaces.\n> Concerning the disks I will maybe stored on multiple disks (but surely not\n> 200-300). So I'm just wondering If this big number of tablespaces on a same\n> db server may cause problems,\n\nIf the tablespaces aren't on different disks, I'm not sure what the \npoint is.\n\nDo you perhaps mean schemas? So you have e.g. a \"system\" schema with \ntables \"users\", \"activity_log\" etc? There's no problem with 20-30 \nschemas per database.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 26 Feb 2010 13:03:33 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple data base on same server"
},
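A minimal sketch of the schema layout Richard is suggesting, using the "configuration" and "business" grouping mentioned earlier in the thread; all names are invented:

CREATE SCHEMA config;
CREATE SCHEMA business;

CREATE TABLE config.users        (id serial PRIMARY KEY, name text);
CREATE TABLE config.activity_log (id serial PRIMARY KEY, userid integer, ts timestamptz);
CREATE TABLE business.policies   (id serial PRIMARY KEY, holder text);

SET search_path = business, config, public;  -- unqualified table names resolve across both areas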
{
"msg_contents": "Richard Huxton wrote:\n> On 26/02/10 12:45, elias ghanem wrote:\n>> Hi,\n>> Thanks for your answer,\n>> Concerning the second point, each db have different table that are \n>> logically\n>> related (for ex, tables for configuration, tables for business...) \n>> plus I'm\n>> planning to put the indexes on their own tablespaces.\n>> Concerning the disks I will maybe stored on multiple disks (but surely \n>> not\n>> 200-300). So I'm just wondering If this big number of tablespaces on a \n>> same\n>> db server may cause problems,\n> \n> If the tablespaces aren't on different disks, I'm not sure what the \n> point is.\n\nOur policy is that *every* database has its own tablespace. It doesn't cost you anything, and it gives you great flexibility if you add new disks. You can easily move an entire database, or a bunch of databases, by just moving the data pointing to the new location with symlinks. Once you put a bunch of databases into a single tablespace, moving subsets of them becomes very difficult.\n\nIt also makes it really easy to find who is using resources.\n\nWe operate about 450 databases spread across several servers. Postgres has no trouble at all managing hundreds of databases.\n\nCraig\n",
"msg_date": "Fri, 26 Feb 2010 06:34:20 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple data base on same server"
},
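A short sketch of the tablespace-per-database layout Craig describes; the path and names are invented, and the directory must already exist and be owned by the postgres user:

CREATE TABLESPACE customer_a_ts LOCATION '/pgdata/ts/customer_a';
CREATE DATABASE customer_a TABLESPACE = customer_a_ts;

-- Per-database disk usage then falls straight out of the catalogs:
SELECT spcname, pg_size_pretty(pg_tablespace_size(spcname)) FROM pg_tablespace;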
{
"msg_contents": "Ok thanks guys for your time\n\n-----Original Message-----\nFrom: Craig James [mailto:[email protected]] \nSent: Friday, February 26, 2010 4:34 PM\nTo: Richard Huxton\nCc: elias ghanem; [email protected]\nSubject: Re: [PERFORM] Multiple data base on same server\n\nRichard Huxton wrote:\n> On 26/02/10 12:45, elias ghanem wrote:\n>> Hi,\n>> Thanks for your answer,\n>> Concerning the second point, each db have different table that are \n>> logically\n>> related (for ex, tables for configuration, tables for business...) \n>> plus I'm\n>> planning to put the indexes on their own tablespaces.\n>> Concerning the disks I will maybe stored on multiple disks (but surely \n>> not\n>> 200-300). So I'm just wondering If this big number of tablespaces on a \n>> same\n>> db server may cause problems,\n> \n> If the tablespaces aren't on different disks, I'm not sure what the \n> point is.\n\nOur policy is that *every* database has its own tablespace. It doesn't cost\nyou anything, and it gives you great flexibility if you add new disks. You\ncan easily move an entire database, or a bunch of databases, by just moving\nthe data pointing to the new location with symlinks. Once you put a bunch\nof databases into a single tablespace, moving subsets of them becomes very\ndifficult.\n\nIt also makes it really easy to find who is using resources.\n\nWe operate about 450 databases spread across several servers. Postgres has\nno trouble at all managing hundreds of databases.\n\nCraig\n",
"msg_date": "Fri, 26 Feb 2010 16:46:17 +0200",
"msg_from": "\"elias ghanem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Multiple data base on same server"
},
  {
    "msg_contents": "elias ghanem wrote:\n>\n> Hi,\n>\n> I'm using postgresql 8.4\n>\n> I need to install multiple postgresql dbs on one server but I have \n> some questions:\n>\n> -Is there any problems (performance wise or other) if I have 10 to 15 \n> DBs on the same server?\n>\n> -Each DB needs 10 tablespaces, so if I create 10 different tablespaces \n> for each DB I will have 100 to 150 table space on the same server. So \n> can this also cause any problems?\n>\n> Thanks\n>\nIt depends on the specs of the server. If it is a good server, for \nexample 16 GB to 32 GB of RAM, 8 to 16 processors, and a good SAN \nwith RAID-1 for the pg_xlog directory and RAID-10 for $PG_DATA, \nusing ZFS if you are on Solaris or FreeBSD and xfs or ext3 on \nLinux, on a 64-bit operating system, I think this load can be \nsupported.\n\nThere are installations of PostgreSQL with more than 400 databases, but the \nenvironment is spread across several servers.\nAbout the tablespaces: is it really necessary to have 10 tablespaces on each \ndatabase? Normally you can move the table or tables with the most \nactivity to a fast disk array (I'm thinking of an SSD array), put another \ntablespace under the indexes if you have many, and for example with \npl/proxy you could handle the partitioning of your data.\nThere is no need to have 100 or 150 tablespaces on the same \nserver. You can spread this across a SAN, and you can have two or more main \nPostgreSQL servers and several slaves with the data replicated in \ncase of data corruption on the main servers.\nRemember to review the performance configuration of the PostgreSQL \nservers: work_mem, shared_buffers, etc.\nRegards, and I hope these comments help you.\n\n",
"msg_date": "Fri, 26 Feb 2010 09:53:26 -0500",
"msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Multiple data base on same server"
}
] |
[
{
"msg_contents": "Tory M Blue wrote:\n \n> 2010-02-25 22:53:13 PST LOG: checkpoint starting: time\n> 2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected\n> EOF on client connection\n> 2010-02-25 22:55:43 PST LOG: checkpoint complete: wrote 34155\n> buffers (17.8%); 0 transaction log file(s) added, 0 removed, 15\n> recycled; write=150.045 s, sync=0.000 s, total=150.046 s\n \nDid that unexpected EOF correspond to a connection attempt that gave\nup based on time?\n \n-Kevin\n\n",
"msg_date": "Fri, 26 Feb 2010 07:09:03 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "On Fri, Feb 26, 2010 at 5:09 AM, Kevin Grittner\n<[email protected]> wrote:\n> Tory M Blue wrote:\n>\n>> 2010-02-25 22:53:13 PST LOG: checkpoint starting: time\n>> 2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected\n>> EOF on client connection\n>> 2010-02-25 22:55:43 PST LOG: checkpoint complete: wrote 34155\n>> buffers (17.8%); 0 transaction log file(s) added, 0 removed, 15\n>> recycled; write=150.045 s, sync=0.000 s, total=150.046 s\n>\n> Did that unexpected EOF correspond to a connection attempt that gave\n> up based on time?\n>\n> -Kevin\n>\nKevin\n\nGood question, I'm unclear what that was. I mean it's a LOG, so not a\nclient connection, that really kind of confused me. I don't normally\nsee EOF of client and an EOF on client from local, that's really\nreally weird\n\nTory\n",
"msg_date": "Fri, 26 Feb 2010 10:23:10 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
},
{
"msg_contents": "On Feb 26, 2010, at 11:23 AM, Tory M Blue wrote:\n\n> On Fri, Feb 26, 2010 at 5:09 AM, Kevin Grittner\n> <[email protected]> wrote:\n>> Tory M Blue wrote:\n>> \n>>> 2010-02-25 22:53:13 PST LOG: checkpoint starting: time\n>>> 2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected\n>>> EOF on client connection\n>>> 2010-02-25 22:55:43 PST LOG: checkpoint complete: wrote 34155\n>>> buffers (17.8%); 0 transaction log file(s) added, 0 removed, 15\n>>> recycled; write=150.045 s, sync=0.000 s, total=150.046 s\n>> \n>> Did that unexpected EOF correspond to a connection attempt that gave\n>> up based on time?\n>> \n>> -Kevin\n>> \n> Kevin\n> \n> Good question, I'm unclear what that was. I mean it's a LOG, so not a\n> client connection, that really kind of confused me. I don't normally\n> see EOF of client and an EOF on client from local, that's really\n> really weird\n\nWe see that from our monitoring software testing port 5432.\n",
"msg_date": "Fri, 26 Feb 2010 11:25:31 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter, checkpoints, curious (seeing delays)"
}
] |
[
{
"msg_contents": "Hi, I'm having an issue where a postgres process is taking too much\nmemory when performing many consecutive inserts and updates from a PHP\nscript (running on the command line). I would like to know what sort\nof logging I can turn on to help me determine what is causing memory\nto be consumed and not released.\n\nMost PHP scripts are not long-running and properly releasing the\nresources using the provided functions in the pgsql PHP extension is\nnot necessary. However since I do have a long-running script, I have\ntaken steps to ensure everything is being properly released when it is\nno longer needed (I am calling the functions provided, but I don't\nknow if the pgsql extension is doing the right thing). In spite of\nthis, the longer the script runs and processes records, the more\nmemory increases. It increases to the point that system memory is\nexhausted and it starts swapping. I killed the process at this point.\n\nI monitored the memory with top. here are the results.. the first is\n10 seconds after my script started running. The second is about 26\nseconds.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ DATA COMMAND\n17461 postgres 16 0 572m 405m 14m S 20.0 10.7 0:10.65 422m postmaster\n17460 root 15 0 136m 14m 4632 S 10.6 0.4 0:06.16 10m php\n17462 postgres 15 0 193m 46m 3936 D 3.3 1.2 0:01.77 43m postmaster\n\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ DATA COMMAND\n17461 postgres 16 0 1196m 980m 17m S 19.0 26.0 0:25.72 1.0g postmaster\n17460 root 15 0 136m 14m 4632 R 10.3 0.4 0:14.31 10m php\n17462 postgres 16 0 255m 107m 3984 R 3.0 2.9 0:04.19 105m postmaster\n\n\nIf I am indeed doing everything I can to release the resources (and\nI'm 95% sure I am) then it looks like the pgsql extension is at fault\nhere.\nRegardless of who/what is at fault, I need to fix it. And to do that I\nneed to find out what isn't getting released properly. How would I go\nabout that?\n\nThanks,\nChris\n",
"msg_date": "Sat, 27 Feb 2010 15:29:13 -0700",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to troubleshoot high mem usage by postgres?"
},
  {
    "msg_contents": "On Feb 27, 2010, at 2:29 PM, Chris wrote:\n\n> Hi, I'm having an issue where a postgres process is taking too much\n> memory when performing many consecutive inserts and updates from a PHP\n\n[snip]\n\nIn your postgresql.conf file, what are the settings for work_mem and shared_buffers?",
"msg_date": "Sat, 27 Feb 2010 14:38:53 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to troubleshoot high mem usage by postgres?"
},
{
"msg_contents": "On 28/02/2010 6:29 AM, Chris wrote:\n\n> If I am indeed doing everything I can to release the resources (and\n> I'm 95% sure I am) then it looks like the pgsql extension is at fault\n> here.\n\nBefore assuming some particular thing is at fault, you need to collect \nsome information to determine what is actually happening.\n\nWhat are your queries?\n\nWhat are the resource-releasing functions you're using, and how?\n\nCan you boil this down to a simple PHP test-case that connects to a \ndummy database and repeats something that causes the backend to grow in \nmemory usage? Trying to do this - by progressively cutting things out of \nyour test until it stops growing - will help you track down what, \nexactly, is causing the growth.\n\nIt'd be helpful if you could also provide some general system \ninformation, as shown here:\n\n http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n--\nCraig Ringer\n",
"msg_date": "Sun, 28 Feb 2010 06:38:53 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to troubleshoot high mem usage by postgres?"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> Hi, I'm having an issue where a postgres process is taking too much\n> memory when performing many consecutive inserts and updates from a PHP\n> script (running on the command line). I would like to know what sort\n> of logging I can turn on to help me determine what is causing memory\n> to be consumed and not released.\n\nAre you doing all these inserts/updates within a single transaction?\n\nIf so, I think the odds are very good that what's eating the memory\nis the list of pending trigger actions, resulting from either\nuser-created triggers or foreign-key check triggers. The best way\nof limiting the problem is to commit more often.\n\nIf you want to try to confirm that, what I would do is run the\npostmaster under a more restrictive ulimit setting, so that it\nruns out of memory sometime before the system starts to swap.\nWhen it does run out of memory, you'll get a memory context map\nprinted to postmaster stderr, and that will show which context\nis eating all the memory. If it's \"AfterTriggerEvents\" then my\nguess above is correct --- otherwise post the map for inspection.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Feb 2010 17:39:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to troubleshoot high mem usage by postgres? "
},
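A small sketch of the "commit more often" suggestion, assuming the CSV rows had all been loaded in a single transaction; the table and values are invented:

BEGIN;
INSERT INTO import_target (code, amount) VALUES ('A-1001', 42.50);  -- a few thousand CSV rows per batch
INSERT INTO import_target (code, amount) VALUES ('A-1002', 17.00);
COMMIT;  -- pending trigger / foreign-key check events for the batch are fired and their memory released

As the thread goes on to show, the growth here turned out to have a different cause, but this is the shape of the fix when AfterTriggerEvents is the context that keeps growing.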
{
"msg_contents": "On Sat, Feb 27, 2010 at 3:38 PM, Ben Chobot <[email protected]> wrote:\n> In your postgresql.conf file, what are the settings for work_mem and\n> shared_buffers?\n\nI have not done any tuning on this db yet (it is a dev box). It is\nusing defaults.\nshared_buffers = 32MB\n#work_mem = 1MB\n\n\nI do appreciate the several quick responses and I will work on\nresponding to the them.\n\n@Craig Ringer:\nselect version() reports:\nPostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit\nThe system has 4GB of RAM.\nThe postgres log currently does not show any useful information. Only\nthing in there for today is an \"Unexpected EOF on client connection\"\nbecause I killed the process after it started swapping.\n\nThe test input for my PHP script is a csv file with about 450,000\nrecords in it. The php script processes the each csv record in a\ntransaction, and on average it executes 2 insert or update statements\nper record. I don't think the specific statements executed are\nrelevant (they are just basic INSERT and UPDATE statements).\n\nI will try to come up with a short script that reproduces the problem.\n\n@Tom Lane:\nAs I mentioned above I am not doing everything in a single\ntransaction. However I do want to try your suggestion regarding\ngetting a \"memory context map\". But I'm afraid I don't know how to do\nwhat you are describing. How can I set the ulimit of postmaster? And\ndoes the postmaster stderr output go to the postgres log file? If not,\nwhere can I find it?\n\nThanks again,\nChris\n",
"msg_date": "Sat, 27 Feb 2010 16:25:26 -0700",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to troubleshoot high mem usage by postgres?"
},
{
"msg_contents": "On Sat, Feb 27, 2010 at 3:38 PM, Craig Ringer\n<[email protected]> wrote:\n> Can you boil this down to a simple PHP test-case that connects to a dummy\n> database and repeats something that causes the backend to grow in memory\n> usage? Trying to do this - by progressively cutting things out of your test\n> until it stops growing - will help you track down what, exactly, is causing\n> the growth.\n\nThank you for your suggestion. I have done this, and in doing so I\nhave also discovered why this problem is occurring.\n\nMy application uses a class that abstracts away the db interaction, so\nI do not normally use the pg_* functions directly. Any time any\nstatement was executed, it created a new \"named\" prepared statement. I\nwrongly assumed that calling pg_free_result() on the statement\nresource would free this prepared statement inside of postgres.\n\nI will simply modify the class to use an empty statement name if there\nis no need for it to be named (which I actually need very infrequently\nanyway).\n\nI have attached the script I created to test with, for those who are\ninterested. The first line of the script has the connection string. I\nused a db called testdb. run from the command line with:\nphp -f test3.php\n\nNote my comment in the php file\n<<<<<< UNCOMMENT THIS LINE AND MEMORY ISSUE IS FIXED\n\nThanks for the help everyone.\nChris",
"msg_date": "Sat, 27 Feb 2010 18:17:57 -0700",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to troubleshoot high mem usage by postgres?"
},
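A small SQL-level illustration of what was accumulating: every named PREPARE stays in the backend until it is deallocated or the connection closes, and pg_free_result() on the PHP side only releases the client-side result. The statements below are invented stand-ins for what the wrapper class was generating:

PREPARE stmt_00001 (int) AS SELECT $1 + 1;            -- a new name per executed statement
PREPARE stmt_00002 (int) AS SELECT $1 + 1;
SELECT name, statement FROM pg_prepared_statements;   -- both still held in backend memory
DEALLOCATE stmt_00001;                                -- only this (or disconnecting) releases it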
{
"msg_contents": "Chris <[email protected]> writes:\n> @Tom Lane:\n> As I mentioned above I am not doing everything in a single\n> transaction. However I do want to try your suggestion regarding\n> getting a \"memory context map\". But I'm afraid I don't know how to do\n> what you are describing. How can I set the ulimit of postmaster?\n\nDepends on the script you are using to start the postmaster. One way is\nto call ulimit in the startup script right before it invokes the\npostmaster. However, if you have something like\n\n\tsu - postgres -c \"postmaster ...\"\n\nthen I'd try putting it in the postgres user's ~/.profile or\n~/.bash_profile instead; the su is likely to reset such things.\n\n> And\n> does the postmaster stderr output go to the postgres log file?\n\nAlso depends. Look at the startup script and see where it redirects\npostmaster's stderr to. You might have to modify the script --- some\nare known to send stderr to /dev/null :-(\n\nSorry to be so vague, but different packagers have different ideas\nabout how to do this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Feb 2010 11:42:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to troubleshoot high mem usage by postgres? "
}
] |
[
{
"msg_contents": "All,\n\nI'm seeing in a production database two problems with query rowcount\nestimation:\n\n(1) Estimates for the number of rows in an outer join do not take into\naccount any constraint exclusion (CE) in operation.\n\n(2) Row estimates do not take into account if the unique indexes on the\nchild partitions are different from the master partition (the append\nnode). This is often true, because the key to the master is ( key, ce\ncolumn) and for the children is just ( key ).\n\nThe result is that if you do a series of outer joins using the CE\ncriterion against partitioned tables, the row estimates you get will be\nseveral orders of magnitude too high ... and the subsequent query plan\nfar too pessimistic.\n\nAnyone else seeing this? Do any of the 9.0 patches address the above\nissues?\n\n--Josh Berkus\n",
"msg_date": "Sun, 28 Feb 2010 12:19:51 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Estimation issue with partitioned tables"
},
{
"msg_contents": "On Sun, Feb 28, 2010 at 3:19 PM, Josh Berkus <[email protected]> wrote:\n> All,\n>\n> I'm seeing in a production database two problems with query rowcount\n> estimation:\n>\n> (1) Estimates for the number of rows in an outer join do not take into\n> account any constraint exclusion (CE) in operation.\n>\n> (2) Row estimates do not take into account if the unique indexes on the\n> child partitions are different from the master partition (the append\n> node). This is often true, because the key to the master is ( key, ce\n> column) and for the children is just ( key ).\n>\n> The result is that if you do a series of outer joins using the CE\n> criterion against partitioned tables, the row estimates you get will be\n> several orders of magnitude too high ... and the subsequent query plan\n> far too pessimistic.\n>\n> Anyone else seeing this? Do any of the 9.0 patches address the above\n> issues?\n\nI feel like I've seen these way-too-high row estimates in some other\npostings to -performance, but I'm not sure if it was the same issue.\nYou don't by chance have a RTC? I don't think it's likely fixed in 9.0\nbut it would be interesting to investigate.\n\n...Robert\n",
"msg_date": "Wed, 3 Mar 2010 12:12:40 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimation issue with partitioned tables"
},
{
"msg_contents": "\n> I feel like I've seen these way-too-high row estimates in some other\n> postings to -performance, but I'm not sure if it was the same issue.\n> You don't by chance have a RTC? I don't think it's likely fixed in 9.0\n> but it would be interesting to investigate.\n\nYeah, I can generate one pretty easily; the behavior is readily\nobservable and repeatable. Will get on it RSN, but at you said, we're\nnot doing anything about it for 9.0.\n\nI've a feeling that this will become one of the list of issues to be\nfixed with 'real partitioning'.\n\n--Josh Berkus\n",
"msg_date": "Wed, 03 Mar 2010 13:45:53 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Estimation issue with partitioned tables"
},
{
"msg_contents": "On Wed, Mar 3, 2010 at 4:45 PM, Josh Berkus <[email protected]> wrote:\n>> I feel like I've seen these way-too-high row estimates in some other\n>> postings to -performance, but I'm not sure if it was the same issue.\n>> You don't by chance have a RTC? I don't think it's likely fixed in 9.0\n>> but it would be interesting to investigate.\n>\n> Yeah, I can generate one pretty easily; the behavior is readily\n> observable and repeatable. Will get on it RSN, but at you said, we're\n> not doing anything about it for 9.0.\n>\n> I've a feeling that this will become one of the list of issues to be\n> fixed with 'real partitioning'.\n\nI can believe it. I can't promise anything, but like I say I'm\nwilling to take a poke at it if you can provide me a test case.\n\n...Robert\n",
"msg_date": "Wed, 3 Mar 2010 21:07:11 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimation issue with partitioned tables"
},
{
"msg_contents": "Robert,\n\n>> Yeah, I can generate one pretty easily; the behavior is readily\n>> observable and repeatable. Will get on it RSN, but at you said, we're\n>> not doing anything about it for 9.0.\n\nWell, I can generate a test case, but on examination it turns out to be\nnothing to do with partitioning; it's just good old \"n-distinct\nunderestimation\", and not really related to partitioning or joins.\n\n--Josh Berkus\n",
"msg_date": "Sun, 07 Mar 2010 09:57:26 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Estimation issue with partitioned tables"
},
{
"msg_contents": "On Sun, Mar 7, 2010 at 12:57 PM, Josh Berkus <[email protected]> wrote:\n>>> Yeah, I can generate one pretty easily; the behavior is readily\n>>> observable and repeatable. Will get on it RSN, but at you said, we're\n>>> not doing anything about it for 9.0.\n>\n> Well, I can generate a test case, but on examination it turns out to be\n> nothing to do with partitioning; it's just good old \"n-distinct\n> underestimation\", and not really related to partitioning or joins.\n\nAh. Well in 9.0 there will be a work around, by me as it happens. :-)\n\n...Robert\n",
"msg_date": "Mon, 8 Mar 2010 16:56:03 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimation issue with partitioned tables"
}
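Presumably this refers to the per-column statistics override added for 9.0; a hedged sketch of how that is documented to work, with an invented table and column:

SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename = 'calls' AND attname = 'call_id';

ALTER TABLE calls ALTER COLUMN call_id SET (n_distinct = -0.9);  -- 9.0 syntax; negative values are a fraction of rows
ANALYZE calls;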
] |
[
  {
    "msg_contents": "Hi, hackers:\n\nI am testing the index used by full text search recently.\n\nI have installed 8.3.9 and 8.4.2 separately.\n\nIn 8.3.9, the query plan is like:\n\npostgres=# explain SELECT s.name as source , t.name as target FROM element as s, element as t WHERE to_tsvector('testcfg',s.name) @@ to_tsquery('testcfg',replace(t.name,':','|'));\n                                                                  QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.01..259.92 rows=491 width=18)\n   ->  Seq Scan on element t  (cost=0.00..13.01 rows=701 width=9)\n   ->  Index Scan using element_ftsidx_test on element s  (cost=0.01..0.33 rows=1 width=9)\n         Index Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text)))\n(4 rows)\n\nI have index: \"element_ftsidx_test\" gin (to_tsvector('testcfg'::regconfig, name::text))\n\nThe same index and query in 8.4.2:\n\npostgres=# explain SELECT s.name as source , t.name as target FROM element as s, element as t WHERE to_tsvector('testcfg',s.name) @@ to_tsquery('testcfg',replace(t.name,':','|')) ;\n                                                                      QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.32..3123.51 rows=2457 width=18)\n   ->  Seq Scan on element t  (cost=0.00..13.01 rows=701 width=9)\n   ->  Bitmap Heap Scan on element s  (cost=0.32..4.36 rows=4 width=9)\n         Recheck Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text)))\n         ->  Bitmap Index Scan on element_ftsidx_test  (cost=0.00..0.32 rows=4 width=0)\n               Index Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text)))\n(6 rows)\n\nWhy are the two query plans different? Thanks!\n\nXu Fei\n",
"msg_date": "Sun, 28 Feb 2010 12:42:09 -0800 (PST)",
"msg_from": "xu fei <[email protected]>",
"msg_from_op": true,
"msg_subject": "full text search index scan query plan changed in 8.4.2?"
},
{
"msg_contents": "Xufei,\n\nList changed to psql-performance, which is where this discussion belongs.\n\n> I am testing the index used by full text search recently.\n> \n> I have install 8.3.9 and 8.4.2 separately. \n> \n> In 8.3.9, the query plan is like:\n> \n> postgres=# explain SELECT s.name as source , t.name as target FROM element as s, element as t WHERE to_tsvector('testcfg',s.name) @@ to_tsquery('testcfg',replace(t.name,':','|')); QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------ \n> Nested Loop (cost=0.01..259.92 rows=491 width=18) \n> -> Seq Scan on element t (cost=0.00..13.01 rows=701 width=9) \n> -> Index Scan using element_ftsidx_test on element s (cost=0.01..0.33 rows=1 width=9) \n> Index Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text)))\n> (4 rows)\n> \n> I have index: \"element_ftsidx_test\" gin (to_tsvector('testcfg'::regconfig, name::text))\n> \n> The same index and query in 8.4.2: \n> \n> postgres=# explain SELECT s.name as source , t.name as target FROM element as s, element as t WHERE to_tsvector('testcfg',s.name) @@ to_tsquery('testcfg',replace(t.name,':','|')) ; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------ \n> Nested Loop (cost=0.32..3123.51 rows=2457 width=18) \n> -> Seq Scan on element t (cost=0.00..13.01 rows=701 width=9) \n> -> Bitmap Heap Scan on element s (cost=0.32..4.36 rows=4 width=9) Recheck Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text))) \n> -> Bitmap Index Scan on element_ftsidx_test (cost=0.00..0.32 rows=4 width=0)\n> Index Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text)))\n> (6 rows)\n> \n> Why the query plans are different and why? Thanks!\n\nBecause the row estimates changed, since 8.4 improved row estimation for\nTSearch. The 2nd query is probably actually faster, no? If not, you\nmay need to increase your stats collection. Or at least show us a\nVACUUM ANALYZE.\n\n--Josh Berkus\n\n",
"msg_date": "Sun, 28 Feb 2010 14:41:46 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] full text search index scan query plan changed in\n 8.4.2?"
},
{
"msg_contents": "Josh Berkus wrote:\n> Xufei,\n> \n> List changed to psql-performance, which is where this discussion belongs.\n> \n>> I am testing the index used by full text search recently.\n>>\n>> I have install 8.3.9 and 8.4.2 separately. \n>>\n>> In 8.3.9, the query plan is like:\n>>\n>> postgres=# explain SELECT s.name as source , t.name as target FROM element as s, element as t WHERE to_tsvector('testcfg',s.name) @@ to_tsquery('testcfg',replace(t.name,':','|')); QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------ \n>> Nested Loop (cost=0.01..259.92 rows=491 width=18) \n>> -> Seq Scan on element t (cost=0.00..13.01 rows=701 width=9) \n>> -> Index Scan using element_ftsidx_test on element s (cost=0.01..0.33 rows=1 width=9) \n>> Index Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text)))\n>> (4 rows)\n>>\n>> I have index: \"element_ftsidx_test\" gin (to_tsvector('testcfg'::regconfig, name::text))\n>>\n>> The same index and query in 8.4.2: \n>>\n>> postgres=# explain SELECT s.name as source , t.name as target FROM element as s, element as t WHERE to_tsvector('testcfg',s.name) @@ to_tsquery('testcfg',replace(t.name,':','|')) ; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------ \n>> Nested Loop (cost=0.32..3123.51 rows=2457 width=18) \n>> -> Seq Scan on element t (cost=0.00..13.01 rows=701 width=9) \n>> -> Bitmap Heap Scan on element s (cost=0.32..4.36 rows=4 width=9) Recheck Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text))) \n>> -> Bitmap Index Scan on element_ftsidx_test (cost=0.00..0.32 rows=4 width=0)\n>> Index Cond: (to_tsvector('testcfg'::regconfig, (s.name)::text) @@ to_tsquery('testcfg'::regconfig, replace((t.name)::text, ':'::text, '|'::text)))\n>> (6 rows)\n>>\n>> Why the query plans are different and why? Thanks!\n> \n> Because the row estimates changed, since 8.4 improved row estimation for\n> TSearch. The 2nd query is probably actually faster, no? If not, you\n> may need to increase your stats collection. Or at least show us a\n> VACUUM ANALYZE.\n\nI'm sure you mean explain analyze :)\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n",
"msg_date": "Mon, 01 Mar 2010 13:38:08 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] full text search index scan query plan changed in\n 8.4.2?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm quite puzzled by the following observation.\nThe behaviour is observed on a production system (Linux, PG 8.3.5)\nand also on a test system (NetBSD 5.0.2, PG 8.4.2).\n\nNormally the following Query behaves well:\n\nselect c.*, h.*\nfrom Context c, Context_Hierarchy h\nwhere c.Idx = h.ContextIdx and c.ContextId='testID' and h.HierarchyName='InsuranceHierarchy' and h.ParentIdx=49292395\n;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..43.57 rows=4 width=175) (actual time=0.291..0.293 rows=1 loops=1)\n -> Index Scan using uk_context_hierarchy_01 on context_hierarchy h (cost=0.00..14.76 rows=4 width=108) (actual time=0.169..0.169\nrows=1 loops=1)\n Index Cond: (((hierarchyname)::text = 'InsuranceHierarchy'::text) AND (parentidx = 49292395))\n -> Index Scan using pk_context on context c (cost=0.00..7.20 rows=1 width=67) (actual time=0.110..0.111 rows=1 loops=1)\n Index Cond: (c.idx = h.contextidx)\n Filter: ((c.contextid)::text = 'testID'::text)\n Total runtime: 0.388 ms\n(7 rows)\n\n(From a freshly started PG)\n\nHowever during a long term read-only transaction (actually just bout 15min)\n(the transaction is issuing about 10k-20k of such queries among others)\nPG is logging a number of the following:\n\nMar 1 09:58:09 gaia postgres[20126]: [25-1] LOG: 00000: duration: 343.663 ms execute S_5: select c.*, h.Idx as h_Idx, h.WbuIdx as\nh_WbuIdx, h.OrigWbuIdx as h_OrigWbuIdx, h.Ts as h_Ts, h.\nUserId as h_UserId, h.ParentIdx as h_ParentIdx, h.ContextIdx as h_ContextIdx, h.HierarchyName as h_HierarchyName, h.HierarchyPath as\nh_HierarchyPath from Context c, Context_Hierarchy h wher\ne c.Idx = h.ContextIdx and c.ContextId=$1 and h.HierarchyName=$2 and h.ParentIdx=$3\nMar 1 09:58:09 gaia postgres[20126]: [25-2] DETAIL: parameters: $1 = 'testID', $2 = 'InsuranceHierarchy', $3 = '49292395'\nMar 1 09:58:09 gaia postgres[20126]: [25-3] LOCATION: exec_execute_message, postgres.c:1988\n\n(About 200 in the current case.)\n\nThis is from the test system. The given transaction was the only activity on the system at that time.\n\nWhile the transaction was still active,\nI issued the query in parallel yielding the following plan (based on the logged message above):\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\nNested Loop (cost=0.00..43.57 rows=4 width=175) (actual time=21.809..21.811 rows=1 loops=1)\n -> Index Scan using uk_context_hierarchy_01 on context_hierarchy h (cost=0.00..14.76 rows=4 width=108) (actual\ntime=21.629..21.629 rows=1 loops=1)\n Index Cond: (((hierarchyname)::text = 'InsuranceHierarchy'::text) AND (parentidx = 49292395))\n -> Index Scan using pk_context on context c (cost=0.00..7.20 rows=1 width=67) (actual time=0.169..0.169 rows=1 loops=1)\n Index Cond: (c.idx = h.contextidx)\n Filter: ((c.contextid)::text = 'testID'::text)\n Total runtime: 22.810 ms\n(7 rows)\n\nThis still looks reasonable and is far from the >300ms as logged.\nAll this happens after the read-only transaction was active for a while.\n\n\nAny idea where to look for an explanation?\nOr what parameters could shed some light on the issue?\n\n\nRegards,\nRainer\n",
"msg_date": "Mon, 01 Mar 2010 13:50:29 +0100",
"msg_from": "Rainer Pruy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query slowing down significantly??"
},
{
"msg_contents": "Rainer Pruy <[email protected]> writes:\n> Normally the following Query behaves well:\n\n> select c.*, h.*\n> from Context c, Context_Hierarchy h\n> where c.Idx = h.ContextIdx and c.ContextId='testID' and h.HierarchyName='InsuranceHierarchy' and h.ParentIdx=49292395\n> ;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..43.57 rows=4 width=175) (actual time=0.291..0.293 rows=1 loops=1)\n> -> Index Scan using uk_context_hierarchy_01 on context_hierarchy h (cost=0.00..14.76 rows=4 width=108) (actual time=0.169..0.169\n> rows=1 loops=1)\n> Index Cond: (((hierarchyname)::text = 'InsuranceHierarchy'::text) AND (parentidx = 49292395))\n> -> Index Scan using pk_context on context c (cost=0.00..7.20 rows=1 width=67) (actual time=0.110..0.111 rows=1 loops=1)\n> Index Cond: (c.idx = h.contextidx)\n> Filter: ((c.contextid)::text = 'testID'::text)\n> Total runtime: 0.388 ms\n> (7 rows)\n\n> (From a freshly started PG)\n\n> However during a long term read-only transaction (actually just bout 15min)\n> (the transaction is issuing about 10k-20k of such queries among others)\n> PG is logging a number of the following:\n\n> Mar 1 09:58:09 gaia postgres[20126]: [25-1] LOG: 00000: duration: 343.663 ms execute S_5: select c.*, h.Idx as h_Idx, h.WbuIdx as\n> h_WbuIdx, h.OrigWbuIdx as h_OrigWbuIdx, h.Ts as h_Ts, h.\n> UserId as h_UserId, h.ParentIdx as h_ParentIdx, h.ContextIdx as h_ContextIdx, h.HierarchyName as h_HierarchyName, h.HierarchyPath as\n> h_HierarchyPath from Context c, Context_Hierarchy h wher\n> e c.Idx = h.ContextIdx and c.ContextId=$1 and h.HierarchyName=$2 and h.ParentIdx=$3\n> Mar 1 09:58:09 gaia postgres[20126]: [25-2] DETAIL: parameters: $1 = 'testID', $2 = 'InsuranceHierarchy', $3 = '49292395'\n> Mar 1 09:58:09 gaia postgres[20126]: [25-3] LOCATION: exec_execute_message, postgres.c:1988\n\nThat's not the same query at all, and it may not be getting the same\nplan. What you need to do to check the plan is to try PREPARE-ing\nand EXPLAIN EXECUTE-ing the query with the same parameter symbols\nas are actually used in the application-issued query.\n\nYou might be entertained by the recent thread on -hackers about\n\"Avoiding bad prepared-statement plans\" ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Mar 2010 11:15:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slowing down significantly?? "
},
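A sketch of the PREPARE / EXPLAIN EXECUTE check Tom describes, filled in with the query and parameter values from the logged statement above; the statement name and parameter types are guesses, not taken from the application:

```sql
PREPARE ctx_lookup (varchar, varchar, bigint) AS
  SELECT c.*, h.*
  FROM Context c, Context_Hierarchy h
  WHERE c.Idx = h.ContextIdx
    AND c.ContextId = $1
    AND h.HierarchyName = $2
    AND h.ParentIdx = $3;

-- Shows the generic plan built without knowledge of the parameter values,
-- i.e. the kind of plan the application's named prepared statement runs with.
EXPLAIN EXECUTE ctx_lookup('testID', 'InsuranceHierarchy', 49292395);

DEALLOCATE ctx_lookup;
```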
{
"msg_contents": "Thanks for the hint.\nI should have been considering that in the first place.\n(But the obvious is easily left unrecognised..)\n\nThe prepared statement gives:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..25.18 rows=2 width=175) (actual time=36.116..49.998 rows=1 loops=1)\n -> Index Scan using x_context_01 on context c (cost=0.00..10.76 rows=2 width=67) (actual time=0.029..6.947 rows=12706 loops=1)\n Index Cond: ((contextid)::text = $1)\n -> Index Scan using x_fk_context_hierarchy_02 on context_hierarchy h (cost=0.00..7.20 rows=1 width=108) (actual time=0.003..0.003\nrows=0 loops=12706)\n Index Cond: (h.contextidx = c.idx)\n Filter: (((h.hierarchyname)::text = $2) AND (h.parentidx = $3))\n Total runtime: 50.064 ms\n(7 rows)\n\n\nAnd that is quite a bad plan given the current distribution of values.\n\nRegards,\nRainer\n\nAm 01.03.2010 17:15, schrieb Tom Lane:\n> Rainer Pruy <[email protected]> writes:\n>> Normally the following Query behaves well:\n> \n>> select c.*, h.*\n>> from Context c, Context_Hierarchy h\n>> where c.Idx = h.ContextIdx and c.ContextId='testID' and h.HierarchyName='InsuranceHierarchy' and h.ParentIdx=49292395\n>> ;\n>> QUERY PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Nested Loop (cost=0.00..43.57 rows=4 width=175) (actual time=0.291..0.293 rows=1 loops=1)\n>> -> Index Scan using uk_context_hierarchy_01 on context_hierarchy h (cost=0.00..14.76 rows=4 width=108) (actual time=0.169..0.169\n>> rows=1 loops=1)\n>> Index Cond: (((hierarchyname)::text = 'InsuranceHierarchy'::text) AND (parentidx = 49292395))\n>> -> Index Scan using pk_context on context c (cost=0.00..7.20 rows=1 width=67) (actual time=0.110..0.111 rows=1 loops=1)\n>> Index Cond: (c.idx = h.contextidx)\n>> Filter: ((c.contextid)::text = 'testID'::text)\n>> Total runtime: 0.388 ms\n>> (7 rows)\n> \n>> (From a freshly started PG)\n> \n>> However during a long term read-only transaction (actually just bout 15min)\n>> (the transaction is issuing about 10k-20k of such queries among others)\n>> PG is logging a number of the following:\n> \n>> Mar 1 09:58:09 gaia postgres[20126]: [25-1] LOG: 00000: duration: 343.663 ms execute S_5: select c.*, h.Idx as h_Idx, h.WbuIdx as\n>> h_WbuIdx, h.OrigWbuIdx as h_OrigWbuIdx, h.Ts as h_Ts, h.\n>> UserId as h_UserId, h.ParentIdx as h_ParentIdx, h.ContextIdx as h_ContextIdx, h.HierarchyName as h_HierarchyName, h.HierarchyPath as\n>> h_HierarchyPath from Context c, Context_Hierarchy h wher\n>> e c.Idx = h.ContextIdx and c.ContextId=$1 and h.HierarchyName=$2 and h.ParentIdx=$3\n>> Mar 1 09:58:09 gaia postgres[20126]: [25-2] DETAIL: parameters: $1 = 'testID', $2 = 'InsuranceHierarchy', $3 = '49292395'\n>> Mar 1 09:58:09 gaia postgres[20126]: [25-3] LOCATION: exec_execute_message, postgres.c:1988\n> \n> That's not the same query at all, and it may not be getting the same\n> plan. What you need to do to check the plan is to try PREPARE-ing\n> and EXPLAIN EXECUTE-ing the query with the same parameter symbols\n> as are actually used in the application-issued query.\n> \n> You might be entertained by the recent thread on -hackers about\n> \"Avoiding bad prepared-statement plans\" ...\n> \n> \t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 01 Mar 2010 18:48:31 +0100",
"msg_from": "Rainer Pruy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query slowing down significantly??"
},
{
"msg_contents": "Rainer Pruy <[email protected]> writes:\n> The prepared statement gives:\n> ...\n> And that is quite a bad plan given the current distribution of values.\n\nYeah. The planner really needs to know the actual parameter values in\norder to pick the best plan for this case.\n\nOne thing that you might be able to do to avoid giving up on prepared\nstatements entirely is to use an \"unnamed\" rather than named prepared\nstatement here. That will lead to the query plan being prepared only\nwhen the parameter values are made available, rather than in advance.\nIt'd depend on what client library you're using whether this is a simple\nchange or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Mar 2010 13:15:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slowing down significantly?? "
},
{
"msg_contents": "I'm already at it\n\nIt is a Java app, using jdbc, but through a proprietary persistence framework.\nI'm just busy evaluating the effects on the app of prohibiting prepared statements via jdbc.\nIf this is not worthwhile, I'm bound to some expensive reorganizations, sigh.\n\nNevertheless,\nthanks for your help\nin reminding me about obvious use of prepared statements.\n\nRainer\n\nPS:\nI've just read the thread on \"Avoiding bad prepared-statement plans\".\nVery interesting. Will track this...\n\n\nAm 01.03.2010 19:15, wrote Tom Lane:\n> Rainer Pruy <[email protected]> writes:\n>> The prepared statement gives:\n>> ...\n>> And that is quite a bad plan given the current distribution of values.\n> \n> Yeah. The planner really needs to know the actual parameter values in\n> order to pick the best plan for this case.\n> \n> One thing that you might be able to do to avoid giving up on prepared\n> statements entirely is to use an \"unnamed\" rather than named prepared\n> statement here. That will lead to the query plan being prepared only\n> when the parameter values are made available, rather than in advance.\n> It'd depend on what client library you're using whether this is a simple\n> change or not.\n> \n> \t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Mar 2010 19:49:07 +0100",
"msg_from": "Rainer Pruy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query slowing down significantly??"
},
{
"msg_contents": "Rainer Pruy wrote:\n> Thanks for the hint.\n> I should have been considering that in the first place.\n> (But the obvious is easily left unrecognised..)\n>\n> The prepared statement gives:\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..25.18 rows=2 width=175) (actual time=36.116..49.998 rows=1 loops=1)\n> -> Index Scan using x_context_01 on context c (cost=0.00..10.76 rows=2 width=67) (actual time=0.029..6.947 rows=12706 loops=1)\n> Index Cond: ((contextid)::text = $1)\n> -> Index Scan using x_fk_context_hierarchy_02 on context_hierarchy h (cost=0.00..7.20 rows=1 width=108) (actual time=0.003..0.003\n> rows=0 loops=12706)\n> Index Cond: (h.contextidx = c.idx)\n> Filter: (((h.hierarchyname)::text = $2) AND (h.parentidx = $3))\n> Total runtime: 50.064 ms\n> (7 rows)\n>\n>\n> And that is quite a bad plan given the current distribution of values.\n> \nAnother approach might be to rewrite recursion into your hierarchy with \nthe in 8.4 new WITH RECURSIVE option in sql queries. The possible gains \nthere are way beyond anything you can accomplish with optimizing \nrecursive functions.\n\nRegards,\nYeb Havinga\n\n",
"msg_date": "Mon, 01 Mar 2010 20:34:38 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slowing down significantly??"
},
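For illustration only, Yeb's WITH RECURSIVE suggestion could look roughly like this against the tables named in the thread; the column list and the anchor condition are assumptions based on the earlier queries, not the poster's real schema:

```sql
WITH RECURSIVE subtree AS (
    -- anchor: the starting node(s) of the walk
    SELECT h.idx, h.parentidx, h.contextidx, h.hierarchyname
    FROM context_hierarchy h
    WHERE h.hierarchyname = 'InsuranceHierarchy'
      AND h.parentidx = 49292395
  UNION ALL
    -- recursive step: descend the hierarchy in a single statement
    SELECT h.idx, h.parentidx, h.contextidx, h.hierarchyname
    FROM context_hierarchy h
    JOIN subtree s ON h.parentidx = s.idx
)
SELECT c.*, s.*
FROM subtree s
JOIN context c ON c.idx = s.contextidx;
```

One statement like this replaces a loop of per-level lookups, which is where the gain over repeated prepared-statement calls would come from.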
{
"msg_contents": "\n\nOn Mon, 1 Mar 2010, Rainer Pruy wrote:\n\n> It is a Java app, using jdbc, but through a proprietary persistence \n> framework. I'm just busy evaluating the effects on the app of \n> prohibiting prepared statements via jdbc. If this is not worthwhile, I'm \n> bound to some expensive reorganizations, sigh.\n\nYou can disable the named statement by adding the parameter \nprepareThreshold=0 to your connection URL.\n\nKris Jurka\n",
"msg_date": "Tue, 2 Mar 2010 17:46:19 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slowing down significantly??"
}
] |
[
{
"msg_contents": "When I use intervals in my query e.g col1 between current_timestamp -\ninterval '10 days' and current_timestamp...the optimizer checks ALL\npartitions whereas if I use col1 between 2 hardcoded dates..only\nthe applicable partitions are scanned.\n",
"msg_date": "Mon, 1 Mar 2010 11:29:26 -0800",
"msg_from": "Anj Adu <[email protected]>",
"msg_from_op": true,
"msg_subject": "partition pruning"
},
{
"msg_contents": "On Mon, Mar 1, 2010 at 2:29 PM, Anj Adu <[email protected]> wrote:\n> When I use intervals in my query e.g col1 between current_timestamp -\n> interval '10 days' and current_timestamp...the optimizer checks ALL\n> partitions whereas if I use col1 between 2 hardcoded dates..only\n> the applicable partitions are scanned.\n\nYep. This is one example of a more general principle:\nconstant-folding happens before planning, but anything more complex\nhas to wait until execution time. So the plan can't take into account\nthe value of current_timestamp in forming the plan.\n\nUnfortunately I don't think there's really any easy way around this:\nyou have to do select current_timestamp, current_timestamp - interval\n'10 days' first and then build & execute a new query.\n\n...Robert\n",
"msg_date": "Thu, 4 Mar 2010 17:40:02 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition pruning"
},
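A small sketch of the difference, using a made-up partitioned table name; only the literal form gives the planner constants it can compare against the partitions' CHECK constraints:

```sql
-- Prunes partitions: the bounds are constants at plan time.
SELECT count(*) FROM measurement
WHERE col1 BETWEEN DATE '2010-02-19' AND DATE '2010-03-01';

-- Scans every partition: the bounds are only computed at execution time.
SELECT count(*) FROM measurement
WHERE col1 BETWEEN current_timestamp - interval '10 days'
               AND current_timestamp;
```

The workaround Robert describes amounts to running something like SELECT now() - interval '10 days', now() first and interpolating the results as literals into a second query.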
{
"msg_contents": "On Thu, 2010-03-04 at 17:40 -0500, Robert Haas wrote:\n> On Mon, Mar 1, 2010 at 2:29 PM, Anj Adu <[email protected]> wrote:\n> > When I use intervals in my query e.g col1 between current_timestamp -\n> > interval '10 days' and current_timestamp...the optimizer checks ALL\n> > partitions whereas if I use col1 between 2 hardcoded dates..only\n> > the applicable partitions are scanned.\n> \n> Yep. This is one example of a more general principle:\n> constant-folding happens before planning, but anything more complex\n> has to wait until execution time. So the plan can't take into account\n> the value of current_timestamp in forming the plan.\n\nIt could, but it doesn't yet. Partition removal can take place in the\nexecutor and this is currently targeted for 9.1.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 08 Mar 2010 22:39:10 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition pruning"
}
] |
[
{
"msg_contents": "Anyone has any experience doing analytics with postgres. In particular if \n10K rpm drives are good enough vs using 15K rpm, over 24 drives. Price \ndifference is $3,000.\n\nRarely ever have more than 2 or 3 connections to the machine.\n\nSo far from what I have seen throughput is more important than TPS for the \nqueries we do. Usually we end up doing sequential scans to do \nsummaries/aggregates.\n",
"msg_date": "Tue, 02 Mar 2010 15:42:37 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "10K vs 15k rpm for analytics"
},
{
"msg_contents": "Francisco Reyes wrote:\n> Anyone has any experience doing analytics with postgres. In particular \n> if 10K rpm drives are good enough vs using 15K rpm, over 24 drives. \n> Price difference is $3,000.\n>\n> Rarely ever have more than 2 or 3 connections to the machine.\n>\n> So far from what I have seen throughput is more important than TPS for \n> the queries we do. Usually we end up doing sequential scans to do \n> summaries/aggregates.\n>\nWith 24 drives it'll probably be the controller that is the limiting \nfactor of bandwidth. Our HP SAN controller with 28 15K drives delivers \n170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0. So I'd \ngo for the 10K drives and put the saved money towards the controller (or \nmaybe more than one controller).\n\nregards,\nYeb Havinga\n",
"msg_date": "Tue, 02 Mar 2010 21:51:56 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Yeb Havinga wrote:\n> With 24 drives it'll probably be the controller that is the limiting \n> factor of bandwidth. Our HP SAN controller with 28 15K drives delivers \n> 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0. \n\nYou should be able to clear 1GB/s on sequential reads with 28 15K drives \nin a RAID10, given proper read-ahead adjustment. I get over 200MB/s out \nof the 3-disk RAID0 on my home server without even trying hard. Can you \nshare what HP SAN controller you're using?\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 02 Mar 2010 16:03:20 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Seconded .... these days even a single 5400rpm SATA drive can muster almost\n100MB/sec on a sequential read.\n\nThe benefit of 15K rpm drives is seen when you have a lot of small, random\naccesses from a working set that is too big to cache .... the extra\nrotational speed translates to an average reduction of about 1ms on a random\nseek and read from the media.\n\nCheers\nDave\n\nOn Tue, Mar 2, 2010 at 2:51 PM, Yeb Havinga <[email protected]> wrote:\n\n> Francisco Reyes wrote:\n>\n>> Anyone has any experience doing analytics with postgres. In particular if\n>> 10K rpm drives are good enough vs using 15K rpm, over 24 drives. Price\n>> difference is $3,000.\n>>\n>> Rarely ever have more than 2 or 3 connections to the machine.\n>>\n>> So far from what I have seen throughput is more important than TPS for the\n>> queries we do. Usually we end up doing sequential scans to do\n>> summaries/aggregates.\n>>\n>> With 24 drives it'll probably be the controller that is the limiting\n> factor of bandwidth. Our HP SAN controller with 28 15K drives delivers\n> 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0. So I'd go\n> for the 10K drives and put the saved money towards the controller (or maybe\n> more than one controller).\n>\n> regards,\n> Yeb Havinga\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSeconded .... these days even a single 5400rpm SATA drive can muster almost 100MB/sec on a sequential read.The benefit of 15K rpm drives is seen when you have a lot of small, random accesses from a working set that is too big to cache .... the extra rotational speed translates to an average reduction of about 1ms on a random seek and read from the media.\nCheersDaveOn Tue, Mar 2, 2010 at 2:51 PM, Yeb Havinga <[email protected]> wrote:\nFrancisco Reyes wrote:\n\nAnyone has any experience doing analytics with postgres. In particular if 10K rpm drives are good enough vs using 15K rpm, over 24 drives. Price difference is $3,000.\n\nRarely ever have more than 2 or 3 connections to the machine.\n\nSo far from what I have seen throughput is more important than TPS for the queries we do. Usually we end up doing sequential scans to do summaries/aggregates.\n\n\nWith 24 drives it'll probably be the controller that is the limiting factor of bandwidth. Our HP SAN controller with 28 15K drives delivers 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0. So I'd go for the 10K drives and put the saved money towards the controller (or maybe more than one controller).\n\nregards,\nYeb Havinga\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 2 Mar 2010 15:05:07 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 1:42 PM, Francisco Reyes <[email protected]> wrote:\n> Anyone has any experience doing analytics with postgres. In particular if\n> 10K rpm drives are good enough vs using 15K rpm, over 24 drives. Price\n> difference is $3,000.\n>\n> Rarely ever have more than 2 or 3 connections to the machine.\n>\n> So far from what I have seen throughput is more important than TPS for the\n> queries we do. Usually we end up doing sequential scans to do\n> summaries/aggregates.\n\nThen the real thing to compare is the speed of the drives for\nthroughput not rpm. Using older 15k drives would actually be slower\nthan some more modern 10k or even 7.2k drives.\n\nAnother issue would be whether or not to short stroke the drives. You\nmay find that short stroked 10k drives provide the same throughput for\nmuch less money. The 10krpm 2.5\" ultrastar C10K300 drives have a\nthroughput numbers of 143 to 88 Meg/sec, which is quite respectable,\nand you can put 24 into a 2U supermicro case and save rack space too.\nThe 15k 2.5\" ultrastar c15k147 drives are 159 to 116, only a bit\nfaster. And if short stroked the 10k drives should be competitive.\n",
"msg_date": "Tue, 2 Mar 2010 14:12:12 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, 2 Mar 2010, Francisco Reyes wrote:\n\n> Anyone has any experience doing analytics with postgres. In particular if 10K \n> rpm drives are good enough vs using 15K rpm, over 24 drives. Price difference \n> is $3,000.\n>\n> Rarely ever have more than 2 or 3 connections to the machine.\n>\n> So far from what I have seen throughput is more important than TPS for the \n> queries we do. Usually we end up doing sequential scans to do \n> summaries/aggregates.\n\nWith sequential scans you may be better off with the large SATA drives as \nthey fit more data per track and so give great sequential read rates.\n\nif you end up doing a lot of seeking to retreive the data, you may find \nthat you get a benifit from the faster drives.\n\nDavid Lang\n",
"msg_date": "Tue, 2 Mar 2010 13:14:52 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 2:14 PM, <[email protected]> wrote:\n> On Tue, 2 Mar 2010, Francisco Reyes wrote:\n>\n>> Anyone has any experience doing analytics with postgres. In particular if\n>> 10K rpm drives are good enough vs using 15K rpm, over 24 drives. Price\n>> difference is $3,000.\n>>\n>> Rarely ever have more than 2 or 3 connections to the machine.\n>>\n>> So far from what I have seen throughput is more important than TPS for the\n>> queries we do. Usually we end up doing sequential scans to do\n>> summaries/aggregates.\n>\n> With sequential scans you may be better off with the large SATA drives as\n> they fit more data per track and so give great sequential read rates.\n\nTrue, I just looked at the Hitachi 7200 RPM 2TB Ultrastar and it lists\nand average throughput of 134 Megabytes/second which is quite good.\nWhile seek time is about double that of a 15krpm drive, short stroking\ncan lower that quite a bit. Latency is still 2x as much, but there's\nnot much to do about that.\n",
"msg_date": "Tue, 2 Mar 2010 14:21:24 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Yeb Havinga writes:\n\n> With 24 drives it'll probably be the controller that is the limiting \n> factor of bandwidth.\n\nGoing with a 3Ware SAS controller.\n\n> Our HP SAN controller with 28 15K drives delivers \n> 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0. \n\n\nAlready have simmilar machine in house.\nWith RAID 1+0 Bonne++ reports around 400MB/sec sequential read.\n\n> go for the 10K drives and put the saved money towards the controller (or \n> maybe more than one controller).\n\nHave some external enclosures with 16 15Krpm drives. They are older 15K \nrpms, but they should be good enough. \n\nSince the 15K rpms usually have better Transanctions per second I will \nput WAL and indexes in the external enclosure.\n",
"msg_date": "Tue, 02 Mar 2010 16:28:35 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Scott Marlowe writes:\n\n> Then the real thing to compare is the speed of the drives for\n> throughput not rpm.\n\nIn a machine, simmilar to what I plan to buy, already in house 24 x 10K rpm \ngives me about 400MB/sec while 16 x 15K rpm (2 to 3 year old drives) gives \nme about 500MB/sec\n\n",
"msg_date": "Tue, 02 Mar 2010 16:30:29 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 1:51 PM, Yeb Havinga <[email protected]> wrote:\n> With 24 drives it'll probably be the controller that is the limiting factor\n> of bandwidth. Our HP SAN controller with 28 15K drives delivers 170MB/s at\n> maximum with raid 0 and about 155MB/s with raid 1+0. So I'd go for the 10K\n> drives and put the saved money towards the controller (or maybe more than\n> one controller).\n\nThat's horrifically bad numbers for that many drives. I can get those\nnumbers for write performance on a RAID-6 on our office server. I\nwonder what's making your SAN setup so slow?\n",
"msg_date": "Tue, 2 Mar 2010 14:34:49 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "[email protected] writes:\n\n> With sequential scans you may be better off with the large SATA drives as \n> they fit more data per track and so give great sequential read rates.\n\nI lean more towards SAS because of writes.\nOne common thing we do is create temp tables.. so a typical pass may be:\n* sequential scan\n* create temp table with subset\n* do queries against subset+join to smaller tables.\n\nI figure the concurrent read/write would be faster on SAS than on SATA. I am \ntrying to move to having an external enclosure (we have several not in use \nor about to become free) so I could separate the read and the write of the \ntemp tables.\n\nLastly, it is likely we are going to do horizontal partitioning (ie master \nall data in one machine, replicate and then change our code to read parts of \ndata from different machine) and I think at that time the better drives will \ndo better as we have more concurrent queries.\n\n",
"msg_date": "Tue, 02 Mar 2010 16:36:21 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
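The pass described above (sequential scan, temp table with a subset, then joins to smaller tables) might look something like the sketch below; every table and column name here is invented for illustration:

```sql
-- 1. One sequential scan of the big table builds the working subset.
CREATE TEMP TABLE recent_facts AS
SELECT fact_id, account_id, amount, created_at
FROM facts
WHERE created_at >= DATE '2010-02-01';

-- 2. Give the planner statistics on the subset before querying it.
ANALYZE recent_facts;

-- 3. Summaries run against the much smaller temp table plus dimension tables.
SELECT a.region, sum(f.amount) AS total_amount
FROM recent_facts f
JOIN accounts a ON a.account_id = f.account_id
GROUP BY a.region;
```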
{
"msg_contents": "Greg Smith writes:\n\n> in a RAID10, given proper read-ahead adjustment. I get over 200MB/s out \n> of the 3-disk RAID0\n\nAny links/suggested reads on read-ahead adjustment? It will probably be OS \ndependant, but any info would be usefull.\n\n",
"msg_date": "Tue, 02 Mar 2010 16:44:10 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 2:30 PM, Francisco Reyes <[email protected]> wrote:\n> Scott Marlowe writes:\n>\n>> Then the real thing to compare is the speed of the drives for\n>> throughput not rpm.\n>\n> In a machine, simmilar to what I plan to buy, already in house 24 x 10K rpm\n> gives me about 400MB/sec while 16 x 15K rpm (2 to 3 year old drives) gives\n> me about 500MB/sec\n\nHave you tried short stroking the drives to see how they compare then?\n Or is the reduced primary storage not a valid path here?\n\nWhile 16x15k older drives doing 500Meg seems only a little slow, the\n24x10k drives getting only 400MB/s seems way slow. I'd expect a\nRAID-10 of those to read at somewhere in or just past the gig per\nsecond range with a fast pcie (x8 or x16 or so) controller. You may\nfind that a faster controller with only 8 or so fast and large SATA\ndrives equals the 24 10k drives you're looking at now. I can write at\nabout 300 to 350 Megs a second on a slower Areca 12xx series\ncontroller and 8 2TB Western Digital Green drives, which aren't even\nmade for speed.\n",
"msg_date": "Tue, 2 Mar 2010 14:47:25 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Greg Smith writes:\n\n> in a RAID10, given proper read-ahead adjustment. I get over 200MB/s out \n> of the 3-disk RAID0 on my home server without even trying hard. Can you \n\nAny links/suggested reading on \"read-ahead adjustment\". I understand this \nmay be OS specific, but any info would be helpfull.\n\nCurrently have 24 x 10K rpm drives and only getting about 400MB/sec.\n",
"msg_date": "Tue, 02 Mar 2010 16:53:09 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Scott Marlowe writes:\n\n> Have you tried short stroking the drives to see how they compare then?\n> Or is the reduced primary storage not a valid path here?\n\nNo, have not tried it. By the time I got the machine we needed it in \nproduction so could not test anything.\n\nWhen the 2 new machines come I should hopefully have time to try a few \nstrategies, including RAID0, to see what is the best setup for our needs.\n \n> RAID-10 of those to read at somewhere in or just past the gig per\n> second range with a fast pcie (x8 or x16 or so) controller.\n\nThanks for the info. Contacted the vendor to see what pcie speed is the \ncontroller connected to, specially since we are considering getting 2 more \nmachines from them.\n\n> drives equals the 24 10k drives you're looking at now. I can write at\n> about 300 to 350 Megs a second on a slower Areca 12xx series\n> controller and 8 2TB Western Digital Green drives, which aren't even\n\nHow about read spead?\n",
"msg_date": "Tue, 02 Mar 2010 17:04:02 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, 2 Mar 2010, Scott Marlowe wrote:\n\n> On Tue, Mar 2, 2010 at 2:30 PM, Francisco Reyes <[email protected]> wrote:\n>> Scott Marlowe writes:\n>>\n>>> Then the real thing to compare is the speed of the drives for\n>>> throughput not rpm.\n>>\n>> In a machine, simmilar to what I plan to buy, already in house 24 x 10K rpm\n>> gives me about 400MB/sec while 16 x 15K rpm (2 to 3 year old drives) gives\n>> me about 500MB/sec\n>\n> Have you tried short stroking the drives to see how they compare then?\n> Or is the reduced primary storage not a valid path here?\n>\n> While 16x15k older drives doing 500Meg seems only a little slow, the\n> 24x10k drives getting only 400MB/s seems way slow. I'd expect a\n> RAID-10 of those to read at somewhere in or just past the gig per\n> second range with a fast pcie (x8 or x16 or so) controller. You may\n> find that a faster controller with only 8 or so fast and large SATA\n> drives equals the 24 10k drives you're looking at now. I can write at\n> about 300 to 350 Megs a second on a slower Areca 12xx series\n> controller and 8 2TB Western Digital Green drives, which aren't even\n> made for speed.\n\nwhat filesystem is being used. There is a thread on the linux-kernel \nmailing list right now showing that ext4 seems to top out at ~360MB/sec \nwhile XFS is able to go to 500MB/sec+\n\non single disks the disk performance limits you, but on arrays where the \ndisk performance is higher there may be other limits you are running into.\n\nDavid Lang\n",
"msg_date": "Tue, 2 Mar 2010 14:10:05 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "[email protected] writes:\n\n> what filesystem is being used. There is a thread on the linux-kernel \n> mailing list right now showing that ext4 seems to top out at ~360MB/sec \n> while XFS is able to go to 500MB/sec+\n\nEXT3 on Centos 5.4\n\nPlan to try and see if I have time with the new machines to try FreeBSD+ZFS. \nZFS supposedly makes good use of memory and the new machines will have 72GB \nof RAM.\n",
"msg_date": "Tue, 02 Mar 2010 17:26:54 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Scott Marlowe writes:\n\n> While 16x15k older drives doing 500Meg seems only a little slow, the\n> 24x10k drives getting only 400MB/s seems way slow. I'd expect a\n> RAID-10 of those to read at somewhere in or just past the gig per\n\nTalked to the vendor. The likely issue is the card. They used a single card \nwith an expander and the card also has an external enclosure through an \nexterrnal port.\n\nThey have some ideas which they are going to test and report back.. since we \nare in the process of getting 2 more machines from them.\n\nThey believe that by splitting the internal drives into one controller and \nthe external into a second controller that performance should go up. They \nwill report back some numbers. Will post them to the list when I get the \ninfo.\n",
"msg_date": "Tue, 02 Mar 2010 18:38:22 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Francisco Reyes wrote:\n> Going with a 3Ware SAS controller.\n> Already have simmilar machine in house.\n> With RAID 1+0 Bonne++ reports around 400MB/sec sequential read.\n\nIncrease read-ahead and I'd bet you can add 50% to that easy--one area \nthe 3Ware controllers need serious help, as they admit: \nhttp://www.3ware.com/kb/article.aspx?id=11050 Just make sure you ignore \ntheir dirty_ratio comments--those are completely the opposite of what \nyou want on a database app. Still seems on the low side though.\n\nShort stroke and you could probably chop worst-case speeds in half too, \non top of that.\n\nNote that 3Ware's controllers have seriously limited reporting on drive \ndata when using SAS drives because they won't talk SMART to them: \nhttp://www.3ware.com/KB/Article.aspx?id=15383 I consider them still a \nuseful vendor for SATA controllers, but would never buy a SAS solution \nfrom them again until this is resolved.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 02 Mar 2010 18:44:20 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Francisco Reyes wrote:\n> Anyone has any experience doing analytics with postgres. In particular \n> if 10K rpm drives are good enough vs using 15K rpm, over 24 drives. \n> Price difference is $3,000.\n> Rarely ever have more than 2 or 3 connections to the machine.\n> So far from what I have seen throughput is more important than TPS for \n> the queries we do. Usually we end up doing sequential scans to do \n> summaries/aggregates.\n\nFor arrays this size, the first priority is to sort out what controller \nyou're going to get, whether it can keep up with the array size, and how \nyou're going to support/monitor it. Once you've got all that nailed \ndown, if you still have the option of 10K vs. 15K the trade-offs are \npretty simple:\n\n-10K drives are cheaper\n-15K drives will commit and seek faster. If you have a battery-backed \ncontroller, commit speed doesn't matter very much.\n\nIf you only have 2 or 3 connections, I can't imagine that the improved \nseek times of the 15K drives will be a major driving factor. As already \nsuggested, 10K drives tend to be larger and can be extremely fast on \nsequential workloads, particularly if you short-stroke them and stick to \nputting the important stuff on the fast part of the disk.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 02 Mar 2010 18:50:02 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 4:50 PM, Greg Smith <[email protected]> wrote:\n> If you only have 2 or 3 connections, I can't imagine that the improved seek\n> times of the 15K drives will be a major driving factor. As already\n> suggested, 10K drives tend to be larger and can be extremely fast on\n> sequential workloads, particularly if you short-stroke them and stick to\n> putting the important stuff on the fast part of the disk.\n\nThe thing I like most about short stroking 7200RPM 1 to 2 TB drives is\nthat you get great performance on one hand, and a ton of left over\nstorage for backups and stuff. And honestly, you can't have enough\nextra storage laying about when working on databases.\n",
"msg_date": "Tue, 2 Mar 2010 16:56:46 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Scott Marlowe wrote:\n> True, I just looked at the Hitachi 7200 RPM 2TB Ultrastar and it lists\n> and average throughput of 134 Megabytes/second which is quite good.\n> \n\nYeah, but have you tracked the reliability of any of the 2TB drives out \nthere right now? They're terrible. I wouldn't deploy anything more \nthan a 1TB drive right now in a server, everything with a higher \ncapacity is still on the \"too new to be stable yet\" side of the fence to me.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 02 Mar 2010 18:57:19 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 4:57 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> True, I just looked at the Hitachi 7200 RPM 2TB Ultrastar and it lists\n>> and average throughput of 134 Megabytes/second which is quite good.\n>>\n>\n> Yeah, but have you tracked the reliability of any of the 2TB drives out\n> there right now? They're terrible. I wouldn't deploy anything more than a\n> 1TB drive right now in a server, everything with a higher capacity is still\n> on the \"too new to be stable yet\" side of the fence to me.\n\nWe've had REAL good luck with the WD green and black drives. Out of\nabout 35 or so drives we've had two failures in the last year, one of\neach black and green. The Seagate SATA drives have been horrific for\nus, with a 30% failure rate in the last 8 or so months. We only have\nsomething like 10 of the Seagates, so the sample's not as big as the\nWDs. Note that we only use the supposed \"enterprise\" class drives\nfrom each manufacturer.\n\nWe just got a shipment of 8 1.5TB Seagates so I'll keep you informed\nof the failure rate of those drives. Wouldn't be surprised to see 1\nor 2 die in the first few months tho.\n",
"msg_date": "Tue, 2 Mar 2010 17:36:35 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Scott Marlowe wrote:\n> We've had REAL good luck with the WD green and black drives. Out of\n> about 35 or so drives we've had two failures in the last year, one of\n> each black and green.\n\nI've been happy with almost all the WD Blue drives around here (have \nabout a dozen in service for around two years), with the sole exception \nthat the one drive I did have go bad has turned into a terrible liar. \nRefuses to either acknowledge it's broken and produce an RMA code, or to \nwork. At least the Seagate and Hitachi drives are honest about being \nborked when once they've started producing heavy SMART errors. I have \nenough redundancy to deal with failure, but can't tolerate dishonesty \none bit.\n\nThe Blue drives are of course regular crappy consumer models though, so \nthis is not necessarily indicative of how the Green/Black drives work.\n\n> The Seagate SATA drives have been horrific for\n> us, with a 30% failure rate in the last 8 or so months. We only have\n> something like 10 of the Seagates, so the sample's not as big as the\n> WDs. Note that we only use the supposed \"enterprise\" class drives\n> from each manufacturer.\n>\n> We just got a shipment of 8 1.5TB Seagates so I'll keep you informed\n> of the failure rate of those drives. Wouldn't be surprised to see 1\n> or 2 die in the first few months tho.\n> \n\nGood luck with those--the consumer version of Seagate's 1.5TB drives \nhave been perhaps the worst single drive model on the market over the \nlast year. Something got seriously misplaced when they switched their \nmanufacturing facility from Singapore to Thailand a few years ago, and \nnow that the old plant is gone: \nhttp://www.theregister.co.uk/2009/08/04/seagate_closing_singapore_plant/ \nI don't expect them to ever recover from that. \n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 02 Mar 2010 20:03:39 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 6:03 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> We've had REAL good luck with the WD green and black drives. Out of\n>> about 35 or so drives we've had two failures in the last year, one of\n>> each black and green.\n>\n> I've been happy with almost all the WD Blue drives around here (have about a\n> dozen in service for around two years), with the sole exception that the one\n> drive I did have go bad has turned into a terrible liar. Refuses to either\n> acknowledge it's broken and produce an RMA code, or to work. At least the\n> Seagate and Hitachi drives are honest about being borked when once they've\n> started producing heavy SMART errors. I have enough redundancy to deal with\n> failure, but can't tolerate dishonesty one bit.\n\nTime to do the ESD shuffle I think.\n\n>> The Seagate SATA drives have been horrific for\n>> us, with a 30% failure rate in the last 8 or so months. We only have\n>> something like 10 of the Seagates, so the sample's not as big as the\n>> WDs. Note that we only use the supposed \"enterprise\" class drives\n>> from each manufacturer.\n>>\n>> We just got a shipment of 8 1.5TB Seagates so I'll keep you informed\n>> of the failure rate of those drives. Wouldn't be surprised to see 1\n>> or 2 die in the first few months tho.\n>>\n>\n> Good luck with those--the consumer version of Seagate's 1.5TB drives have\n> been perhaps the worst single drive model on the market over the last year.\n> Something got seriously misplaced when they switched their manufacturing\n> facility from Singapore to Thailand a few years ago, and now that the old\n> plant is gone:\n> http://www.theregister.co.uk/2009/08/04/seagate_closing_singapore_plant/ I\n> don't expect them to ever recover from that.\n\nYeah, I've got someone upstream in my chain of command who's a huge\nfan of seacrates, so that's how we got those 1.5TB drives. Our 15k5\nseagates have been great, with 2 failures in 32 drives in 1.5 years of\nvery heavy use. All our seagate SATAs, whether 500G or 2TB have been\nthe problem children. I've pretty much given up on Seagate SATA\ndrives. The new seagates we got are the consumer 7200.11 drives, but\nat least they have the latest firmware and all.\n",
"msg_date": "Tue, 2 Mar 2010 18:17:47 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Scott Marlowe wrote:\n> Time to do the ESD shuffle I think.\n> \n\nNah, I keep the crazy drive around as an interesting test case. Fun to \nsee what happens when I connect to a RAID card; very informative about \nhow thorough the card's investigation of the drive is.\n\n> Our 15k5\n> seagates have been great, with 2 failures in 32 drives in 1.5 years of\n> very heavy use. All our seagate SATAs, whether 500G or 2TB have been\n> the problem children. I've pretty much given up on Seagate SATA\n> drives. The new seagates we got are the consumer 7200.11 drives, but\n> at least they have the latest firmware and all.\n> \n\nWell, what I was pointing out was that all the 15K drives used to come \nout of this plant in Singapore, which is also where their good consumer \ndrives used to come from too during the 2003-2007ish period where all \ntheir products were excellent. Then they moved the consumer production \nto this new location in Thailand, and all of the drives from there have \nbeen total junk. And as of August they closed the original plant, which \nhad still been making the enterprise drives, altogether. So now you can \nexpect the 15K drives to come from the same known source of garbage \ndrives as everything else they've made recently, rather than the old, \nreliable plant.\n\nI recall the Singapore plant sucked for a while when it got started in \nthe mid 90's too, so maybe this Thailand one will eventually get their \nissues sorted out. It seems like you can't just move a hard drive plant \nsomewhere and have the new one work without a couple of years of \npractice first, I keep seeing this pattern repeat.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 02 Mar 2010 21:19:43 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Greg Smith writes:\n\n> http://www.3ware.com/KB/Article.aspx?id=15383 I consider them still a \n> useful vendor for SATA controllers, but would never buy a SAS solution \n> from them again until this is resolved.\n\n\nWho are you using for SAS?\nOne thing I like about 3ware is their management utility works under both \nFreeBSD and Linux well.\n",
"msg_date": "Tue, 02 Mar 2010 21:44:02 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, Mar 2, 2010 at 7:44 PM, Francisco Reyes <[email protected]> wrote:\n> Greg Smith writes:\n>\n>> http://www.3ware.com/KB/Article.aspx?id=15383 I consider them still a\n>> useful vendor for SATA controllers, but would never buy a SAS solution from\n>> them again until this is resolved.\n>\n>\n> Who are you using for SAS?\n> One thing I like about 3ware is their management utility works under both\n> FreeBSD and Linux well.\n\nThe non-open source nature of the command line tool for Areca makes me\navoid their older cards. The 1680 has it's own ethernet wtih a web\ninterface with snmp that is independent of the OS. This means that\nwith something like a hung / panicked kernel, you can still check out\nthe RAID array and check rebuild status and other stuff. We get a\nhang about every 180 to 460 days with them where the raid driver in\nlinux hangs with the array going off-line. It's still there to the\nweb interface on its own NIC. Newer kernels seem to trigger the\nfailure far more often, once every 1 to 2 weeks, two months on the\noutside. The driver guy from Areca is supposed to be working on the\ndriver for linux, so we'll see if it gets fixed. It's pretty stable\non a RHEL 5.2 kernel, on anything after that I've tested, it'll hang\nevery week or two. So I run RHEL 5.latest with a 5.2 kernel and it\nworks pretty well. Note that this is a pretty heavily used machine\nwith enough access going through 12 drives to use about 30% IOwait,\n50% user, 10% sys at peak midday. Load factor 7 to 15. And they run\nreally ultra-smooth between these hangs. They come back up\nuncorrupted, every time, every plug pull test etc. Other than the\noccasional rare hang, they're perfect.\n",
"msg_date": "Tue, 2 Mar 2010 20:26:58 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Greg Smith wrote:\n> Yeb Havinga wrote:\n>> With 24 drives it'll probably be the controller that is the limiting \n>> factor of bandwidth. Our HP SAN controller with 28 15K drives \n>> delivers 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0. \n>\n> You should be able to clear 1GB/s on sequential reads with 28 15K \n> drives in a RAID10, given proper read-ahead adjustment. I get over \n> 200MB/s out of the 3-disk RAID0 on my home server without even trying \n> hard. Can you share what HP SAN controller you're using?\nYeah I should have mentioned a bit more, to allow for a better picture \nof the apples and pears.\n\nController a is the built in controller of the HP MSA1000 SAN - with 14 \ndisks but with extra 14 disks from a MSA30. It is connected through a \n2Gbit/s fibrechannel adapter - should give up to roughly 250MB/s \nbandwidth, maybe a bit less due to frame overhead and gib/gb difference.\nController has 256MB cache\n\nIt is three years old, however HP still sells it.\n\nI performed a few dozen tests with oracle's free and standalone orion \ntool (http://www.oracle.com/technology/software/tech/orion/index.html) \nwith different raid and controller settings, where I varied\n- controller read/write cache ratio\n- logical unit layout (like one big raidx, 3 luns with raid10 (giving \nstripe width of 4 disks and 4 hot spares), 7 luns with raid10\n- stripe size set to maximum\n- load type (random or sequential large io)\n- linux io scheduler (deadline / cfq etc)\n- fibre channel adapter queue depth\n- ratio between reads and writes by the orion - our production \napplication has about 25% writes.\n- I also did the short stroking that is talked about further in this \nthread by only using one partition of about 30% size on each disk.\n- etc\n\nMy primary goal was large IOPS for our typical load: mostly OLTP.\n\nThe orion tool tests in a matrix with on one axis the # concurrent small \nio's and the other axis the # concurrent large io's. It output numbers \nare also in a matrix, with MBps, iops and latency.\n\n I put several of these numbers in matlab to produce 3d pictures and \nthat showed some interesting stuff - its probably bad netiquette here to \npost a one of those pictures. One of the striking things was that I \ncould see something that looked like a mountain where the top was \nneatly cut of - my guess: controller maximum reached.\n\nBelow is the output data of a recent test, where a 4Gbit/s fc adapter \nwas connected. From this numbers I conclude that in our setup, the \ncontroller is maxed out at 155MB/s for raid 1+0 *with this setup*. In a \ntest run I constructed to try and see what the maximum mbps of the \ncontroller would be: 100% reads, sequential large io - that went to 170MBps.\n\nI'm particularly proud of the iops of this test. Please note: large load \nis random, not sequential!\n\nSo to come back at my original claim: controller is important when you \nhave 24 disks. I believe I have backed up this claim by this mail. Also \nplease take notice that for our setup, a database that has a lot of \nconcurrent users on a medium size database (~=160GB), random IO is what \nwe needed, and for this purpose the HP MSA has proved rock solid. But \nthe setup that Francisco mentioned is different: a few users doing \nmostly sequential IO. 
For that load, our setup is far from optimal, \nmainly because of the (single) controller.\n\nregards,\nYeb Havinga\n\n\nORION VERSION 10.2.0.1.0\n\nCommandline:\n-run advanced -testname r10-7 -num_disks 24 -size_small 4 -size_large \n1024 -type rand -simulate concat -verbose -write 25 -duration 15 -matrix \ndetailed -cache_size 256\n\nThis maps to this test:\nTest: r10-7\nSmall IO size: 4 KB\nLarge IO size: 1024 KB\nIO Types: Small Random IOs, Large Random IOs\nSimulated Array Type: CONCAT\nWrite: 25%\nCache Size: 256 MB\nDuration for each Data Point: 15 seconds\nSmall Columns:, 0, 1, 2, 3, 4, 5, \n6, 12, 18, 24, 30, 36, 42, 48, 54, \n60, 66, 72, 78, 84, 90, 96, 102, 108, \n114, 120\nLarge Columns:, 0, 1, 2, 3, 4, 8, \n12, 16, 20, 24, 28, 32, 36, 40, 44, 48\nTotal Data Points: 416\n\nName: /dev/sda1 Size: 72834822144\nName: /dev/sdb1 Size: 72834822144\nName: /dev/sdc1 Size: 72834822144\nName: /dev/sdd1 Size: 72834822144\nName: /dev/sde1 Size: 72834822144\nName: /dev/sdf1 Size: 72834822144\nName: /dev/sdg1 Size: 72834822144\n7 FILEs found.\n\nMaximum Large MBPS=155.05 @ Small=2 and Large=48\nMaximum Small IOPS=6261 @ Small=120 and Large=0\nMinimum Small Latency=3.93 @ Small=1 and Large=0\n\nBelow the MBps matrix - hope this reads well in email clients??\n\nLarge/Small, 0, 1, 2, 3, 4, 5, 6, \n12, 18, 24, 30, 36, 42, 48, 54, 60, \n66, 72, 78, 84, 90, 96, 102, 108, 114, 120\n 1, 76.60, 74.87, 73.24, 70.66, 70.45, 68.36, 67.58, \n59.63, 54.94, 50.74, 44.65, 41.24, 37.31, 35.85, 35.05, 32.53, \n29.01, 30.64, 30.39, 27.41, 26.19, 25.43, 24.17, 24.10, 22.96, \n22.39\n 2, 114.19, 115.65, 113.65, 112.11, 111.31, 109.77, 108.57, \n101.81, 95.25, 86.74, 83.48, 76.12, 70.82, 68.98, 62.85, 63.75, \n57.36, 56.28, 52.78, 50.37, 47.96, 48.53, 46.82, 44.47, 45.09, \n42.53\n 3, 135.41, 135.21, 134.20, 134.27, 133.78, 132.62, 131.03, \n127.08, 121.25, 114.15, 109.51, 104.28, 98.66, 94.91, 91.95, 86.27, \n82.99, 79.28, 76.09, 74.26, 71.60, 67.83, 67.94, 64.55, 65.39, \n63.23\n 4, 144.30, 143.93, 145.00, 144.47, 143.49, 142.56, 142.23, \n139.14, 135.64, 131.82, 128.82, 124.51, 121.88, 116.16, 112.13, 107.91, \n105.63, 101.54, 99.06, 93.50, 90.35, 87.25, 86.98, 83.57, 83.45, \n79.73\n 8, 152.93, 152.87, 152.60, 152.29, 152.36, 152.16, 151.85, \n151.11, 150.00, 149.09, 148.18, 147.40, 146.09, 145.21, 144.94, 143.82, \n142.90, 141.43, 140.93, 140.08, 137.83, 136.95, 136.17, 133.69, 134.05, \n131.85\n 12, 154.10, 153.83, 154.07, 153.79, 154.03, 153.35, 153.09, \n152.41, 152.14, 151.32, 151.49, 150.68, 150.10, 149.69, 149.19, 148.07, \n148.00, 147.90, 146.78, 146.57, 145.79, 144.96, 145.21, 144.23, 143.58, \n142.59\n 16, 154.30, 154.40, 153.71, 153.96, 154.13, 154.13, 153.58, \n153.24, 152.97, 152.86, 152.29, 151.95, 151.57, 150.68, 150.85, 150.44, \n150.03, 149.59, 149.15, 149.01, 148.29, 147.89, 147.44, 147.41, 146.79, \n146.55\n 20, 154.70, 154.53, 154.33, 154.12, 154.05, 154.29, 154.05, \n153.84, 152.87, 153.26, 153.02, 152.64, 152.37, 151.99, 151.65, 151.44, \n150.89, 150.89, 150.69, 150.34, 149.90, 149.59, 149.38, 149.31, 148.76, \n148.35\n 24, 154.31, 154.34, 154.28, 154.31, 154.21, 154.39, 154.07, \n153.80, 153.80, 153.17, 153.28, 152.83, 152.59, 152.66, 151.97, 152.00, \n151.66, 151.17, 150.79, 151.10, 150.62, 150.52, 150.17, 149.93, 149.79, \n149.27\n 28, 154.62, 154.48, 154.34, 154.70, 154.48, 154.31, 154.44, \n153.92, 153.82, 153.72, 153.54, 153.23, 152.88, 152.29, 152.23, 152.43, \n151.84, 151.70, 151.32, 151.56, 150.87, 150.87, 150.90, 150.31, 150.63, \n150.03\n 32, 154.58, 154.33, 154.90, 154.40, 
154.51, 154.44, 154.41, \n154.08, 154.30, 154.02, 153.53, 153.50, 153.35, 153.01, 152.83, 152.83, \n152.41, 152.16, 152.06, 151.99, 151.75, 151.29, 151.12, 151.47, 151.22, \n150.77\n 36, 154.67, 154.46, 154.43, 154.25, 154.60, 154.96, 154.25, \n154.25, 154.15, 154.00, 153.83, 153.45, 153.16, 153.23, 152.74, 152.66, \n152.49, 152.57, 152.28, 152.53, 151.79, 151.40, 151.23, 151.30, 151.19, \n151.20\n 40, 154.27, 154.67, 154.63, 154.74, 154.17, 154.31, 154.82, \n154.24, 154.67, 154.35, 153.81, 153.82, 153.89, 153.29, 153.18, 152.97, \n153.18, 152.72, 152.69, 151.94, 151.80, 151.69, 152.12, 151.59, 151.31, \n151.52\n 44, 154.37, 154.59, 154.51, 154.66, 154.88, 154.58, 154.26, \n154.29, 153.83, 154.38, 153.84, 153.66, 153.55, 153.23, 153.02, 153.20, \n152.70, 152.67, 152.88, 152.53, 152.67, 152.13, 152.10, 152.06, 151.53, \n151.45\n 48, 154.61, 154.83, 155.05, 154.65, 154.47, 154.97, 154.29, \n154.40, 154.33, 154.29, 154.00, 154.01, 153.71, 153.47, 153.58, 153.50, \n153.15, 152.50, 153.08, 152.83, 152.40, 152.04, 151.46, 152.29, 152.11, \n151.43\n\nbelow the iops matrix\n\nLarge/Small, 1, 2, 3, 4, 5, 6, 12, \n18, 24, 30, 36, 42, 48, 54, 60, 66, \n72, 78, 84, 90, 96, 102, 108, 114, 120\n 0, 254, 502, 751, 960, 1177, 1388, 2343, \n3047, 3557, 3945, 4247, 4529, 4752, 4953, 5111, 5280, \n5412, 5550, 5670, 5785, 5904, 5987, 6093, 6167, 6261\n 1, 178, 353, 526, 684, 832, 999, 1801, \n2445, 2937, 3382, 3742, 4054, 4262, 4489, 4685, 4910, \n5030, 5139, 5312, 5439, 5549, 5685, 5760, 5873, 5953\n 2, 122, 240, 364, 484, 605, 715, 1342, \n1907, 2416, 2808, 3208, 3526, 3789, 4072, 4217, 4477, \n4629, 4840, 4964, 5187, 5242, 5381, 5490, 5543, 5704\n 3, 84, 167, 253, 337, 420, 510, 990, \n1486, 1924, 2332, 2692, 3035, 3272, 3578, 3838, 4048, \n4260, 4426, 4607, 4760, 4948, 4989, 5164, 5216, 5335\n 4, 61, 120, 180, 236, 303, 368, 732, \n1086, 1445, 1780, 2144, 2434, 2771, 3092, 3342, 3576, \n3793, 4000, 4165, 4376, 4554, 4703, 4805, 4847, 5062\n 8, 24, 49, 73, 100, 122, 152, 303, \n448, 614, 759, 889, 1043, 1201, 1325, 1489, 1647, \n1800, 1948, 2116, 2291, 2434, 2594, 2824, 2946, 3124\n 12, 15, 30, 45, 62, 76, 90, 188, \n275, 366, 462, 543, 638, 726, 814, 906, 978, \n1055, 1151, 1245, 1341, 1425, 1488, 1566, 1688, 1759\n 16, 10, 23, 32, 44, 55, 66, 130, \n198, 259, 328, 387, 450, 519, 580, 643, 706, \n767, 834, 891, 964, 1029, 1083, 1141, 1206, 1263\n 20, 8, 17, 25, 34, 41, 50, 102, \n152, 201, 255, 302, 350, 402, 447, 496, 554, \n591, 645, 688, 746, 791, 844, 882, 934, 984\n 24, 6, 13, 21, 28, 35, 41, 85, \n123, 166, 206, 250, 288, 326, 377, 410, 451, \n497, 531, 568, 610, 660, 694, 732, 772, 814\n 28, 6, 12, 17, 23, 29, 35, 70, \n106, 142, 174, 210, 247, 279, 325, 348, 378, \n419, 453, 487, 523, 553, 586, 627, 651, 691\n 32, 5, 10, 15, 20, 26, 31, 61, \n92, 120, 154, 182, 212, 245, 274, 309, 336, \n368, 395, 429, 452, 488, 514, 542, 581, 605\n 36, 4, 9, 13, 18, 22, 27, 56, \n83, 110, 138, 166, 193, 222, 248, 279, 302, \n333, 358, 385, 414, 438, 468, 496, 523, 551\n 40, 4, 8, 12, 17, 21, 25, 50, \n77, 103, 127, 155, 184, 205, 236, 256, 285, \n315, 341, 362, 387, 418, 442, 468, 492, 518\n 44, 4, 8, 11, 15, 20, 24, 49, \n73, 98, 123, 151, 173, 197, 225, 248, 269, \n294, 329, 349, 373, 390, 428, 438, 469, 498\n 48, 3, 7, 11, 15, 20, 23, 47, \n70, 95, 120, 141, 166, 192, 212, 237, 260, \n282, 308, 329, 353, 378, 400, 424, 450, 468\n\n\n\n",
"msg_date": "Wed, 03 Mar 2010 10:05:40 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Tue, Mar 2, 2010 at 1:51 PM, Yeb Havinga <[email protected]> wrote:\n> \n>> With 24 drives it'll probably be the controller that is the limiting factor\n>> of bandwidth. Our HP SAN controller with 28 15K drives delivers 170MB/s at\n>> maximum with raid 0 and about 155MB/s with raid 1+0. So I'd go for the 10K\n>> drives and put the saved money towards the controller (or maybe more than\n>> one controller).\n>> \n>\n> That's horrifically bad numbers for that many drives. I can get those\n> numbers for write performance on a RAID-6 on our office server. I\n> wonder what's making your SAN setup so slow?\n> \nPre scriptum:\nA few minutes ago I mailed detailed information in the same thread but \nas reply to an earlier response - it tells more about setup and gives \nresults of a raid1+0 test.\n\nI just have to react to \"horrifically bad\" and \"slow\" :-) : The HP san \ncan do raid5 on 28 disks also on about 155MBps:\n\n28 disks devided into 7 logical units with raid5, orion results are \nbelow. Please note that this time I did sequential large io. The mixed \nread/write MBps maximum here is comparable: around 155MBps.\n\nregards,\nYeb Havinga\n\n\nORION VERSION 10.2.0.1.0\n\nCommandline:\n-run advanced -testname msa -num_disks 24 -size_small 4 -size_large 1024 \n-type seq -simulate concat -verbose -write 50 -duration 15 -matrix \ndetailed -cache_size 256\n\nThis maps to this test:\nTest: msa\nSmall IO size: 4 KB\nLarge IO size: 1024 KB\nIO Types: Small Random IOs, Large Sequential Streams\nNumber of Concurrent IOs Per Stream: 4\nForce streams to separate disks: No\nSimulated Array Type: CONCAT\nWrite: 50%\nCache Size: 256 MB\nDuration for each Data Point: 15 seconds\nSmall Columns:, 0, 1, 2, 3, 4, 5, \n6, 12, 18, 24, 30, 36, 42, 48, 54, \n60, 66, 72, 78, 84, 90, 96, 102, 108, \n114, 120\nLarge Columns:, 0, 1, 2, 3, 4, 8, \n12, 16, 20, 24, 28, 32, 36, 40, 44, 48\nTotal Data Points: 416\n\nName: /dev/sda1 Size: 109256361984\nName: /dev/sdb1 Size: 109256361984\nName: /dev/sdc1 Size: 109256361984\nName: /dev/sdd1 Size: 109256361984\nName: /dev/sde1 Size: 109256361984\nName: /dev/sdf1 Size: 109256361984\nName: /dev/sdg1 Size: 109256361984\n7 FILEs found.\n\nMaximum Large MBPS=157.75 @ Small=0 and Large=1\nMaximum Small IOPS=3595 @ Small=66 and Large=1\nMinimum Small Latency=2.81 @ Small=1 and Large=0\n\nMBPS matrix\n\nLarge/Small, 0, 1, 2, 3, 4, 5, 6, \n12, 18, 24, 30, 36, 42, 48, 54, 60, \n66, 72, 78, 84, 90, 96, 102, 108, 114, 120\n 1, 157.75, 156.47, 153.56, 153.45, 144.87, 141.78, 140.60, \n112.45, 95.23, 72.80, 80.59, 36.91, 29.76, 42.86, 41.82, 33.87, \n34.07, 45.62, 42.97, 26.37, 42.85, 45.49, 44.47, 37.26, 45.67, \n36.18\n 2, 137.58, 128.48, 125.78, 133.85, 120.12, 127.86, 127.05, \n119.26, 121.23, 115.00, 117.88, 114.35, 108.61, 106.55, 83.50, 78.61, \n92.67, 96.01, 44.02, 70.60, 62.84, 46.52, 69.18, 51.84, 57.19, \n59.62\n 3, 143.10, 134.92, 139.30, 138.62, 137.85, 146.17, 140.41, \n138.48, 76.00, 138.17, 123.48, 137.45, 126.51, 137.11, 91.94, 90.33, \n129.97, 45.35, 115.92, 89.60, 137.22, 72.46, 89.95, 77.40, 119.17, \n82.09\n 4, 138.47, 133.74, 129.99, 122.33, 126.75, 125.22, 132.30, \n120.41, 125.88, 132.21, 96.92, 115.70, 131.65, 66.34, 114.06, 113.62, \n116.91, 96.97, 98.69, 127.16, 116.67, 111.53, 128.97, 92.38, 118.14, \n78.31\n 8, 126.59, 127.92, 115.51, 125.02, 123.29, 111.94, 124.31, \n125.71, 134.48, 126.40, 127.93, 125.36, 121.75, 121.75, 127.17, 116.51, \n121.44, 121.12, 112.32, 121.55, 127.93, 124.86, 118.04, 114.59, 121.72, 
\n114.79\n 12, 112.40, 122.58, 107.61, 125.42, 128.04, 123.80, 127.17, \n127.70, 122.37, 96.52, 115.36, 124.49, 124.07, 129.31, 124.62, 124.23, \n105.58, 123.55, 115.67, 120.59, 125.61, 123.57, 121.43, 121.45, 121.44, \n113.64\n 16, 108.88, 119.79, 123.80, 120.55, 120.02, 121.66, 125.71, \n122.19, 125.77, 122.27, 119.55, 118.44, 120.51, 104.66, 97.55, 115.43, \n101.45, 108.99, 122.30, 100.45, 105.82, 119.56, 121.26, 126.59, 119.54, \n115.09\n 20, 103.88, 122.95, 115.86, 114.59, 121.13, 108.52, 116.90, \n121.10, 113.91, 108.20, 111.51, 125.64, 117.57, 120.86, 117.66, 100.40, \n104.88, 103.15, 98.10, 104.86, 104.69, 102.99, 121.81, 107.22, 122.68, \n106.43\n 24, 102.64, 102.33, 112.95, 110.63, 108.00, 111.53, 124.33, \n103.17, 108.16, 112.63, 97.42, 106.22, 102.54, 117.46, 100.66, 99.01, \n104.46, 99.02, 116.02, 112.49, 119.05, 104.03, 102.40, 102.44, 111.15, \n99.51\n 28, 101.12, 102.76, 114.14, 109.72, 120.63, 118.09, 119.85, \n113.80, 116.58, 110.24, 101.45, 110.31, 116.06, 112.04, 121.63, 91.26, \n98.88, 101.55, 104.51, 116.43, 112.98, 119.46, 120.08, 109.46, 106.29, \n96.69\n 32, 103.41, 117.33, 101.33, 102.29, 102.58, 116.18, 107.12, \n114.63, 121.84, 95.14, 108.83, 99.82, 103.11, 99.36, 117.80, 94.91, \n103.46, 103.97, 117.35, 100.51, 100.18, 101.98, 118.26, 115.03, 100.45, \n107.90\n 36, 99.90, 97.98, 100.94, 95.56, 118.76, 99.05, 114.02, \n93.61, 117.68, 115.22, 114.40, 116.38, 100.38, 99.15, 108.66, 101.67, \n106.64, 98.69, 111.99, 108.28, 99.62, 112.67, 118.80, 110.40, 118.86, \n108.46\n 40, 101.51, 103.38, 93.73, 121.69, 106.27, 104.09, 110.81, \n105.83, 95.81, 101.47, 105.96, 113.26, 103.61, 114.26, 100.49, 102.35, \n111.44, 95.09, 103.02, 106.21, 104.39, 118.31, 96.73, 109.79, 103.71, \n99.70\n 44, 101.17, 107.22, 107.50, 115.19, 104.16, 108.93, 101.62, \n111.82, 110.66, 104.13, 109.68, 103.20, 92.04, 104.70, 102.30, 117.28, \n106.37, 100.42, 107.81, 105.31, 110.21, 108.66, 116.05, 105.55, 100.64, \n106.67\n 48, 101.00, 104.13, 114.00, 99.55, 107.46, 113.29, 114.32, \n108.75, 100.11, 99.89, 104.81, 107.36, 102.93, 106.43, 101.98, 103.15, \n101.30, 113.94, 103.07, 102.40, 95.38, 111.33, 93.89, 112.30, 103.58, \n101.82\n\niops matrix\n\nLarge/Small, 1, 2, 3, 4, 5, 6, 12, \n18, 24, 30, 36, 42, 48, 54, 60, 66, \n72, 78, 84, 90, 96, 102, 108, 114, 120\n 0, 355, 639, 875, 1063, 1230, 1366, 1933, \n2297, 2571, 2750, 2981, 3394, 3027, 3045, 3036, 3139, \n3218, 3081, 3151, 3203, 3128, 3179, 3093, 3141, 3135\n 1, 37, 99, 144, 298, 398, 488, 1183, \n1637, 2069, 2268, 2613, 2729, 2860, 2983, 3119, 3595, \n3065, 3077, 3036, 3008, 3039, 3030, 3067, 3138, 3041\n 2, 22, 36, 44, 130, 112, 92, 271, \n378, 579, 673, 903, 1091, 1131, 1735, 1612, 1809, \n1236, 2316, 2302, 1410, 2467, 2526, 2692, 2606, 2625\n 3, 5, 13, 18, 21, 27, 27, 56, \n92, 162, 196, 209, 239, 309, 595, 1551, 611, \n2408, 1034, 488, 401, 1226, 1700, 2490, 1516, 2435\n 4, 8, 10, 33, 38, 53, 38, 137, \n191, 165, 502, 369, 212, 1127, 654, 1069, 721, \n643, 1046, 537, 803, 1093, 497, 1669, 1120, 1945\n 8, 3, 8, 6, 15, 24, 19, 61, \n47, 90, 139, 109, 174, 184, 154, 261, 294, \n289, 460, 338, 199, 425, 433, 633, 475, 599\n 12, 3, 7, 7, 10, 11, 12, 32, \n74, 67, 91, 93, 120, 157, 158, 201, 191, \n143, 220, 327, 217, 283, 276, 297, 336, 365\n 16, 2, 3, 6, 6, 10, 11, 27, \n35, 56, 52, 80, 100, 89, 118, 102, 140, \n178, 158, 174, 188, 185, 243, 168, 235, 249\n 20, 1, 3, 5, 6, 8, 11, 15, \n30, 30, 54, 44, 70, 76, 87, 104, 79, \n121, 115, 128, 135, 147, 158, 194, 184, 250\n 24, 1, 2, 4, 5, 8, 6, 14, \n21, 29, 36, 42, 50, 58, 61, 70, 64, 
\n85, 102, 117, 120, 111, 126, 129, 170, 159\n 28, 1, 2, 4, 3, 7, 7, 16, \n19, 23, 30, 37, 51, 60, 58, 59, 65, \n75, 76, 91, 83, 107, 103, 113, 120, 135\n 32, 1, 2, 3, 3, 6, 6, 12, \n17, 19, 28, 30, 31, 32, 44, 49, 57, \n53, 82, 87, 80, 84, 106, 106, 96, 93\n 36, 1, 2, 4, 5, 4, 7, 9, \n15, 22, 27, 30, 32, 35, 43, 46, 48, \n52, 67, 69, 54, 78, 87, 98, 92, 114\n 40, 0, 2, 2, 3, 4, 5, 12, \n12, 16, 24, 25, 29, 35, 36, 42, 51, \n45, 55, 60, 61, 71, 67, 72, 77, 67\n 44, 0, 2, 2, 3, 4, 4, 10, \n12, 19, 20, 24, 24, 25, 32, 34, 40, \n43, 58, 62, 60, 71, 75, 75, 68, 81\n 48, 0, 1, 2, 5, 4, 4, 10, \n14, 16, 18, 21, 23, 27, 31, 34, 37, \n44, 42, 54, 48, 59, 54, 69, 65, 67\n\n\n\n",
"msg_date": "Wed, 03 Mar 2010 10:41:21 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Francisco Reyes wrote:\n>\n> Going with a 3Ware SAS controller.\n>\n>\n> Have some external enclosures with 16 15Krpm drives. They are older \n> 15K rpms, but they should be good enough.\n> Since the 15K rpms usually have better Transanctions per second I will \n> put WAL and indexes in the external enclosure.\n\nIt sounds like you have a lot of hardware around - my guess it would be \nworthwhile to do a test setup with one server hooked up with two 3ware \ncontrollers. Also, I am not sure if it is wise to put the WAL on the \nsame logical disk as the indexes, but that is maybe for a different \nthread (unwise to mix random and sequential io and also the wal has \ndemands when it comes to write cache).\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Wed, 03 Mar 2010 10:45:13 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "\n>>> With 24 drives it'll probably be the controller that is the limiting \n>>> factor of bandwidth. Our HP SAN controller with 28 15K drives delivers \n>>> 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0.\n\nI get about 150-200 MB/s on .... a linux software RAID of 3 cheap Samsung \nSATA 1TB drives (which is my home multimedia server)...\nIOPS would be of course horrendous, that's RAID-5, but that's not the \npoint here.\n\nFor raw sequential throughput, dumb drives with dumb software raid can be \npretty fast, IF each drive has a dedicated channel (SATA ensures this) and \nthe controller is on a fast PCIexpress (in my case, chipset SATA \ncontroller).\n\nI don't suggest you use software RAID with cheap consumer drives, just \nthat any expensive setup that doesn't deliver MUCH more performance that \nis useful to you (ie in your case sequential IO) maybe isn't worth the \nextra price... There are many bottlenecks...\n",
"msg_date": "Wed, 03 Mar 2010 11:36:55 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
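As a rough illustration of the inexpensive software-RAID baseline Pierre describes in the message above, the sketch below builds a 3-drive Linux md array and takes a quick sequential-read measurement; the device names, mount point and sizes are assumptions, not details of his setup.

  # build a 3-drive software RAID-5 from hypothetical devices sdb/sdc/sdd
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mkfs.xfs /dev/md0
  mount /dev/md0 /mnt/bench
  # crude sequential-read check of the raw array
  dd if=/dev/md0 of=/dev/null bs=1M count=16384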
{
"msg_contents": "Yeb Havinga writes:\n\n> controllers. Also, I am not sure if it is wise to put the WAL on the \n> same logical disk as the indexes,\n\nIf I only have two controllers would it then be better to put WAL on the \nfirst along with all the data and the indexes on the external? Specially \nsince the external enclosure will have 15K rpm vs 10K rpm in the internal.\n\n\n> thread (unwise to mix random and sequential io and also the wal has \n> demands when it comes to write cache).\n\nThanks for pointing that out.\nWith any luck I will actually be able to do some tests for the new hardware. \nThe curernt one I literaly did a few hours stress test and had to put in \nproduction right away.\n\n",
"msg_date": "Wed, 03 Mar 2010 08:51:56 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Francisco Reyes wrote:\n> Who are you using for SAS?\n> One thing I like about 3ware is their management utility works under \n> both FreeBSD and Linux well.\n\n3ware has turned into a division within LSI now, so I have my doubts \nabout their long-term viability as a separate product as well.\n\nLSI used to be the reliable, open, but somewhat slower cards around, but \nthat doesn't seem to be the case with their SAS products anymore. I've \nworked on two systems using their MegaRAID SAS 1078 chipset in RAID10 \nrecently and been very impressed with both. That's what I'm \nrecommending to clients now too--especially people who liked Dell \nanyway. (HP customers are still getting pointed toward their \nP400/600/800. 3ware in white box systems, still OK, but only SATA. \nAreca is fast, but they're really not taking the whole driver thing \nseriously.)\n\nYou can get that direct from LSI as the MegaRAID SAS 8888ELP: \nhttp://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sas/megaraid_sas_8888elp/ \nas well as some similar models. And that's what Dell sells as their \nPERC6. Here's what a typical one looks like from Linux's perspective, \njust to confirm which card/chipset/driver I'm talking about:\n\n# lspci -v\n03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 \n(rev 04)\n Subsystem: Dell PERC 6/i Integrated RAID Controller\n\n$ /sbin/modinfo megaraid_sas\nfilename: \n/lib/modules/2.6.18-164.el5/kernel/drivers/scsi/megaraid/megaraid_sas.ko\ndescription: LSI MegaRAID SAS Driver\nauthor: [email protected]\nversion: 00.00.04.08-RH2\nlicense: GPL\n\nAs for the management utility, LSI ships \"MegaCli[64]\" as a statically \nlinked Linux binary. Plenty of reports of people running it on FreeBSD \nwith no problems via Linux emulation libraries--it's a really basic CLI \ntool and whatever interface it talks to card via seems to emulate just \nfine. UI is awful, but once you find the magic cheat sheet at \nhttp://tools.rapidsoft.de/perc/ it's not too bad. No direct SMART \nmonitoring here either, which is disappointing, but you can get some \npretty detailed data out of MegaCli so it's not terrible.\n\nI've seen >100MB/s per drive on reads out of small RAID10 arrays, and \ncleared 1GB/s on larger ones (all on RHEL5+ext3) with this controller on \nrecent installs.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 03 Mar 2010 09:00:32 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
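For readers looking for the MegaCli invocations referenced in the message above, a few commonly used ones are sketched below; flag spellings vary a little between MegaCli releases, so treat these as assumptions and cross-check against the cheat sheet linked there.

  MegaCli64 -LDInfo -Lall -aALL              # logical drive (array) status
  MegaCli64 -PDList -aALL                    # physical drives, incl. error counters
  MegaCli64 -AdpAllInfo -aALL                # adapter and cache configuration
  MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL   # battery backup unit status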
{
"msg_contents": "Francisco Reyes wrote:\n> Yeb Havinga writes:\n>\n>> controllers. Also, I am not sure if it is wise to put the WAL on the \n>> same logical disk as the indexes,\n>\n> If I only have two controllers would it then be better to put WAL on \n> the first along with all the data and the indexes on the external? \n> Specially since the external enclosure will have 15K rpm vs 10K rpm in \n> the internal. \nIt sounds like you're going to create a single logical unit / raid array \non each of the controllers. Depending on the number of disks, this is a \nbad idea because if you'd read/write data sequentially, all drive heads \nwill be aligned to roughly the same place ont the disks. If another \nprocess wants to read/write as well, this will interfere and bring down \nboth iops and mbps. However, with three concurrent users.. hmm but then \nagain, queries will scan multiple tables/indexes so there will be mixed \nio to several locations. What would be interesting it so see what the \nmbps maximum of a single controller is. Then calculate how much disks \nare needed to feed that, which would give a figure for number of disks \nper logical unit.\n\nThe challenge with having a few logical units / raid arrays available, \nis how to divide the data over it (with tablespaces) What is good for \nyour physical data depends on the schema and queries that are most \nimportant. For 2 relations and 2 indexes and 4 arrays, it would be \nclear. There's not much to say anything general here, except: do not mix \ntable or index data with the wal. In other words: if you could make a \nseparate raid array for the wal (2 disk raid1 probably good enough), \nthat would be ok and doesn't matter on which controller or enclosure it \nhappens, because io to disk is not mixed with the data io.\n>\n> Thanks for pointing that out.\n> With any luck I will actually be able to do some tests for the new \n> hardware. The curernt one I literaly did a few hours stress test and \n> had to put in production right away.\nI've heard that before ;-) If you do get around to do some tests, I'm \ninterested in the results / hard numbers.\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Wed, 03 Mar 2010 15:24:28 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
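Dividing relations over several arrays, as suggested in the message above, is done with tablespaces; a minimal sketch follows, with hypothetical mount points /array1 and /array2 (both writable by the postgres user) and a made-up table name.

  psql -d mydb -c "CREATE TABLESPACE ts_data  LOCATION '/array1/pg_data'"
  psql -d mydb -c "CREATE TABLESPACE ts_index LOCATION '/array2/pg_index'"
  psql -d mydb -c "CREATE INDEX orders_id_idx ON orders (id) TABLESPACE ts_index"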
{
"msg_contents": "On Wed, Mar 3, 2010 at 4:53 AM, Hannu Krosing <[email protected]> wrote:\n> On Wed, 2010-03-03 at 10:41 +0100, Yeb Havinga wrote:\n>> Scott Marlowe wrote:\n>> > On Tue, Mar 2, 2010 at 1:51 PM, Yeb Havinga <[email protected]> wrote:\n>> >\n>> >> With 24 drives it'll probably be the controller that is the limiting factor\n>> >> of bandwidth. Our HP SAN controller with 28 15K drives delivers 170MB/s at\n>> >> maximum with raid 0 and about 155MB/s with raid 1+0. So I'd go for the 10K\n>> >> drives and put the saved money towards the controller (or maybe more than\n>> >> one controller).\n>> >>\n>> >\n>> > That's horrifically bad numbers for that many drives. I can get those\n>> > numbers for write performance on a RAID-6 on our office server. I\n>> > wonder what's making your SAN setup so slow?\n>> >\n>> Pre scriptum:\n>> A few minutes ago I mailed detailed information in the same thread but\n>> as reply to an earlier response - it tells more about setup and gives\n>> results of a raid1+0 test.\n>>\n>> I just have to react to \"horrifically bad\" and \"slow\" :-) : The HP san\n>> can do raid5 on 28 disks also on about 155MBps:\n>\n> SAN-s are \"horrifically bad\" and \"slow\" mainly because of the 2MBit sec\n> fiber channel.\n> But older ones may be just slow internally as well.\n> The fact that it is expensive does not make it fast.\n> If you need fast thrughput, use direct attached storage\n\nLet me be clear that the only number mentioned at the beginning was\nthroughput. If you're designing a machine to run huge queries and\nreturn huge amounts of data that matters. OLAP. If you're designing\nfor OLTP you're likely to only have a few megs a second passing\nthrough but in thousands of xactions per second. So, when presented\nwith the only metric of throughput, I figured that's what the OP was\ndesigning for. For OLTP his SAN is plenty fast.\n",
"msg_date": "Wed, 3 Mar 2010 11:32:50 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "\nOn Mar 2, 2010, at 1:36 PM, Francisco Reyes wrote:\n\n> [email protected] writes:\n> \n>> With sequential scans you may be better off with the large SATA drives as \n>> they fit more data per track and so give great sequential read rates.\n> \n> I lean more towards SAS because of writes.\n> One common thing we do is create temp tables.. so a typical pass may be:\n> * sequential scan\n> * create temp table with subset\n> * do queries against subset+join to smaller tables.\n> \n> I figure the concurrent read/write would be faster on SAS than on SATA. I am \n> trying to move to having an external enclosure (we have several not in use \n> or about to become free) so I could separate the read and the write of the \n> temp tables.\n> \n\nConcurrent Read/Write performance has far more to do with OS and Filesystem choice and tuning than what type of drive it is.\n\n> Lastly, it is likely we are going to do horizontal partitioning (ie master \n> all data in one machine, replicate and then change our code to read parts of \n> data from different machine) and I think at that time the better drives will \n> do better as we have more concurrent queries.\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 8 Mar 2010 15:50:24 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "\nOn Mar 2, 2010, at 2:10 PM, <[email protected]> wrote:\n\n> On Tue, 2 Mar 2010, Scott Marlowe wrote:\n> \n>> On Tue, Mar 2, 2010 at 2:30 PM, Francisco Reyes <[email protected]> wrote:\n>>> Scott Marlowe writes:\n>>> \n>>>> Then the real thing to compare is the speed of the drives for\n>>>> throughput not rpm.\n>>> \n>>> In a machine, simmilar to what I plan to buy, already in house 24 x 10K rpm\n>>> gives me about 400MB/sec while 16 x 15K rpm (2 to 3 year old drives) gives\n>>> me about 500MB/sec\n>> \n>> Have you tried short stroking the drives to see how they compare then?\n>> Or is the reduced primary storage not a valid path here?\n>> \n>> While 16x15k older drives doing 500Meg seems only a little slow, the\n>> 24x10k drives getting only 400MB/s seems way slow. I'd expect a\n>> RAID-10 of those to read at somewhere in or just past the gig per\n>> second range with a fast pcie (x8 or x16 or so) controller. You may\n>> find that a faster controller with only 8 or so fast and large SATA\n>> drives equals the 24 10k drives you're looking at now. I can write at\n>> about 300 to 350 Megs a second on a slower Areca 12xx series\n>> controller and 8 2TB Western Digital Green drives, which aren't even\n>> made for speed.\n> \n> what filesystem is being used. There is a thread on the linux-kernel \n> mailing list right now showing that ext4 seems to top out at ~360MB/sec \n> while XFS is able to go to 500MB/sec+\n\nI have Centos 5.4 with 10 7200RPM 1TB SAS drives in RAID 10 (Seagate ES.2, same perf as the SATA ones), XFS, Adaptec 5805, and get ~750MB/sec read and write sequential throughput.\n\nA RAID 0 of two of these stops around 1000MB/sec because it is CPU bound in postgres -- for select count(*). If it is select * piped to /dev/null, it is CPU bound below 300MB/sec converting data to text.\n\nFor xfs, set readahead to 16MB or so (2MB or so per stripe) (--setra 32768 is 16MB) and absolutely make sure that the xfs mount parameter 'allocsize' is set to about the same size or more. For large sequential operations, you want to make sure interleaved writes don't interleave files on disk. I use 80MB allocsize, and 40MB readahead for the reporting data.\n\nLater Linux kernels have significantly improved readahead systems that don't need to be tuned quite as much. For high sequential throughput, nothing is as optimized as XFS on Linux yet. It has weaknesses elsewhere however.\n\nAnd 3Ware on Linux + high throughput sequential = slow. PERC 6 was 20% faster, and Adaptec was 70% faster with the same drives, and with experiments to filesystem and readahead for all. From what I hear, Areca is a significant notch above Adaptec on that too.\n\n> \n> on single disks the disk performance limits you, but on arrays where the \n> disk performance is higher there may be other limits you are running into.\n> \n> David Lang\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 8 Mar 2010 16:01:21 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
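The readahead and allocsize tuning described in the message above translates to roughly the commands below; /dev/sdb1 and /pgdata are placeholders, and the values should be scaled to the array's stripe width as the author notes.

  # --setra counts 512-byte sectors, so 32768 sectors = 16MB of readahead
  blockdev --setra 32768 /dev/sdb1
  # large XFS allocation hint so concurrently growing files don't interleave on disk
  mount -t xfs -o noatime,allocsize=80m /dev/sdb1 /pgdata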
{
"msg_contents": "Scott Carey wrote:\n> For high sequential throughput, nothing is as optimized as XFS on Linux yet. It has weaknesses elsewhere however.\n> \n\nI'm curious what you feel those weaknesses are. The recent addition of \nXFS back into a more mainstream position in the RHEL kernel as of their \n5.4 update greatly expands where I can use it now, have been heavily \nrevisiting it since that release. I've already noted how well it does \non sequential read/write tasks relative to ext3, and it looks like the \nmain downsides I used to worry about with it (mainly crash recovery \nissues) were also squashed in recent years.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 09 Mar 2010 02:00:50 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "On Tue, 09 Mar 2010 08:00:50 +0100, Greg Smith <[email protected]> \nwrote:\n\n> Scott Carey wrote:\n>> For high sequential throughput, nothing is as optimized as XFS on Linux \n>> yet. It has weaknesses elsewhere however.\n>>\n\nWhen files are extended one page at a time (as postgres does) \nfragmentation can be pretty high on some filesystems (ext3, but NTFS is \nthe absolute worst) if several files (indexes + table) grow \nsimultaneously. XFS has delayed allocation which really helps.\n\n> I'm curious what you feel those weaknesses are.\n\nHandling lots of small files, especially deleting them, is really slow on \nXFS.\nDatabases don't care about that.\n\nThere is also the dark side of delayed allocation : if your application is \nbroken, it will manifest itself very painfully. Since XFS keeps a lot of \nunwritten stuff in the buffers, an app that doesn't fsync correctly can \nlose lots of data if you don't have a UPS.\n\nFortunately, postgres handles fsync like it should be.\n\nA word of advice though : a few years ago, we lost a few terabytes on XFS \n(after that, restoring from backup was quite slow !) because a faulty SCSI \ncable crashed the server, then crashed it again during xfsrepair. So if \nyou do xfsrepair on a suspicious system, please image the disks first.\n",
"msg_date": "Tue, 09 Mar 2010 14:39:22 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Pierre C escribió:\n> On Tue, 09 Mar 2010 08:00:50 +0100, Greg Smith <[email protected]> \n> wrote:\n>\n>> Scott Carey wrote:\n>>> For high sequential throughput, nothing is as optimized as XFS on \n>>> Linux yet. It has weaknesses elsewhere however.\n>>>\n>\n> When files are extended one page at a time (as postgres does) \n> fragmentation can be pretty high on some filesystems (ext3, but NTFS \n> is the absolute worst) if several files (indexes + table) grow \n> simultaneously. XFS has delayed allocation which really helps.\n>\n>> I'm curious what you feel those weaknesses are.\n>\n> Handling lots of small files, especially deleting them, is really slow \n> on XFS.\n> Databases don't care about that.\n>\n> There is also the dark side of delayed allocation : if your \n> application is broken, it will manifest itself very painfully. Since \n> XFS keeps a lot of unwritten stuff in the buffers, an app that doesn't \n> fsync correctly can lose lots of data if you don't have a UPS.\n>\n> Fortunately, postgres handles fsync like it should be.\n>\n> A word of advice though : a few years ago, we lost a few terabytes on \n> XFS (after that, restoring from backup was quite slow !) because a \n> faulty SCSI cable crashed the server, then crashed it again during \n> xfsrepair. So if you do xfsrepair on a suspicious system, please image \n> the disks first.\n>\nAnd then Which file system do you recommend for the PostgreSQL data \ndirectory? I was seeying that ZFS brings very cool features for that. \nThe problem with ZFS is that this file system is only on Solaris, \nOpenSolaris, FreeBSD and Mac OSX Server, and on Linux systems not What \ndo you think about that?\nRegards\n\n-- \n-------------------------------------------------------- \n-- Ing. Marcos Luís Ortíz Valmaseda --\n-- Twitter: http://twitter.com/@marcosluis2186 --\n-- FreeBSD Fan/User --\n-- http://www.freebsd.org/es --\n-- Linux User # 418229 --\n-- Database Architect/Administrator --\n-- PostgreSQL RDBMS --\n-- http://www.postgresql.org --\n-- http://planetpostgresql.org --\n-- http://www.postgresql-es.org --\n--------------------------------------------------------\n-- Data WareHouse -- Business Intelligence Apprentice --\n-- http://www.tdwi.org --\n-------------------------------------------------------- \n-- Ruby on Rails Fan/Developer --\n-- http://rubyonrails.org --\n--------------------------------------------------------\n\nComunidad Técnica Cubana de PostgreSQL\nhttp://postgresql.uci.cu\nhttp://personas.grm.uci.cu/+marcos \n\nCentro de Tecnologías de Gestión de Datos (DATEC) \nContacto: \n Correo: [email protected] \n Telf: +53 07-837-3737 \n +53 07-837-3714 \nUniversidad de las Ciencias Informáticas \nhttp://www.uci.cu \n\n\n\n",
"msg_date": "Tue, 09 Mar 2010 09:34:11 -0500",
"msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "Do keep the postgres xlog on a seperate ext2 partition for best \nperformance. Other than that, xfs is definitely a good performer.\n\nMike Stone\n",
"msg_date": "Tue, 09 Mar 2010 09:47:12 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
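A minimal sketch of the xlog-on-ext2 layout suggested in the message above; the device, mount point and paths are assumptions, and the cluster must be stopped while pg_xlog is relocated.

  pg_ctl -D "$PGDATA" stop
  mkfs.ext2 /dev/sdc1
  mkdir /pg_xlog_ext2 && mount -o noatime /dev/sdc1 /pg_xlog_ext2
  chown postgres:postgres /pg_xlog_ext2
  mv "$PGDATA"/pg_xlog/* /pg_xlog_ext2/
  rmdir "$PGDATA"/pg_xlog && ln -s /pg_xlog_ext2 "$PGDATA"/pg_xlog
  pg_ctl -D "$PGDATA" start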
{
"msg_contents": "On Tue, 9 Mar 2010, Pierre C wrote:\n\n> On Tue, 09 Mar 2010 08:00:50 +0100, Greg Smith <[email protected]> wrote:\n>\n>> Scott Carey wrote:\n>>> For high sequential throughput, nothing is as optimized as XFS on Linux \n>>> yet. It has weaknesses elsewhere however.\n>>> \n>\n> When files are extended one page at a time (as postgres does) fragmentation \n> can be pretty high on some filesystems (ext3, but NTFS is the absolute worst) \n> if several files (indexes + table) grow simultaneously. XFS has delayed \n> allocation which really helps.\n>\n>> I'm curious what you feel those weaknesses are.\n>\n> Handling lots of small files, especially deleting them, is really slow on \n> XFS.\n> Databases don't care about that.\n\naccessing lots of small files works really well on XFS compared to ext* (I \nuse XFS with a cyrus mail server which keeps each message as a seperate \nfile and XFS vastly outperforms ext2/3 there). deleting is slow as you say\n\nDavid Lang\n\n> There is also the dark side of delayed allocation : if your application is \n> broken, it will manifest itself very painfully. Since XFS keeps a lot of \n> unwritten stuff in the buffers, an app that doesn't fsync correctly can lose \n> lots of data if you don't have a UPS.\n>\n> Fortunately, postgres handles fsync like it should be.\n>\n> A word of advice though : a few years ago, we lost a few terabytes on XFS \n> (after that, restoring from backup was quite slow !) because a faulty SCSI \n> cable crashed the server, then crashed it again during xfsrepair. So if you \n> do xfsrepair on a suspicious system, please image the disks first.\n",
"msg_date": "Tue, 9 Mar 2010 06:49:10 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "\"Pierre C\" <[email protected]> wrote:\n> Greg Smith <[email protected]> wrote:\n \n>> I'm curious what you feel those weaknesses are.\n> \n> Handling lots of small files, especially deleting them, is really\n> slow on XFS.\n> Databases don't care about that.\n \nI know of at least one exception to that -- when we upgraded and got\na newer version of the kernel where XFS has write barriers on by\ndefault, some database transactions which were creating and dropping\ntemporary tables in a loop became orders of magnitude slower. Now,\nthat was a silly approach to getting the data that was needed and I\nhelped them rework the transactions, but something which had worked\nacceptably suddenly didn't anymore.\n \nSince we have a BBU hardware RAID controller, we can turn off write\nbarriers safely, at least according to this page:\n \nhttp://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F\n \nThis reduces the penalty for creating and deleting lots of small\nfiles.\n \n-Kevin\n",
"msg_date": "Tue, 09 Mar 2010 08:50:28 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
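On XFS of that era the barrier behaviour is a mount option, so the change Kevin describes looks roughly like the lines below; the mount point is a placeholder, and this is only safe behind a battery-backed (or otherwise persistent) write cache.

  # remount an existing XFS volume without write barriers
  mount -o remount,nobarrier /pgdata
  # or make it permanent in /etc/fstab:
  # /dev/sdb1  /pgdata  xfs  noatime,nobarrier  0 0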
{
"msg_contents": "\nOn Mar 8, 2010, at 11:00 PM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> For high sequential throughput, nothing is as optimized as XFS on Linux yet. It has weaknesses elsewhere however.\n>> \n> \n> I'm curious what you feel those weaknesses are. The recent addition of \n> XFS back into a more mainstream position in the RHEL kernel as of their \n> 5.4 update greatly expands where I can use it now, have been heavily \n> revisiting it since that release. I've already noted how well it does \n> on sequential read/write tasks relative to ext3, and it looks like the \n> main downsides I used to worry about with it (mainly crash recovery \n> issues) were also squashed in recent years.\n> \n\nMy somewhat negative experiences have been:\n\n* Metadata operations are a bit slow, this manifests itself mostly with lots of small files being updated or deleted.\n* Improper use of the file system or hardware configuration will likely break worse (ext3 'ordered' mode makes poorly written apps safer).\n* At least with CentOS 5.3 and thier xfs version (non-Redhat, CentOS extras) sparse random writes could almost hang a file system. They were VERY slow. I have not tested since. \n\nNone of the above affect Postgres.\n\nI'm also not sure how up to date RedHat's xfs version is -- there have been enhancements to xfs in the kernel mainline regularly for a long time.\n\nIn non-postgres contexts, I've grown to appreciate some other qualities: Unlike ext2/3, I can have more than 32K directories in another directory -- XFS will do millions, though it will slow down at least it doesn't just throw an error to the application. And although XFS is slow to delete lots of small things, it can delete large files much faster -- I deal with lots of large files and it is comical to see ext3 take a minute to delete a 30GB file while XFS does it almost instantly.\n\nI have been happy with XFS for Postgres data directories, and ext2 for a dedicated xlog partition. Although I have not risked the online defragmentation on a live DB, I have defragmented a 8TB DB during maintenance and seen the performance improve.\n\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n",
"msg_date": "Tue, 9 Mar 2010 16:39:25 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
{
"msg_contents": "\nOn Mar 9, 2010, at 4:39 PM, Scott Carey wrote:\n\n> \n> On Mar 8, 2010, at 11:00 PM, Greg Smith wrote:\n> \n> * At least with CentOS 5.3 and thier xfs version (non-Redhat, CentOS extras) sparse random writes could almost hang a file system. They were VERY slow. I have not tested since. \n> \n\nJust to be clear, I mean random writes to a _sparse file_.\n\nYou can cause this condition with the 'fio' tool, which will by default allocate a file for write as a sparse file, then write to it. If the whole thing is written to first, then random writes are fine. Postgres only writes random when it overwrites a page, otherwise its always an append operation AFAIK.\n\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 9 Mar 2010 16:47:08 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
},
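To reproduce the sparse-file random-write case described in the message above, an fio run along these lines should work; the file name, size and runtime are arbitrary assumptions.

  # random writes into a not-yet-allocated (sparse) file
  fio --name=sparse-randwrite --filename=/pgdata/fio.tmp --size=4G \
      --rw=randwrite --bs=8k --direct=1 --runtime=60 --time_based
  # for comparison, fill the file sequentially first, then repeat the random pass
  fio --name=prefill --filename=/pgdata/fio.tmp --size=4G --rw=write --bs=1M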
{
"msg_contents": "Scott Carey wrote:\n> I'm also not sure how up to date RedHat's xfs version is -- there have been enhancements to xfs in the kernel mainline regularly for a long time.\n> \n\nThey seem to following SGI's XFS repo quite carefully and cherry-picking \nbug fixes out of there, not sure of how that relates to mainline kernel \ndevelopment right now. For example:\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=509902 (July 2009 SGI \ncommit, now active for RHEL5.4)\nhttps://bugzilla.redhat.com/show_bug.cgi?id=544349 (November 2009 SGI \ncommit, may be merged into RHEL5.5 currently in beta)\n\nFar as I've been able to tell this is all being driven wanting >16TB \nlarge filesystems, i.e. \nhttps://bugzilla.redhat.com/show_bug.cgi?id=213744 , and the whole thing \nwill be completely mainstream (bundled into the installer, and hopefully \nwith 32-bit support available) by RHEL6: \nhttps://bugzilla.redhat.com/show_bug.cgi?id=522180\n\nThanks for the comments. From all the info I've been able to gather, \n\"works fine for what PostgreSQL does with the filesystem, not \nnecessarily suitable for your root volume\" seems to be a fair \ncharacterization of where XFS is at right now. Which is \nreasonable--that's the context I'm getting more requests to use it in, \njust as the filesystem for where the database lives. Those who don't \nhave a separate volume and filesystem for the db also tend not to care \nabout filesystem performance differences either.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 09 Mar 2010 21:32:38 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10K vs 15k rpm for analytics"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a FusionIO drive to test for a few days. I already ran iozone and\nbonnie++ against it. Does anyone have more suggestions for it?\n\nIt is a single drive (unfortunately).\n\nRegards,\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz",
"msg_date": "Mon, 08 Mar 2010 16:41:41 +0200",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Testing FusionIO"
},
{
"msg_contents": "2010/3/8 Devrim GÜNDÜZ <[email protected]>:\n> Hi,\n>\n> I have a FusionIO drive to test for a few days. I already ran iozone and\n> bonnie++ against it. Does anyone have more suggestions for it?\n>\n> It is a single drive (unfortunately).\n\nvdbench\n\n-- \nŁukasz Jagiełło\nSystem Administrator\nG-Forces Web Management Polska sp. z o.o. (www.gforces.pl)\n\nUl. Kruczkowskiego 12, 80-288 Gdańsk\nSpółka wpisana do KRS pod nr 246596 decyzją Sądu Rejonowego Gdańsk-Północ\n",
"msg_date": "Mon, 8 Mar 2010 15:53:41 +0100",
"msg_from": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "Devrim GÜNDÜZ wrote:\n> Hi,\n>\n> I have a FusionIO drive\nCool!!\n> to test for a few days. I already ran iozone and\n> bonnie++ against it. Does anyone have more suggestions for it?\n> \nOracle has a tool to test drives specifically for database loads kinds \ncalled orion - its free software and comes with a good manual. Download \nwithout registration etc at \nhttp://www.oracle.com/technology/software/tech/orion/index.html\n\nQuickstart\n\ncreate file named named 'fusion.lun' with the device name, e.g.\n/dev/sda1\n\nInvoke orion tool with something like\n<orion binary> -run advanced -testname fusion -num_disks 50 -size_small \n4 -size_large 1024 -type rand -simulate concat -verbose -write 25 \n-duration 15 -matrix detailed -cache_size 256\n\ncache size is in MB's but not so important for random io.\nnum disks doesn't have to match physical disks but it's used by the tool \nto determine how large the test matrix should be. E.g. 1 disk gives a \nsmall matrix with small number of concurrent io requests. So I set it to 50.\n\nAnother idea: pgbench?\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Mon, 08 Mar 2010 15:55:52 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
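Following up the pgbench suggestion at the end of the message above, a minimal run looks like the sketch below; scale factor, client count and duration are placeholders that should be sized to the card and to available RAM.

  createdb iodrive_test
  pgbench -i -s 1000 iodrive_test      # initialize, roughly 15GB of data at scale 1000
  pgbench -c 16 -T 600 iodrive_test    # 16 clients for 10 minutes of TPC-B-like load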
{
"msg_contents": "We've enjoyed our FusionIO drives very much. They can do 100k iops without breaking a sweat. Just make sure you shut them down cleanly - it can up to 30 minutes per card to recover from a crash/plug pull test. \n\nI also have serious questions about their longevity and failure mode when the flash finally burns out. Our hardware guys claim they have overbuilt the amount of flash on the card to be able to do their heavy writes for >5 years, but I remain skeptical. \n\nOn Mar 8, 2010, at 6:41 AM, Devrim GÜNDÜZ wrote:\n\n> Hi,\n> \n> I have a FusionIO drive to test for a few days. I already ran iozone and\n> bonnie++ against it. Does anyone have more suggestions for it?\n> \n> It is a single drive (unfortunately).\n> \n> Regards,\n> -- \n> Devrim GÜNDÜZ\n> PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\n> PostgreSQL RPM Repository: http://yum.pgrpms.org\n> Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\n> http://www.gunduz.org Twitter: http://twitter.com/devrimgunduz\n\n",
"msg_date": "Mon, 8 Mar 2010 09:38:55 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "Ben Chobot wrote:\n> We've enjoyed our FusionIO drives very much. They can do 100k iops without breaking a sweat. Just make sure you shut them down cleanly - it can up to 30 minutes per card to recover from a crash/plug pull test. \n> \n\nYeah...I got into an argument with Kenny Gorman over my concerns with \nhow they were handling durability issues on his blog, the reading I did \nabout them never left me satisfied Fusion was being completely straight \nwith everyone about this area: http://www.kennygorman.com/wordpress/?p=398\n\nIf it takes 30 minutes to recover, but it does recover, I guess that's \nbetter than I feared was the case with them. Thanks for reporting the \nplug pull tests--I don't trust any report from anyone about new storage \nhardware that doesn't include that little detail as part of the \ntesting. You're just asking to have your data get lost without that \nbasic due diligence, and I'm sure not going to even buy eval hardware \nfrom a vendor that appears evasive about it. There's a reason I don't \npersonally own any SSD hardware yet.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 08 Mar 2010 15:50:36 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
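One concrete way to run the plug-pull due diligence discussed in the message above is Brad Fitzpatrick's diskchecker.pl; the invocation below is a sketch from memory, so confirm it against the script's usage text, and the host name and file path are placeholders.

  # on a second machine that stays powered:
  diskchecker.pl -l
  # on the machine under test, writing through the storage being evaluated:
  diskchecker.pl -s observer.example.com create /pgdata/diskcheck.tmp 500
  # pull the plug mid-run, power back up, then:
  diskchecker.pl -s observer.example.com verify /pgdata/diskcheck.tmp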
{
"msg_contents": "On Mar 8, 2010, at 12:50 PM, Greg Smith wrote:\n\n> Ben Chobot wrote:\n>> We've enjoyed our FusionIO drives very much. They can do 100k iops without breaking a sweat. Just make sure you shut them down cleanly - it can up to 30 minutes per card to recover from a crash/plug pull test. \n> \n> Yeah...I got into an argument with Kenny Gorman over my concerns with how they were handling durability issues on his blog, the reading I did about them never left me satisfied Fusion was being completely straight with everyone about this area: http://www.kennygorman.com/wordpress/?p=398\n> \n> If it takes 30 minutes to recover, but it does recover, I guess that's better than I feared was the case with them. Thanks for reporting the plug pull tests--I don't trust any report from anyone about new storage hardware that doesn't include that little detail as part of the testing. You're just asking to have your data get lost without that basic due diligence, and I'm sure not going to even buy eval hardware from a vendor that appears evasive about it. There's a reason I don't personally own any SSD hardware yet.\n\nOf course, the plug pull test can never be conclusive, but we never lost any data the handful of times we did it. Normally we'd do it more, but with such a long reboot cycle....\n\nBut from everything we can tell, FusionIO does do reliability right.",
"msg_date": "Mon, 8 Mar 2010 13:09:56 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:\n> We've enjoyed our FusionIO drives very much. They can do 100k iops\n> without breaking a sweat.\n\nYeah, performance is excellent. I bet we could get more, but CPU was\nbottleneck in our test, since it was just a demo server :(\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz",
"msg_date": "Wed, 17 Mar 2010 14:30:58 +0200",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Wed, 2010-03-17 at 14:30 +0200, Devrim GÜNDÜZ wrote:\n> On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:\n> > We've enjoyed our FusionIO drives very much. They can do 100k iops\n> > without breaking a sweat.\n> \n> Yeah, performance is excellent. I bet we could get more, but CPU was\n> bottleneck in our test, since it was just a demo server :(\n\nDid you test the drive in all three modes? If so, what sort of\ndifferences did you see.\n\nI've been hearing bad things from some folks about the quality of the\nFusionIO drives from a durability standpoint. I'm Unsure if this is\nvendor specific bias or not, but considering the source (which not\nvendor specific), I don't think so. \n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Wed, 17 Mar 2010 09:03:28 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:\n\n> I've been hearing bad things from some folks about the quality of the\n> FusionIO drives from a durability standpoint.\n\nCan you be more specific about that? Durability over what time frame? How many devices in the sample set? How did FusionIO deal with the issue?",
"msg_date": "Wed, 17 Mar 2010 09:11:15 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:\n> On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:\n> \n> > I've been hearing bad things from some folks about the quality of the\n> > FusionIO drives from a durability standpoint.\n> \n> Can you be more specific about that? Durability over what time frame? How many devices in the sample set? How did FusionIO deal with the issue?\n\nI didn't get any specifics - as we are looking at other products. It\ndid center around how FusionIO did wear-leveling though. \n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Wed, 17 Mar 2010 09:18:20 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which wear levels across 100GB of actual installed capacity. \n\nhttp://community.fusionio.com/forums/p/34/258.aspx#258\n\nMax drive performance would be about 41TB/day, which coincidently works out very close to the 3 year warranty they have on the devices.\n\nFusionIO's claim _seems_ credible. I'd love to see some evidence to the contrary.\n\n\nOn Mar 17, 2010, at 9:18 AM, Brad Nicholson wrote:\n\n> On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:\n>> On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:\n>> \n>>> I've been hearing bad things from some folks about the quality of the\n>>> FusionIO drives from a durability standpoint.\n>> \n>> Can you be more specific about that? Durability over what time frame? How many devices in the sample set? How did FusionIO deal with the issue?\n> \n> I didn't get any specifics - as we are looking at other products. It\n> did center around how FusionIO did wear-leveling though. \n> -- \n> Brad Nicholson 416-673-4106\n> Database Administrator, Afilias Canada Corp.\n> \n> \n\n",
"msg_date": "Wed, 17 Mar 2010 09:52:26 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:\n> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which wear levels across 100GB of actual installed capacity. \n> http://community.fusionio.com/forums/p/34/258.aspx#258\n> \n\n20% of overall capacity free for levelling doesn't strike me as a lot.\nSome of the Enterprise grade stuff we are looking into (like TMS RamSan)\nleaves 40% (with much larger overall capacity).\n\nAlso, running that drive at 80GB is the \"Maximum Capacity\" mode, which\ndecreases the write performance.\n\n> Max drive performance would be about 41TB/day, which coincidently works out very close to the 3 year warranty they have on the devices.\n> \n\nTo counter that:\n\nhttp://www.tomshardware.com/reviews/fusioinio-iodrive-flash,2140-2.html\n\n\"Fusion-io’s wear leveling algorithm is based on a cycle of 5 TB\nwrite/erase volume per day, resulting in 24 years run time for the 80 GB\nmodel, 48 years for the 160 GB version and 16 years for the MLC-based\n320 GB type. However, since 5 TB could be written or erased rather\nquickly given the performance level, we recommend not relying on these\napproximations too much.\"\n\n\n> FusionIO's claim _seems_ credible. I'd love to see some evidence to the contrary.\n\nVendor claims always seem credible. The key is to separate the\nmarketing hype from the actual details.\n\nAgain, I'm just passing along what I heard - which was from a\nvendor-neutral, major storage consulting firm that decided to stop\nrecommending these drives to clients. Make of that what you will.\n\nAs an aside, some folks in our Systems Engineering department here did\ndo some testing of FusionIO, and they found that the helper daemons were\ninefficient and placed a fair amount of load on the server. That might\nbe something to watch of for for those that are testing them.\n\n> \n> On Mar 17, 2010, at 9:18 AM, Brad Nicholson wrote:\n> \n> > On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:\n> >> On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:\n> >> \n> >>> I've been hearing bad things from some folks about the quality of the\n> >>> FusionIO drives from a durability standpoint.\n> >> \n> >> Can you be more specific about that? Durability over what time frame? How many devices in the sample set? How did FusionIO deal with the issue?\n> > \n> > I didn't get any specifics - as we are looking at other products. It\n> > did center around how FusionIO did wear-leveling though. \n> > -- \n> > Brad Nicholson 416-673-4106\n> > Database Administrator, Afilias Canada Corp.\n> > \n> > \n> \n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Wed, 17 Mar 2010 10:41:31 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "\nOn Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:\n\n> On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:\n>> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which wear levels across 100GB of actual installed capacity. \n>> http://community.fusionio.com/forums/p/34/258.aspx#258\n>> \n> \n> 20% of overall capacity free for levelling doesn't strike me as a lot.\n\nI don't have any idea how to judge what amount would be right.\n\n> Some of the Enterprise grade stuff we are looking into (like TMS RamSan)\n> leaves 40% (with much larger overall capacity).\n> \n> Also, running that drive at 80GB is the \"Maximum Capacity\" mode, which\n> decreases the write performance.\n\nVery fair. In my favor, my proposed use case is probably at half capacity or less. I am getting the impression that partitioning/formatting the drive for the intended usage, and not the max capacity, is the way to go. Capacity isn't an issue with this workload. I cannot fit enough drives into these servers to get a tenth of the IOPS that even Tom's documents the ioDrive is capable of at reduced performance levels.\n\n>> Max drive performance would be about 41TB/day, which coincidently works out very close to the 3 year warranty they have on the devices.\n>> \n> \n> To counter that:\n> \n> http://www.tomshardware.com/reviews/fusioinio-iodrive-flash,2140-2.html\n> \n> \"Fusion-io’s wear leveling algorithm is based on a cycle of 5 TB\n> write/erase volume per day, resulting in 24 years run time for the 80 GB\n> model, 48 years for the 160 GB version and 16 years for the MLC-based\n> 320 GB type. However, since 5 TB could be written or erased rather\n> quickly given the performance level, we recommend not relying on these\n> approximations too much.\"\n> \n\nI'm not sure if that is a counter or a supporting claim :) \n\n> \n>> FusionIO's claim _seems_ credible. I'd love to see some evidence to the contrary.\n> \n> Vendor claims always seem credible. The key is to separate the\n> marketing hype from the actual details.\n\nI'm hoping to get my hands on a sample in the next few weeks. \n\n> \n> Again, I'm just passing along what I heard - which was from a\n> vendor-neutral, major storage consulting firm that decided to stop\n> recommending these drives to clients. Make of that what you will.\n> \n> As an aside, some folks in our Systems Engineering department here did\n> do some testing of FusionIO, and they found that the helper daemons were\n> inefficient and placed a fair amount of load on the server. That might\n> be something to watch of for for those that are testing them.\n> \n\nThat is a wonderful little nugget of knowledge that I shall put on my test plan.\n\n",
"msg_date": "Wed, 17 Mar 2010 14:11:56 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "Greg,\n\nDid you ever contact them and get your hands on one?\n\nWe eventually did see long SSD rebuild times on server crash as well. But data came back uncorrupted per my blog post. This is a good case for Slony Slaves. Anyone in a high TX low downtime environment would have already engineered around needing to wait for rebuild/recover times anyway. So it's not a deal killer in my view.\n\n-kg\n\nOn Mar 8, 2010, at 12:50 PM, Greg Smith wrote:\n\n> Ben Chobot wrote:\n>> We've enjoyed our FusionIO drives very much. They can do 100k iops without breaking a sweat. Just make sure you shut them down cleanly - it can up to 30 minutes per card to recover from a crash/plug pull test. \n> \n> Yeah...I got into an argument with Kenny Gorman over my concerns with how they were handling durability issues on his blog, the reading I did about them never left me satisfied Fusion was being completely straight with everyone about this area: http://www.kennygorman.com/wordpress/?p=398\n> \n> If it takes 30 minutes to recover, but it does recover, I guess that's better than I feared was the case with them. Thanks for reporting the plug pull tests--I don't trust any report from anyone about new storage hardware that doesn't include that little detail as part of the testing. You're just asking to have your data get lost without that basic due diligence, and I'm sure not going to even buy eval hardware from a vendor that appears evasive about it. There's a reason I don't personally own any SSD hardware yet.\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 17 Mar 2010 12:23:53 -0700",
"msg_from": "Kenny Gorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:\n> On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:\n> \n> > On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:\n> >> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which wear levels across 100GB of actual installed capacity. \n> >> http://community.fusionio.com/forums/p/34/258.aspx#258\n> >> \n> > \n> > 20% of overall capacity free for levelling doesn't strike me as a lot.\n> \n> I don't have any idea how to judge what amount would be right.\n> \n> > Some of the Enterprise grade stuff we are looking into (like TMS RamSan)\n> > leaves 40% (with much larger overall capacity).\n> > \n> > Also, running that drive at 80GB is the \"Maximum Capacity\" mode, which\n> > decreases the write performance.\n> \n> Very fair. In my favor, my proposed use case is probably at half capacity or less. I am getting the impression that partitioning/formatting the drive for the intended usage, and not the max capacity, is the way to go. Capacity isn't an issue with this workload. I cannot fit enough drives into these servers to get a tenth of the IOPS that even Tom's documents the ioDrive is capable of at reduced performance levels.\n\n\nThe actual media is only good for a very limited number of write cycles. The way that the drives get around to be reliable is to \nconstantly write to different areas. The more you have free, the less you have to re-use, the longer the lifespan.\n\nThis is done by the drives wear levelling algorithms, not by using\npartitioning utilities btw.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Wed, 17 Mar 2010 16:01:56 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Wed, 17 Mar 2010, Brad Nicholson wrote:\n\n> On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:\n>> On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:\n>>\n>>> On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:\n>>>> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which wear levels across 100GB of actual installed capacity.\n>>>> http://community.fusionio.com/forums/p/34/258.aspx#258\n>>>>\n>>>\n>>> 20% of overall capacity free for levelling doesn't strike me as a lot.\n>>\n>> I don't have any idea how to judge what amount would be right.\n>>\n>>> Some of the Enterprise grade stuff we are looking into (like TMS RamSan)\n>>> leaves 40% (with much larger overall capacity).\n>>>\n>>> Also, running that drive at 80GB is the \"Maximum Capacity\" mode, which\n>>> decreases the write performance.\n>>\n>> Very fair. In my favor, my proposed use case is probably at half capacity or less. I am getting the impression that partitioning/formatting the drive for the intended usage, and not the max capacity, is the way to go. Capacity isn't an issue with this workload. I cannot fit enough drives into these servers to get a tenth of the IOPS that even Tom's documents the ioDrive is capable of at reduced performance levels.\n>\n>\n> The actual media is only good for a very limited number of write cycles. The way that the drives get around to be reliable is to\n> constantly write to different areas. The more you have free, the less you have to re-use, the longer the lifespan.\n>\n> This is done by the drives wear levelling algorithms, not by using\n> partitioning utilities btw.\n\ntrue, but if the drive is partitioned so that parts of it are never \nwritten to by the OS, the drive knows that those parts don't contain data \nand so can treat them as unallocated.\n\nonce the OS writes to a part of the drive, unless the OS issues a trim \ncommand the drive can't know that the data there is worthless and can be \nignored, it has to try and preserve that data, which makes doing the wear \nleveling harder and slower.\n\nDavid Lang\n",
"msg_date": "Wed, 17 Mar 2010 13:14:19 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
},
{
"msg_contents": "On Mar 17, 2010, at 7:41 AM, Brad Nicholson wrote:\n\n> As an aside, some folks in our Systems Engineering department here did\n> do some testing of FusionIO, and they found that the helper daemons were\n> inefficient and placed a fair amount of load on the server. That might\n> be something to watch of for for those that are testing them.\n\nAs another anecdote, we have 4 of the 160GB cards in a 24-core Istanbul server. I don't know how efficient the helper daemons are, but they do take up about half of one core's cycles, regardless of how busy the box actually is. So that sounds \"bad\".... until you take into account how much that one core costs, and compare it to how much it would cost to have the same amount of IOPs in a different form. ",
"msg_date": "Wed, 17 Mar 2010 15:53:43 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing FusionIO"
}
] |
[
{
"msg_contents": "Hi all;\n\nwe've found that partition elimination is not happening for a prepared \nstatement, however running the same statement in psql manually does give us \npartition elimination.\n\nIs this a known issue?\n\n",
"msg_date": "Mon, 8 Mar 2010 10:24:56 -0700",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "prepared statements and partitioning (partition elimination not\n\tworking)"
},
{
"msg_contents": "On Mon, Mar 08, 2010 at 10:24:56AM -0700, Kevin Kempter wrote:\n> Hi all;\n> \n> we've found that partition elimination is not happening for a prepared \n> statement, however running the same statement in psql manually does give us \n> partition elimination.\n> \n> Is this a known issue?\n> \n\nYes, see the recent threads on performance of prepared queries. \nIt concerns the availability of information on the query inputs\nthat is available to psql and not a pre-prepared query.\n\nCheers,\nKen\n",
"msg_date": "Mon, 8 Mar 2010 11:31:43 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: prepared statements and partitioning (partition\n\telimination not working)"
}
] |
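A minimal sketch of the behaviour described in the reply above, against a hypothetical parent table "events" partitioned by a "logdate" column with CHECK constraints on its children (the names and the date value are illustrative assumptions, not taken from the thread). With a literal in the WHERE clause the planner can apply constraint exclusion while planning; a prepared statement is planned for an unknown parameter, so every child remains in the plan:

-- constraint_exclusion must be 'on' or 'partition' (the 8.4 default is 'partition')
-- literal value: children whose CHECK constraint cannot match are pruned at plan time
EXPLAIN SELECT count(*) FROM events WHERE logdate = DATE '2010-03-01';

-- prepared statement: the generic plan must cover any possible $1,
-- so all children appear in the plan
PREPARE q(date) AS SELECT count(*) FROM events WHERE logdate = $1;
EXPLAIN EXECUTE q(DATE '2010-03-01');

The usual workaround on 8.4 is to build the statement text with the literal embedded, for example via EXECUTE of a constructed string in PL/pgSQL, or by sending unprepared queries from the client.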
[
{
"msg_contents": "Hello,\n\nI am evaluating a materialized view implemented as partitioned table.\nAt the moment the table is partitioned yearly and contains 5\nnumeric/timestamp columns. One of the columns is ID (but it's not what\nthe table is partitioned on).\n\nPartition for one year occupies about 1200 MB. Each of the columns is\nindexed, with each index weighing about 160 MB. I am trying to avoid\nRAM/disk thrashing. Now I have the following questions:\n\n1. When I query the table by ID, it performs index scan on each\npartition. The result is only found in one partition, but I understand\nwhy it needs to look in all of them. How much disk reading does it\ninvolve? Is only the \"head\" of indexes for partitions that do not\ninclude the row scanned, or are always whole indexes read? I would\nlike to know the general rule for index scans.\n\n2. Is it possible to tell which PG objects are read from disk (because\nthey were not found in RAM)?\n\nThank you.\n\n-- \nKonrad Garus\n",
"msg_date": "Mon, 8 Mar 2010 19:28:30 +0100",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Paritioning vs. caching"
},
{
"msg_contents": "If the partitioned column in your where clause does not use hardcoded\nvalues ...e.g datecolumn between 'year1' and 'year2' ..the query\nplanner will check all partitions ..this is a known issue with the\noptimizer\n\nOn Mon, Mar 8, 2010 at 10:28 AM, Konrad Garus <[email protected]> wrote:\n> Hello,\n>\n> I am evaluating a materialized view implemented as partitioned table.\n> At the moment the table is partitioned yearly and contains 5\n> numeric/timestamp columns. One of the columns is ID (but it's not what\n> the table is partitioned on).\n>\n> Partition for one year occupies about 1200 MB. Each of the columns is\n> indexed, with each index weighing about 160 MB. I am trying to avoid\n> RAM/disk thrashing. Now I have the following questions:\n>\n> 1. When I query the table by ID, it performs index scan on each\n> partition. The result is only found in one partition, but I understand\n> why it needs to look in all of them. How much disk reading does it\n> involve? Is only the \"head\" of indexes for partitions that do not\n> include the row scanned, or are always whole indexes read? I would\n> like to know the general rule for index scans.\n>\n> 2. Is it possible to tell which PG objects are read from disk (because\n> they were not found in RAM)?\n>\n> Thank you.\n>\n> --\n> Konrad Garus\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 8 Mar 2010 11:27:00 -0800",
"msg_from": "Anj Adu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paritioning vs. caching"
},
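A short sketch of the point above, against a hypothetical parent table "measurements" whose children carry CHECK constraints on "logdate" (illustrative names only): hardcoded bounds can be compared with the child constraints at plan time, while values only known at run time cannot, so every child is scanned.

SET constraint_exclusion = partition;   -- the 8.4 default

-- hardcoded bounds: only the matching children are scanned
EXPLAIN SELECT * FROM measurements
WHERE logdate BETWEEN DATE '2009-01-01' AND DATE '2009-12-31';

-- bound computed at run time: the planner keeps all children in the plan
EXPLAIN SELECT * FROM measurements
WHERE logdate >= now() - interval '1 year';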
{
"msg_contents": "\n> 1. When I query the table by ID, it performs index scan on each\n> partition. The result is only found in one partition, but I understand\n> why it needs to look in all of them. How much disk reading does it\n> involve? Is only the \"head\" of indexes for partitions that do not\n> include the row scanned, or are always whole indexes read? I would\n> like to know the general rule for index scans.\n\nIf you're not including the partition criterion in most of your queries,\nyou're probably partitioning on the wrong value.\n\n--Josh Berkus\n",
"msg_date": "Mon, 08 Mar 2010 12:30:03 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paritioning vs. caching"
}
] |
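Regarding the second question in this thread (telling which objects had to be fetched rather than found in the buffer cache), a sketch using the standard statistics views; note that a block counted as "read" may still have been served from the operating system's page cache, so these figures are only an upper bound on real disk I/O:

-- hit = found in shared_buffers, read = requested from the operating system
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;

SELECT indexrelname, idx_blks_read, idx_blks_hit
FROM pg_statio_user_indexes
ORDER BY idx_blks_read DESC
LIMIT 10;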
[
{
"msg_contents": "Hi All,\n\nWe have installed postgres 8.4.2 on production.\n\nWe have a parition table structure for one of the table.\n\nwhen i am drop the master table we get the following error.\n\ndrop table table_name cascade;\nWARNING: out of shared memory\nERROR: out of shared memory\nHINT: You might need to increase max_locks_per_transaction.\n\nFYI : This sql was working fine on Postgres 8.3\n\nCan some one please let me know what is going wrong here.\n\nRegards\nVidhya\n\nHi All,\n \nWe have installed postgres 8.4.2 on production.\n \nWe have a parition table structure for one of the table. \n \nwhen i am drop the master table we get the following error.\n \ndrop table table_name cascade;WARNING: out of shared memoryERROR: out of shared memoryHINT: You might need to increase max_locks_per_transaction.\n \nFYI : This sql was working fine on Postgres 8.3\n \nCan some one please let me know what is going wrong here.\n \nRegards\nVidhya",
"msg_date": "Tue, 9 Mar 2010 15:08:53 +0530",
"msg_from": "Vidhya Bondre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Out of shared memory in postgres 8.4.2 and locks"
},
{
"msg_contents": "On Tue, Mar 9, 2010 at 4:38 AM, Vidhya Bondre <[email protected]> wrote:\n> Hi All,\n>\n> We have installed postgres 8.4.2 on production.\n>\n> We have a parition table structure for one of the table.\n>\n> when i am drop the master table we get the following error.\n>\n> drop table table_name cascade;\n> WARNING: out of shared memory\n> ERROR: out of shared memory\n> HINT: You might need to increase max_locks_per_transaction.\n>\n> FYI : This sql was working fine on Postgres 8.3\n>\n> Can some one please let me know what is going wrong here.\n\nare you using the same postgresql.conf? have you created more\npartitions? using advisory locks?\n\nIn any event, increase the max_locks_per_transaction setting and\nrestart the database.\n\nmerlin\n",
"msg_date": "Tue, 9 Mar 2010 07:34:11 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of shared memory in postgres 8.4.2 and locks"
},
{
"msg_contents": "On Tue, Mar 9, 2010 at 6:04 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Tue, Mar 9, 2010 at 4:38 AM, Vidhya Bondre <[email protected]>\n> wrote:\n> > Hi All,\n> >\n> > We have installed postgres 8.4.2 on production.\n> >\n> > We have a parition table structure for one of the table.\n> >\n> > when i am drop the master table we get the following error.\n> >\n> > drop table table_name cascade;\n> > WARNING: out of shared memory\n> > ERROR: out of shared memory\n> > HINT: You might need to increase max_locks_per_transaction.\n> >\n> > FYI : This sql was working fine on Postgres 8.3\n> >\n> > Can some one please let me know what is going wrong here.\n>\n> are you using the same postgresql.conf? have you created more\n> partitions? using advisory locks?\n>\nYes we are using same conf files. In a week we create around 5 partitions.\nWe are not using advisory locks\n\n>\n> In any event, increase the max_locks_per_transaction setting and\n> restart the database.\n>\n\nCurrently the value of max_locks_per_transaction is 64 modifying it to 96\nworks.\nHave a couple of questions\n1] As and when we add partitions will we have to increase this parameter ?\n2] will we have to consider any othe parameter twick while increasing this\none?\n\n\n>\n> merlin\n>\n\n\nOn Tue, Mar 9, 2010 at 6:04 PM, Merlin Moncure <[email protected]> wrote:\n\n\n\nOn Tue, Mar 9, 2010 at 4:38 AM, Vidhya Bondre <[email protected]> wrote:> Hi All,>> We have installed postgres 8.4.2 on production.\n>> We have a parition table structure for one of the table.>> when i am drop the master table we get the following error.>> drop table table_name cascade;> WARNING: out of shared memory\n> ERROR: out of shared memory> HINT: You might need to increase max_locks_per_transaction.>> FYI : This sql was working fine on Postgres 8.3>> Can some one please let me know what is going wrong here.\nare you using the same postgresql.conf? have you created morepartitions? using advisory locks?\nYes we are using same conf files. In a week we create around 5 partitions. We are not using advisory locks\nIn any event, increase the max_locks_per_transaction setting andrestart the database.\n \nCurrently the value of max_locks_per_transaction is 64 modifying it to 96 works.\nHave a couple of questions \n1] As and when we add partitions will we have to increase this parameter ?\n2] will we have to consider any othe parameter twick while increasing this one?\n \nmerlin",
"msg_date": "Tue, 9 Mar 2010 18:57:17 +0530",
"msg_from": "Vidhya Bondre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of shared memory in postgres 8.4.2 and locks"
},
{
"msg_contents": "On Tue, Mar 9, 2010 at 8:27 AM, Vidhya Bondre <[email protected]> wrote:\n>>\n>> are you using the same postgresql.conf? have you created more\n>> partitions? using advisory locks?\n>\n> Yes we are using same conf files. In a week we create around 5 partitions.\n> We are not using advisory locks\n>>\n>> In any event, increase the max_locks_per_transaction setting and\n>> restart the database.\n>\n>\n> Currently the value of max_locks_per_transaction is 64 modifying it to 96\n> works.\n> Have a couple of questions\n> 1] As and when we add partitions will we have to increase this parameter ?\n> 2] will we have to consider any othe parameter twick while increasing this\n> one?\n\nIf you are doing 'in transaction' operations that involve a lot of\ntables this figure has to be bumped, sometimes significantly. You\nmight be tempted just crank it, and be done with this problem. If\nso, be advised of what happens when you do:\n\n*) more shared memory usage (make sure you have memory and\nshared_buffers is appropriately set). however for what you get the\nusage is relatively modest.\n*) anything that scans the entire in memory lock table takes longer in\nrelationship to this .conf value. AFAIK, the only noteworthy thing\nthat does this is the pg_locks view.\n\nThe 'other' big shared memory tradeoff you historically had to deal\nwith, the fsm map, is gone in 8.4.\n\nmerlin\n",
"msg_date": "Tue, 9 Mar 2010 10:03:16 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of shared memory in postgres 8.4.2 and locks"
}
] |
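A sketch of the arithmetic behind the advice in this thread; the value 128 below is only an example, not a figure recommended by the posters. The shared lock table has room for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks in total, and a DROP ... CASCADE on a partitioned parent takes one lock per partition and dependent object within a single transaction, which is what exhausts it:

SHOW max_locks_per_transaction;

-- watch how many locks the session running the DROP is holding
SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();

-- raising the limit means editing postgresql.conf and restarting, e.g.
--   max_locks_per_transaction = 128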
[
{
"msg_contents": "Dear PostgreSQL Creators, I am frequently using PostgreSQL server to manage the data, but I am stuck ed now with a problem of large objects deleting, namely it works too slow. E.g., deleting of 900 large objects of 1 Mb size takes around 2.31 minutes. This dataset is not largest one which I am working with, e.g. deleting of 3000x1Mb objects takes around half an hour. Could you, please, give me a few advices what do I have to do to improve the deleting time? (i've tried to extend the memory server uses, but results are the same) Best regards!\n\n\n \nDear PostgreSQL Creators, I am frequently using PostgreSQL server to manage the data, but I am stuck ed now with a problem of large objects deleting, namely it works too slow. E.g., deleting of 900 large objects of 1 Mb size takes around 2.31 minutes. This dataset is not largest one which I am working with, e.g. deleting of 3000x1Mb objects takes around half an hour. Could you, please, give me a few\n advices what do I have to do to improve the deleting time? (i've tried to extend the memory server uses, but results are the same) Best regards!",
"msg_date": "Tue, 9 Mar 2010 10:44:50 -0800 (PST)",
"msg_from": "John KEA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Deleting Large Objects"
},
{
"msg_contents": "John KEA <[email protected]> wrote:\n \n> I am stuck ed now with a problem of large objects deleting, \n \n> please, give me a few advices what do I have to do to improve the\n> deleting time?\n \nYou've come to the right place, but we need more information to be\nable to help. Please review this page and repost with the suggested\ninformation.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Tue, 09 Mar 2010 13:28:36 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Large Objects"
}
] |
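No schema details or timings follow in this thread, so only a hedged sketch of the usual first steps: when each large object is unlinked in its own transaction the per-transaction overhead tends to dominate, so batching the lo_unlink() calls server-side inside one transaction usually helps. The table and column names below are assumptions made purely for illustration:

-- assuming the large-object OIDs are referenced from my_table.blob_oid
BEGIN;
SELECT lo_unlink(blob_oid) FROM my_table WHERE created < now() - interval '30 days';
DELETE FROM my_table WHERE created < now() - interval '30 days';
COMMIT;

-- large objects no longer referenced anywhere can be removed in bulk with the
-- contrib utility: vacuumlo <dbname>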
[
{
"msg_contents": "Hi group,\n\n\nWe have two related tables with event types and events. We query for a join\nbetween these two tables and experience that, when there is an\nto-be-expected very small result set, this query performs particularly\npoor. Understanding in this matter would be appreciated.\n\nSELECT * from events_event_types WHERE id IN (71,999);\n id | name | severity\n----+------------------------+----------\n 71 | Xenteo Payment handled | 20\n(1 row)\n\n\nFollowing original query returns zero rows (as to be expected on what I\nshowed above) and takes (relatively) a lot of time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 397.564 ms\n\nFollowing query is much alike the original query, but I changed the \"WHERE\nseverity\". It returns the number of rows are requested in LIMIT and takes\nonly little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id IN\n(71)) ORDER BY datetime DESC limit 50;\n...\n(50 rows)\nTime: 1.604 ms\n\nThe latter much to prove that this is a problem related to small result\nsets.\n\nFollowing query is much alike the original query, although I've added a\ndummy value (non-existent in event types table; \"999\") to the WHERE IN\nclause. It returns the same zero rows and takes only little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71, 999)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 1.340 ms\n\nNow I have at least two possibilities:\n- Implementing the dummy value as shown above in my source code to improve\nquery performance (dirty but effective)\n- Further investigating what is going on, which at this point is something\nI need help with\nThanks for your assistance in this matter!\n\n\nFollowing are a number of details to describe the environment that this is\nseen in.\n\nSELECT version();\nPostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4\n(Ubuntu 4.2.4-1ubuntu3)\n\nPostgres was installed as Debian package in Ubuntu 8.04 LTS.\n\nSELECT count(*) FROM events_events;\n7619991\nSELECT count(*) FROM events_events WHERE eventtype_id=71;\n50348\nSELECT count(*) FROM events_event_types;\n82\n\n\\d events_event_types\n Table \"public.events_event_types\"\n Column | Type | Modifiers\n----------+------------------------+-----------------------------------------------------------------\n id | bigint | not null default nextval\n('events_event_types_id_seq'::regclass)\n name | character varying(255) | not null\n severity | bigint | not null\nIndexes:\n \"events_event_types_pkey\" PRIMARY KEY, btree (id)\n \"events_event_types_name_key\" UNIQUE, btree (name)\n 
\"events_event_types_severity_ind\" btree (severity)\n \"test_events_eventtypes_id_severity_ind\" btree (id, severity)\n \"test_events_eventtypes_severity_id_ind\" btree (severity, id)\n\n\\d events_events\n Table \"public.events_events\"\n Column | Type |\nModifiers\n----------------------+--------------------------+------------------------------------------------------------\n id | bigint | not null default nextval\n('events_events_id_seq'::regclass)\n carparkid | bigint |\n cleared | boolean | not null\n datetime | timestamp with time zone |\n identity | character varying(255) |\n generatedbystationid | bigint |\n eventtype_id | bigint | not null\n relatedstationid | bigint |\n processingstatus | character varying(255) | not null\nIndexes:\n \"events_events_pkey\" PRIMARY KEY, btree (id)\n \"events_events_cleared_ind\" btree (cleared)\n \"events_events_datetime_eventtype_id_ind\" btree (datetime,\neventtype_id)\n \"events_events_datetime_ind\" btree (datetime)\n \"events_events_eventtype_id_datetime_ind\" btree (eventtype_id,\ndatetime)\n \"events_events_eventtype_id_ind\" btree (eventtype_id)\n \"events_events_identity_ind\" btree (identity)\n \"events_events_not_cleared_ind\" btree (cleared) WHERE NOT cleared\n \"events_events_processingstatus_new\" btree (processingstatus) WHERE\nprocessingstatus::text = 'NEW'::text\n \"test2_events_events_eventtype_id_severity_ind\" btree (datetime,\neventtype_id, cleared)\n \"test3_events_events_eventtype_id_severity_ind\" btree (cleared,\ndatetime, eventtype_id)\n \"test4_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared, eventtype_id)\n \"test5_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared)\n \"test_events_events_eventtype_id_severity_ind\" btree (eventtype_id,\ncleared)\nForeign-key constraints:\n \"fk88fe3effa0559276\" FOREIGN KEY (eventtype_id) REFERENCES\nevents_event_types(id)\n\nGroeten, best regards,\n\n\nSander Verhagen\n\nHi group,\n\n\nWe have two related tables with event types and events. We query for a join between these two tables and experience that, when there is an to-be-expected very small result set, this query performs particularly poor. Understanding in this matter would be appreciated.\n\nSELECT * from events_event_types WHERE id IN (71,999);\n id | name | severity\n----+------------------------+----------\n 71 | Xenteo Payment handled | 20\n(1 row)\n\n\nFollowing original query returns zero rows (as to be expected on what I showed above) and takes (relatively) a lot of time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN (71)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid | eventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 397.564 ms\n\nFollowing query is much alike the original query, but I changed the \"WHERE severity\". 
It returns the number of rows are requested in LIMIT and takes only little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON eventType_id=events_event_types.id WHERE severity=20 AND (eventType_id IN (71)) ORDER BY datetime DESC limit 50;\n...\n(50 rows)\nTime: 1.604 ms\n\nThe latter much to prove that this is a problem related to small result sets.\n\nFollowing query is much alike the original query, although I've added a dummy value (non-existent in event types table; \"999\") to the WHERE IN clause. It returns the same zero rows and takes only little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN (71, 999)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid | eventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 1.340 ms\n\nNow I have at least two possibilities:\n- Implementing the dummy value as shown above in my source code to improve query performance (dirty but effective)\n- Further investigating what is going on, which at this point is something I need help with\nThanks for your assistance in this matter!\n\n\nFollowing are a number of details to describe the environment that this is seen in.\n\nSELECT version();\nPostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu3)\n\nPostgres was installed as Debian package in Ubuntu 8.04 LTS.\n\nSELECT count(*) FROM events_events;\n7619991\nSELECT count(*) FROM events_events WHERE eventtype_id=71;\n50348\nSELECT count(*) FROM events_event_types;\n82\n\n\\d events_event_types\n Table \"public.events_event_types\"\n Column | Type | Modifiers\n----------+------------------------+-----------------------------------------------------------------\n id | bigint | not null default nextval('events_event_types_id_seq'::regclass)\n name | character varying(255) | not null\n severity | bigint | not null\nIndexes:\n \"events_event_types_pkey\" PRIMARY KEY, btree (id)\n \"events_event_types_name_key\" UNIQUE, btree (name)\n \"events_event_types_severity_ind\" btree (severity)\n \"test_events_eventtypes_id_severity_ind\" btree (id, severity)\n \"test_events_eventtypes_severity_id_ind\" btree (severity, id)\n\n\\d events_events\n Table \"public.events_events\"\n Column | Type | Modifiers\n----------------------+--------------------------+------------------------------------------------------------\n id | bigint | not null default nextval('events_events_id_seq'::regclass)\n carparkid | bigint |\n cleared | boolean | not null\n datetime | timestamp with time zone |\n identity | character varying(255) |\n generatedbystationid | bigint |\n eventtype_id | bigint | not null\n relatedstationid | bigint |\n processingstatus | character varying(255) | not null\nIndexes:\n \"events_events_pkey\" PRIMARY KEY, btree (id)\n \"events_events_cleared_ind\" btree (cleared)\n \"events_events_datetime_eventtype_id_ind\" btree (datetime, eventtype_id)\n \"events_events_datetime_ind\" btree (datetime)\n \"events_events_eventtype_id_datetime_ind\" btree (eventtype_id, datetime)\n \"events_events_eventtype_id_ind\" btree (eventtype_id)\n \"events_events_identity_ind\" btree (identity)\n \"events_events_not_cleared_ind\" btree (cleared) WHERE NOT cleared\n 
\"events_events_processingstatus_new\" btree (processingstatus) WHERE processingstatus::text = 'NEW'::text\n \"test2_events_events_eventtype_id_severity_ind\" btree (datetime, eventtype_id, cleared)\n \"test3_events_events_eventtype_id_severity_ind\" btree (cleared, datetime, eventtype_id)\n \"test4_events_events_eventtype_id_severity_ind\" btree (datetime, cleared, eventtype_id)\n \"test5_events_events_eventtype_id_severity_ind\" btree (datetime, cleared)\n \"test_events_events_eventtype_id_severity_ind\" btree (eventtype_id, cleared)\nForeign-key constraints:\n \"fk88fe3effa0559276\" FOREIGN KEY (eventtype_id) REFERENCES events_event_types(id)\n\nGroeten, best regards,\n\n\nSander Verhagen",
"msg_date": "Wed, 10 Mar 2010 09:41:26 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Strange workaround for slow query"
},
{
"msg_contents": "Hello Sander,\n\nCan you post the explain plan output of these queries?\n\n> SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON \n> eventType_id=events_event_types.id WHERE severity=20 AND (eventType_id \n> IN (71)) ORDER BY datetime DESC limit 50;\n>\n> SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON \n> eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id \n> IN (71, 999)) ORDER BY datetime DESC LIMIT 50;\n>\nregards\nYeb Havinga\n",
"msg_date": "Wed, 10 Mar 2010 10:11:05 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query"
},
{
"msg_contents": "Hi,\n\n\nEXPLAIN SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id\nIN (71)) ORDER BY datetime DESC limit 50;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3.23..200.31 rows=50 width=131)\n -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131)\n -> Index Scan Backward using\nevents_events_eventtype_id_datetime_ind on events_events\n(cost=0.00..48886.61 rows=12466 width=93)\n Index Cond: (eventtype_id = 71)\n -> Materialize (cost=3.23..3.24 rows=1 width=38)\n -> Seq Scan on events_event_types (cost=0.00..3.23 rows=1\nwidth=38)\n Filter: ((id = 71) AND (severity = 20))\n\n\nEXPLAIN SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\nIN (71, 999)) ORDER BY datetime DESC LIMIT 50;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=27290.24..27290.37 rows=50 width=131)\n -> Sort (cost=27290.24..27303.15 rows=5164 width=131)\n Sort Key: events_events.datetime\n -> Nested Loop (cost=22.95..27118.70 rows=5164 width=131)\n -> Seq Scan on events_event_types (cost=0.00..3.02 rows=17\nwidth=38)\n Filter: (severity = 70)\n -> Bitmap Heap Scan on events_events (cost=22.95..1589.94\nrows=408 width=93)\n Recheck Cond: ((events_events.eventtype_id = ANY\n('{71,999}'::bigint[])) AND (events_events.eventtype_id =\nevents_event_types.id))\n -> Bitmap Index Scan on\ntest_events_events_eventtype_id_severity_ind (cost=0.00..22.85 rows=408\nwidth=0)\n Index Cond: ((events_events.eventtype_id = ANY\n('{71,999}'::bigint[])) AND (events_events.eventtype_id =\nevents_event_types.id))\n\nBy the way, sorry for my colleague Kees re-posting my message, but I was\nunder the assumption that my post did not make it into the group (as we\nexperienced in the past as well).\n\nGroeten, best regards,\n\n\nSander Verhagen\n\nHi,\n\n\nEXPLAIN SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id\nIN (71)) ORDER BY datetime DESC limit 50;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3.23..200.31 rows=50 width=131)\n -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131)\n -> Index Scan Backward using events_events_eventtype_id_datetime_ind on events_events (cost=0.00..48886.61 rows=12466 width=93)\n Index Cond: (eventtype_id = 71)\n -> Materialize (cost=3.23..3.24 rows=1 width=38)\n -> Seq Scan on events_event_types (cost=0.00..3.23 rows=1 width=38)\n Filter: ((id = 71) AND (severity = 20))\n\n\nEXPLAIN SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\nIN (71, 999)) ORDER BY datetime DESC LIMIT 50;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=27290.24..27290.37 rows=50 width=131)\n -> Sort (cost=27290.24..27303.15 rows=5164 width=131)\n Sort Key: events_events.datetime\n -> Nested Loop (cost=22.95..27118.70 rows=5164 width=131)\n -> Seq Scan on 
events_event_types (cost=0.00..3.02 rows=17 width=38)\n Filter: (severity = 70)\n -> Bitmap Heap Scan on events_events (cost=22.95..1589.94 rows=408 width=93)\n Recheck Cond: ((events_events.eventtype_id = ANY ('{71,999}'::bigint[])) AND (events_events.eventtype_id = events_event_types.id))\n -> Bitmap Index Scan on test_events_events_eventtype_id_severity_ind (cost=0.00..22.85 rows=408 width=0)\n Index Cond: ((events_events.eventtype_id = ANY ('{71,999}'::bigint[])) AND (events_events.eventtype_id = events_event_types.id))\n\nBy the way, sorry for my colleague Kees re-posting my message, but I was under the assumption that my post did not make it into the group (as we experienced in the past as well).\n\nGroeten, best regards,\n\n\nSander Verhagen",
"msg_date": "Wed, 10 Mar 2010 10:27:40 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Strange workaround for slow query"
},
{
"msg_contents": "[email protected] wrote:\n>\n> Hi,\n>\n>\n> EXPLAIN SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\n> eventType_id=events_event_types.id WHERE severity=20 AND (eventType_id\n> IN (71)) ORDER BY datetime DESC limit 50;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=3.23..200.31 rows=50 width=131)\n> -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131)\n> -> Index Scan Backward using events_events_eventtype_id_datetime_ind \n> on events_events (cost=0.00..48886.61 rows=12466 width=93)\n> Index Cond: (eventtype_id = 71)\n> -> Materialize (cost=3.23..3.24 rows=1 width=38)\n> -> Seq Scan on events_event_types (cost=0.00..3.23 rows=1 width=38)\n> Filter: ((id = 71) AND (severity = 20))\n>\n>\n> EXPLAIN SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\n> eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\n> IN (71, 999)) ORDER BY datetime DESC LIMIT 50;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=27290.24..27290.37 rows=50 width=131)\n> -> Sort (cost=27290.24..27303.15 rows=5164 width=131)\n> Sort Key: events_events.datetime\n> -> Nested Loop (cost=22.95..27118.70 rows=5164 width=131)\n> -> Seq Scan on events_event_types (cost=0.00..3.02 rows=17 width=38)\n> Filter: (severity = 70)\n> -> Bitmap Heap Scan on events_events (cost=22.95..1589.94 rows=408 \n> width=93)\n> Recheck Cond: ((events_events.eventtype_id = ANY \n> ('{71,999}'::bigint[])) AND (events_events.eventtype_id = \n> events_event_types.id))\n> -> Bitmap Index Scan on test_events_events_eventtype_id_severity_ind \n> (cost=0.00..22.85 rows=408 width=0)\n> Index Cond: ((events_events.eventtype_id = ANY ('{71,999}'::bigint[])) \n> AND (events_events.eventtype_id = events_event_types.id))\n>\nThanks - I'm sorry that I was not more specific earlier, but what would \nbe *really* helpful is the output of explain analyze, since that also \nshows actual time, # rows and # loops of the inner nestloop. I'm \nwondering though why you do a left outer join. From the \\d output in the \nprevious mail, events_event.eventtype_id has a not null constraint and a \nfk to events_event_types.id, so an inner join would be appropriate. \nOuter joins limits the amount of join orders the planner considers, so a \nbetter plan might arise when the join is changed to inner.\n\nregards\nYeb Havinga\n\n",
"msg_date": "Wed, 10 Mar 2010 11:11:17 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query"
},
{
"msg_contents": "> Thanks - I'm sorry that I was not more specific earlier, but what would\n> be *really* helpful is the output of explain analyze, since that also\n> shows actual time, # rows and # loops of the inner nestloop.\n\nNo problem at all.\n\n\nEXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN\nevents_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id\nIN (71)) ORDER BY datetime DESC limit 50;\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3.23..200.31 rows=50 width=131) (actual time=0.070..0.341\nrows=50 loops=1)\n -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131) (actual\ntime=0.069..0.309 rows=50 loops=1)\n -> Index Scan Backward using\nevents_events_eventtype_id_datetime_ind on events_events\n(cost=0.00..48886.61 rows=12466 width=93) (actual time=0.037..0.144 rows=50\nloops=1)\n Index Cond: (eventtype_id = 71)\n -> Materialize (cost=3.23..3.24 rows=1 width=38) (actual\ntime=0.001..0.001 rows=1 loops=50)\n -> Seq Scan on events_event_types (cost=0.00..3.23 rows=1\nwidth=38) (actual time=0.024..0.029 rows=1 loops=1)\n Filter: ((id = 71) AND (severity = 20))\n Total runtime: 0.415 ms\n(8 rows)\n\nTime: 1.290 ms\n\n\nEXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN\nevents_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\nIN (71)) ORDER BY datetime DESC limit 50;\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3.23..200.31 rows=50 width=131) (actual\ntime=11641.775..11641.775 rows=0 loops=1)\n -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131) (actual\ntime=11641.773..11641.773 rows=0 loops=1)\n -> Index Scan Backward using\nevents_events_eventtype_id_datetime_ind on events_events\n(cost=0.00..48886.61 rows=12466 width=93) (actual time=0.035..11573.320\nrows=50389 loops=1)\n Index Cond: (eventtype_id = 71)\n -> Materialize (cost=3.23..3.24 rows=1 width=38) (actual\ntime=0.000..0.000 rows=0 loops=50389)\n -> Seq Scan on events_event_types (cost=0.00..3.23 rows=1\nwidth=38) (actual time=0.028..0.028 rows=0 loops=1)\n Filter: ((id = 71) AND (severity = 70))\n Total runtime: 11641.839 ms\n(8 rows)\n\nTime: 11642.902 ms\n\n\nEXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN\nevents_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\nIN (71, 999)) ORDER BY datetime DESC LIMIT 50;\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=27290.26..27290.38 rows=50 width=131) (actual\ntime=0.118..0.118 rows=0 loops=1)\n -> Sort (cost=27290.26..27303.17 rows=5164 width=131) (actual\ntime=0.117..0.117 rows=0 loops=1)\n Sort Key: events_events.datetime\n Sort Method: quicksort Memory: 17kB\n -> Nested Loop (cost=22.95..27118.71 rows=5164 width=131)\n(actual time=0.112..0.112 rows=0 loops=1)\n -> Seq Scan on events_event_types (cost=0.00..3.02 rows=17\nwidth=38) (actual time=0.016..0.041 rows=16 loops=1)\n Filter: (severity = 70)\n -> Bitmap Heap Scan on events_events (cost=22.95..1589.94\nrows=408 width=93) (actual time=0.002..0.002 rows=0 loops=16)\n Recheck Cond: 
((events_events.eventtype_id = ANY\n('{71,999}'::bigint[])) AND (events_events.eventtype_id =\nevents_event_types.id))\n -> Bitmap Index Scan on\ntest_events_events_eventtype_id_severity_ind (cost=0.00..22.85 rows=408\nwidth=0) (actual time=0.001..0.001 rows=0 loops=16)\n Index Cond: ((events_events.eventtype_id = ANY\n('{71,999}'::bigint[])) AND (events_events.eventtype_id =\nevents_event_types.id))\n Total runtime: 0.179 ms\n(12 rows)\n\nTime: 1.510 ms\n\n\n> I'm\n> wondering though why you do a left outer join. From the \\d output in the\n> previous mail, events_event.eventtype_id has a not null constraint and a\n> fk to events_event_types.id, so an inner join would be appropriate.\n> Outer joins limits the amount of join orders the planner considers, so a\n> better plan might arise when the join is changed to inner.\n\nI do agree with this assessment. I'm sort of improving the performance of\nan existing implementation of ours, for which I'm not aware why they chose\nfor LEFT OUTER. I did, however, test things with INNER as well, with the\nsame results, so I decided to stick with what I encountered in the existing\nimplementation. But it's on my mind as well ;-)",
"msg_date": "Wed, 10 Mar 2010 11:35:18 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Strange workaround for slow query"
},
{
"msg_contents": "[email protected] wrote:\n>\n> > Thanks - I'm sorry that I was not more specific earlier, but what would\n> > be *really* helpful is the output of explain analyze, since that also\n> > shows actual time, # rows and # loops of the inner nestloop.\n>\n> No problem at all.\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN \n> events_event_types ON\n> eventType_id=events_event_types.id WHERE severity=20 AND (eventType_id\n> IN (71)) ORDER BY datetime DESC limit 50;\n> \n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=3.23..200.31 rows=50 width=131) (actual \n> time=0.070..0.341 rows=50 loops=1)\n> -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131) (actual \n> time=0.069..0.309 rows=50 loops=1)\n> -> Index Scan Backward using \n> events_events_eventtype_id_datetime_ind on events_events \n> (cost=0.00..48886.61 rows=12466 width=93) (actual time=0.037..0.144 \n> rows=50 loops=1)\n> Index Cond: (eventtype_id = 71)\n> -> Materialize (cost=3.23..3.24 rows=1 width=38) (actual \n> time=0.001..0.001 rows=1 loops=50)\n> -> Seq Scan on events_event_types (cost=0.00..3.23 \n> rows=1 width=38) (actual time=0.024..0.029 rows=1 loops=1)\n> Filter: ((id = 71) AND (severity = 20))\n> Total runtime: 0.415 ms\n> (8 rows)\n>\n> Time: 1.290 ms\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN \n> events_event_types ON\n> eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\n> IN (71)) ORDER BY datetime DESC limit 50;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=3.23..200.31 rows=50 width=131) (actual \n> time=11641.775..11641.775 rows=0 loops=1)\n> -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131) (actual \n> time=11641.773..11641.773 rows=0 loops=1)\n> -> Index Scan Backward using \n> events_events_eventtype_id_datetime_ind on events_events \n> (cost=0.00..48886.61 rows=12466 width=93) (actual \n> time=0.035..11573.320 rows=50389 loops=1)\n> Index Cond: (eventtype_id = 71)\n> -> Materialize (cost=3.23..3.24 rows=1 width=38) (actual \n> time=0.000..0.000 rows=0 loops=50389)\n> -> Seq Scan on events_event_types (cost=0.00..3.23 \n> rows=1 width=38) (actual time=0.028..0.028 rows=0 loops=1)\n> Filter: ((id = 71) AND (severity = 70))\n> Total runtime: 11641.839 ms\n> (8 rows)\n>\n> Time: 11642.902 ms\n>\n>\n> EXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN \n> events_event_types ON\n> eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\n> IN (71, 999)) ORDER BY datetime DESC LIMIT 50;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=27290.26..27290.38 rows=50 width=131) (actual \n> time=0.118..0.118 rows=0 loops=1)\n> -> Sort (cost=27290.26..27303.17 rows=5164 width=131) (actual \n> time=0.117..0.117 rows=0 loops=1)\n> Sort Key: events_events.datetime\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop (cost=22.95..27118.71 rows=5164 width=131) \n> (actual time=0.112..0.112 rows=0 loops=1)\n> -> Seq Scan on events_event_types (cost=0.00..3.02 \n> rows=17 width=38) (actual time=0.016..0.041 rows=16 
loops=1)\n> Filter: (severity = 70)\n> -> Bitmap Heap Scan on events_events \n> (cost=22.95..1589.94 rows=408 width=93) (actual time=0.002..0.002 \n> rows=0 loops=16)\n> Recheck Cond: ((events_events.eventtype_id = ANY \n> ('{71,999}'::bigint[])) AND (events_events.eventtype_id = \n> events_event_types.id))\n> -> Bitmap Index Scan on \n> test_events_events_eventtype_id_severity_ind (cost=0.00..22.85 \n> rows=408 width=0) (actual time=0.001..0.001 rows=0 loops=16)\n> Index Cond: ((events_events.eventtype_id = \n> ANY ('{71,999}'::bigint[])) AND (events_events.eventtype_id = \n> events_event_types.id))\n> Total runtime: 0.179 ms\n> (12 rows)\n>\n> Time: 1.510 ms\n>\n>\n> > I'm\n> > wondering though why you do a left outer join. From the \\d output in \n> the\n> > previous mail, events_event.eventtype_id has a not null constraint \n> and a\n> > fk to events_event_types.id, so an inner join would be appropriate.\n> > Outer joins limits the amount of join orders the planner considers, \n> so a\n> > better plan might arise when the join is changed to inner.\n>\n> I do agree with this assessment. I'm sort of improving the performance \n> of an existing implementation of ours, for which I'm not aware why \n> they chose for LEFT OUTER. I did, however, test things with INNER as \n> well, with the same results, so I decided to stick with what I \n> encountered in the existing implementation. But it's on my mind as \n> well ;-)\n>\nThanks for the formatted output. The difference in speed is caused by \nthe first query that has to read 50k rows from an index, with filter \nexpression is only eventtype_id = 71, and the second has the extra \nknowledge from the scan of events_event_type in the nestloops outer \nloop, which returns 0 rows in all cases and is hence a lot faster, even \nthough that scan is called 16 times.\n\nBut the big question is: why does the planner chooses plan 1 in the \nfirst case, or how to fix that? My $0,02 would be to 'help' the planner \nfind a better plan. Ofcourse you did ANALYZE, but maybe another idea is \nto increase the default_statistics_target if it is still at the default \nvalue of 10. More info on \nhttp://www.postgresql.org/docs/8.3/static/runtime-config-query.html. And \nalso to 'help' the planner: I'd just change the query to an inner join \nin this case, since there cannot be null tuples in the right hand side here.\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Wed, 10 Mar 2010 12:04:51 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query"
},
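A sketch of the two suggestions above (a finer statistics target for the skewed column, and an inner join instead of the outer join), reusing the table and column names already shown in this thread; the statistics target of 100 is only an example value:

-- give the planner a finer histogram for eventtype_id, then re-analyze
ALTER TABLE events_events ALTER COLUMN eventtype_id SET STATISTICS 100;
ANALYZE events_events;

-- equivalent inner-join form of the query (safe because eventtype_id is NOT NULL
-- and references events_event_types.id)
SELECT *
FROM events_events e
JOIN events_event_types t ON e.eventtype_id = t.id
WHERE t.severity = 70
  AND e.eventtype_id IN (71)
ORDER BY e.datetime DESC
LIMIT 50;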
{
"msg_contents": "In article <OF6136AD9B.D40F3AF5-ONC12576E2.002D5763-C12576E2.002FBD70@imtechrelay.nl>,\[email protected] writes:\n> SELECT * FROM events_events LEFT OUTER JOIN events_event_types ON eventType_id=\n> events_event_types.id WHERE severity=70 AND (eventType_id IN (71)) ORDER BY\n> datetime DESC LIMIT 50;\n> Now I have at least two possibilities:\n> - Implementing the dummy value as shown above in my source code to improve\n> query performance (dirty but effective)\n> - Further investigating what is going on, which at this point is something I\n> need help with\n\nFirst I'd change the query. You build an OUTER JOIN and immediately\nconvert it to an INNER JOIN by referencing the \"severity\" column.\n\nThen show us what EXPLAIN ANALYZE says to the problem.\n\n",
"msg_date": "Wed, 10 Mar 2010 17:29:25 +0100",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query"
},
{
"msg_contents": "On Wed, Mar 10, 2010 at 6:04 AM, Yeb Havinga <[email protected]> wrote:\n> [email protected] wrote:\n>>\n>> > Thanks - I'm sorry that I was not more specific earlier, but what would\n>> > be *really* helpful is the output of explain analyze, since that also\n>> > shows actual time, # rows and # loops of the inner nestloop.\n>>\n>> No problem at all.\n>>\n>>\n>> EXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN\n>> events_event_types ON\n>> eventType_id=events_event_types.id WHERE severity=20 AND (eventType_id\n>> IN (71)) ORDER BY datetime DESC limit 50;\n>>\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=3.23..200.31 rows=50 width=131) (actual time=0.070..0.341\n>> rows=50 loops=1)\n>> -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131) (actual\n>> time=0.069..0.309 rows=50 loops=1)\n>> -> Index Scan Backward using\n>> events_events_eventtype_id_datetime_ind on events_events\n>> (cost=0.00..48886.61 rows=12466 width=93) (actual time=0.037..0.144 rows=50\n>> loops=1)\n>> Index Cond: (eventtype_id = 71)\n>> -> Materialize (cost=3.23..3.24 rows=1 width=38) (actual\n>> time=0.001..0.001 rows=1 loops=50)\n>> -> Seq Scan on events_event_types (cost=0.00..3.23 rows=1\n>> width=38) (actual time=0.024..0.029 rows=1 loops=1)\n>> Filter: ((id = 71) AND (severity = 20))\n>> Total runtime: 0.415 ms\n>> (8 rows)\n>>\n>> Time: 1.290 ms\n>>\n>>\n>> EXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN\n>> events_event_types ON\n>> eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\n>> IN (71)) ORDER BY datetime DESC limit 50;\n>>\n>> QUERY PLAN\n>>\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=3.23..200.31 rows=50 width=131) (actual\n>> time=11641.775..11641.775 rows=0 loops=1)\n>> -> Nested Loop (cost=3.23..49139.16 rows=12466 width=131) (actual\n>> time=11641.773..11641.773 rows=0 loops=1)\n>> -> Index Scan Backward using\n>> events_events_eventtype_id_datetime_ind on events_events\n>> (cost=0.00..48886.61 rows=12466 width=93) (actual time=0.035..11573.320\n>> rows=50389 loops=1)\n>> Index Cond: (eventtype_id = 71)\n>> -> Materialize (cost=3.23..3.24 rows=1 width=38) (actual\n>> time=0.000..0.000 rows=0 loops=50389)\n>> -> Seq Scan on events_event_types (cost=0.00..3.23 rows=1\n>> width=38) (actual time=0.028..0.028 rows=0 loops=1)\n>> Filter: ((id = 71) AND (severity = 70))\n>> Total runtime: 11641.839 ms\n>> (8 rows)\n>>\n>> Time: 11642.902 ms\n>>\n>>\n>> EXPLAIN ANALYZE SELECT * FROM events_events LEFT OUTER JOIN\n>> events_event_types ON\n>> eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id\n>> IN (71, 999)) ORDER BY datetime DESC LIMIT 50;\n>>\n>> QUERY PLAN\n>>\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=27290.26..27290.38 rows=50 width=131) (actual\n>> time=0.118..0.118 rows=0 loops=1)\n>> -> Sort (cost=27290.26..27303.17 rows=5164 width=131) (actual\n>> time=0.117..0.117 rows=0 loops=1)\n>> Sort Key: events_events.datetime\n>> Sort Method: quicksort Memory: 17kB\n>> -> Nested Loop (cost=22.95..27118.71 rows=5164 width=131)\n>> (actual 
time=0.112..0.112 rows=0 loops=1)\n>> -> Seq Scan on events_event_types (cost=0.00..3.02 rows=17\n>> width=38) (actual time=0.016..0.041 rows=16 loops=1)\n>> Filter: (severity = 70)\n>> -> Bitmap Heap Scan on events_events (cost=22.95..1589.94\n>> rows=408 width=93) (actual time=0.002..0.002 rows=0 loops=16)\n>> Recheck Cond: ((events_events.eventtype_id = ANY\n>> ('{71,999}'::bigint[])) AND (events_events.eventtype_id =\n>> events_event_types.id))\n>> -> Bitmap Index Scan on\n>> test_events_events_eventtype_id_severity_ind (cost=0.00..22.85 rows=408\n>> width=0) (actual time=0.001..0.001 rows=0 loops=16)\n>> Index Cond: ((events_events.eventtype_id = ANY\n>> ('{71,999}'::bigint[])) AND (events_events.eventtype_id =\n>> events_event_types.id))\n>> Total runtime: 0.179 ms\n>> (12 rows)\n>>\n>> Time: 1.510 ms\n>>\n>>\n>> > I'm\n>> > wondering though why you do a left outer join. From the \\d output in the\n>> > previous mail, events_event.eventtype_id has a not null constraint and a\n>> > fk to events_event_types.id, so an inner join would be appropriate.\n>> > Outer joins limits the amount of join orders the planner considers, so a\n>> > better plan might arise when the join is changed to inner.\n>>\n>> I do agree with this assessment. I'm sort of improving the performance of\n>> an existing implementation of ours, for which I'm not aware why they chose\n>> for LEFT OUTER. I did, however, test things with INNER as well, with the\n>> same results, so I decided to stick with what I encountered in the existing\n>> implementation. But it's on my mind as well ;-)\n>>\n> Thanks for the formatted output. The difference in speed is caused by the\n> first query that has to read 50k rows from an index, with filter expression\n> is only eventtype_id = 71, and the second has the extra knowledge from the\n> scan of events_event_type in the nestloops outer loop, which returns 0 rows\n> in all cases and is hence a lot faster, even though that scan is called 16\n> times.\n>\n> But the big question is: why does the planner chooses plan 1 in the first\n> case, or how to fix that? My $0,02 would be to 'help' the planner find a\n> better plan. Ofcourse you did ANALYZE, but maybe another idea is to increase\n> the default_statistics_target if it is still at the default value of 10.\n> More info on\n> http://www.postgresql.org/docs/8.3/static/runtime-config-query.html. And\n> also to 'help' the planner: I'd just change the query to an inner join in\n> this case, since there cannot be null tuples in the right hand side here.\n\nI believe that the planner already figured that part out - it is\nimplemented a Nested Loop, rather than a Nested Loop Left Join.\n\nI think that the planner thinks that the backward index scan plan will\nbe faster because it expects not to have to scan very many rows before\nit finds enough rows that match the join condition to fulfill the\nlimit. It actually ends up scanning the entire index range one tuple\nat a time, which is slow, and backwards, which is slower. But it's\nalso possible to have mistakes in the opposite direction, where the\nplanner thinks it will be faster to bitmap-index-scan the index but it\nturns out that since only a small number of rows are needed a regular\nindex scan (forward or backward) would have been better.\n\nIt does seem like once the materialize step is done we could notice\nthat the tuplestore is empty and, given that uses no outer variables\nor parameters and therefore will never be re-executed, we could skip\nthe rest of the index scan. 
That doesn't really fix the problem in\ngeneral because you could have one row in the tuplestore and need to\niterate through the entire index to find out that nothing matches, but\nit might still be worth doing.\n\n...Robert\n",
"msg_date": "Wed, 10 Mar 2010 16:55:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query"
},
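A sketch of the kind of rewrite this analysis suggests (not from the original thread; table and column names are the ones posted with the query, everything else is an assumption): resolve the severity filter against the small events_event_types table first, then join, so the planner is less tempted by the backward scan on the datetime index. OFFSET 0 is just one way to keep the subquery from being flattened; whether the plan actually changes would have to be verified with EXPLAIN ANALYZE on the real data.

SELECT e.*, t.*
FROM events_events e
JOIN (
    -- tiny lookup resolved first; OFFSET 0 discourages subquery flattening
    SELECT id, name, severity
    FROM events_event_types
    WHERE severity = 70 AND id IN (71)
    OFFSET 0
) t ON e.eventtype_id = t.id
ORDER BY e.datetime DESC
LIMIT 50;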
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> It does seem like once the materialize step is done we could notice\n> that the tuplestore is empty and, given that uses no outer variables\n> or parameters and therefore will never be re-executed, we could skip\n> the rest of the index scan.\n\nYeah, the same thing occurred to me while looking at this example.\n\nRight now, nodeNestloop is not really aware of whether the inner scan\ndepends on any parameters from the outer scan, so it's a bit hard to\ndetermine whether the join can be abandoned. However, I have 9.1\nplans to change that --- I'd like to get rid of the current\npass-the-outer-tuple-to-ReScan hack entirely, in favor of having\nnodeNestloop explicitly set PARAM_EXEC parameters for the inner scan.\nOnce that's in, it would be pretty easy to apply this optimization.\n(I've added a note to my private TODO file about it.)\n\nAnother possible objection is that if the inner scan has any volatile\nfunctions in its quals, it might yield a different result upon rescan\neven without parameters. However, since we are already willing to stick\na Materialize node on it at the whim of the planner, I don't see how\nsuch a short-circuit in the executor would make things any worse.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Mar 2010 17:37:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query "
},
{
"msg_contents": "On Wed, Mar 10, 2010 at 5:37 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> It does seem like once the materialize step is done we could notice\n>> that the tuplestore is empty and, given that uses no outer variables\n>> or parameters and therefore will never be re-executed, we could skip\n>> the rest of the index scan.\n>\n> Yeah, the same thing occurred to me while looking at this example.\n>\n> Right now, nodeNestloop is not really aware of whether the inner scan\n> depends on any parameters from the outer scan, so it's a bit hard to\n> determine whether the join can be abandoned. However, I have 9.1\n> plans to change that --- I'd like to get rid of the current\n> pass-the-outer-tuple-to-ReScan hack entirely, in favor of having\n> nodeNestloop explicitly set PARAM_EXEC parameters for the inner scan.\n> Once that's in, it would be pretty easy to apply this optimization.\n> (I've added a note to my private TODO file about it.)\n\nOh, cool. I was thinking about working on that exact project (getting\nrid of the outer tuple stuff) per some of our earlier conversations,\nbut I don't understand the code well enough so it is likely to be\nexceedingly slow going if I have to do it.\n\n> Another possible objection is that if the inner scan has any volatile\n> functions in its quals, it might yield a different result upon rescan\n> even without parameters. However, since we are already willing to stick\n> a Materialize node on it at the whim of the planner, I don't see how\n> such a short-circuit in the executor would make things any worse.\n\n+1.\n\n...Robert\n",
"msg_date": "Wed, 10 Mar 2010 17:41:35 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote on 10-03-2010 23:37:20:\n\n> Tom Lane <[email protected]>\n> 10-03-2010 23:37\n>\n> Right now, nodeNestloop is not really aware of whether the inner scan\n> depends on any parameters from the outer scan, so it's a bit hard to\n> determine whether the join can be abandoned. However, I have 9.1\n> plans to change that --- I'd like to get rid of the current\n> pass-the-outer-tuple-to-ReScan hack entirely, in favor of having\n> nodeNestloop explicitly set PARAM_EXEC parameters for the inner scan.\n> Once that's in, it would be pretty easy to apply this optimization.\n\nI've made it my personal standard to sit mailing list discussions out to\nthe end, report back on results, etc. This one has been sitting in my Inbox\nfor a couple of months now, waiting for me to respond. But the truth is:\nyou're going way over my head. And that is fine, you seem to know of the\ninner workings of PostgreSQL, and I'll be waiting for future versions that\nmay well solve my problem -- or not, it being a great product either way.\nKeep up the good work, Sander.\n\nTom Lane <[email protected]> wrote on 10-03-2010 23:37:20:\n\n> Tom Lane <[email protected]> \n> 10-03-2010 23:37\n> \n> Right now, nodeNestloop is not really aware of whether the inner scan\n> depends on any parameters from the outer scan, so it's a bit hard to\n> determine whether the join can be abandoned. However, I have 9.1\n> plans to change that --- I'd like to get rid of the current\n> pass-the-outer-tuple-to-ReScan hack entirely, in favor of having\n> nodeNestloop explicitly set PARAM_EXEC parameters for the inner scan.\n> Once that's in, it would be pretty easy to apply this optimization.\n\nI've made it my personal standard to sit mailing list discussions out to the end, report back on results, etc. This one has been sitting in my Inbox for a couple of months now, waiting for me to respond. But the truth is: you're going way over my head. And that is fine, you seem to know of the inner workings of PostgreSQL, and I'll be waiting for future versions that may well solve my problem -- or not, it being a great product either way.\nKeep up the good work, Sander.",
"msg_date": "Sat, 29 May 2010 23:13:40 +0200",
"msg_from": "Sander Verhagen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange workaround for slow query"
}
] |
[
{
"msg_contents": "Hi group,\n\n\nWe have two related tables with event types and events. We query for a join\nbetween these two tables and experience that, when there is an\nto-be-expected very small result set, this query performs particularly\npoor. Understanding in this matter would be appreciated.\n\nSELECT * from events_event_types WHERE id IN (71,999);\n id | name | severity\n----+------------------------+\n----------\n 71 | Xenteo Payment handled | 20\n(1 row)\n\n\nFollowing original query returns zero rows (as to be expected on what I\nshowed above) and takes (relatively) a lot of time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 397.564 ms\n\nFollowing query is much alike the original query, but I changed the \"WHERE\nseverity\". It returns the number of rows are requested in LIMIT and takes\nonly little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id IN\n(71)) ORDER BY datetime DESC limit 50;\n...\n(50 rows)\nTime: 1.604 ms\n\nThe latter much to prove that this is a problem related to small result\nsets.\n\nFollowing query is much alike the original query, although I've added a\ndummy value (non-existent in event types table; \"999\") to the WHERE IN\nclause. It returns the same zero rows and takes only little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71, 999)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 1.340 ms\n\nNow I have at least two possibilities:\n- Implementing the dummy value as shown above in my source code to improve\nquery performance (dirty but effective)\n- Further investigating what is going on, which at this point is something\nI need help with\nThanks for your assistance in this matter!\n\n\nFollowing are a number of details to describe the environment that this is\nseen in.\n\nSELECT version();\nPostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4\n(Ubuntu 4.2.4-1ubuntu3)\n\nPostgres was installed as Debian package in Ubuntu 8.04 LTS.\n\nSELECT count(*) FROM events_events;\n7619991\nSELECT count(*) FROM events_events WHERE eventtype_id=71;\n50348\nSELECT count(*) FROM events_event_types;\n82\n\n\\d events_event_types\n Table \"public.events_event_types\"\n Column | Type | Modifiers\n----------+------------------------+-----------------------------------------------------------------\n id | bigint | not null default nextval\n('events_event_types_id_seq'::regclass)\n name | character varying(255) | not null\n severity | bigint | not null\nIndexes:\n \"events_event_types_pkey\" PRIMARY KEY, btree (id)\n \"events_event_types_name_key\" UNIQUE, btree (name)\n 
\"events_event_types_severity_ind\" btree (severity)\n \"test_events_eventtypes_id_severity_ind\" btree (id, severity)\n \"test_events_eventtypes_severity_id_ind\" btree (severity, id)\n\n\\d events_events\n Table \"public.events_events\"\n Column | Type |\nModifiers\n----------------------+--------------------------+------------------------------------------------------------\n id | bigint | not null default nextval\n('events_events_id_seq'::regclass)\n carparkid | bigint |\n cleared | boolean | not null\n datetime | timestamp with time zone |\n identity | character varying(255) |\n generatedbystationid | bigint |\n eventtype_id | bigint | not null\n relatedstationid | bigint |\n processingstatus | character varying(255) | not null\nIndexes:\n \"events_events_pkey\" PRIMARY KEY, btree (id)\n \"events_events_cleared_ind\" btree (cleared)\n \"events_events_datetime_eventtype_id_ind\" btree (datetime,\neventtype_id)\n \"events_events_datetime_ind\" btree (datetime)\n \"events_events_eventtype_id_datetime_ind\" btree (eventtype_id,\ndatetime)\n \"events_events_eventtype_id_ind\" btree (eventtype_id)\n \"events_events_identity_ind\" btree (identity)\n \"events_events_not_cleared_ind\" btree (cleared) WHERE NOT cleared\n \"events_events_processingstatus_new\" btree (processingstatus) WHERE\nprocessingstatus::text = 'NEW'::text\n \"test2_events_events_eventtype_id_severity_ind\" btree (datetime,\neventtype_id, cleared)\n \"test3_events_events_eventtype_id_severity_ind\" btree (cleared,\ndatetime, eventtype_id)\n \"test4_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared, eventtype_id)\n \"test5_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared)\n \"test_events_events_eventtype_id_severity_ind\" btree (eventtype_id,\ncleared)\nForeign-key constraints:\n \"fk88fe3effa0559276\" FOREIGN KEY (eventtype_id) REFERENCES\nevents_event_types(id)\n\nCan someone explain this behaviour?\n\nThanks in advance!\n\nBest regards,\n\n-- \nSquins | IT, Honestly\nOranjestraat 23\n2983 HL Ridderkerk\nThe Netherlands\nPhone: +31 (0)180 414520\nMobile: +31 (0)6 30413841\nwww.squins.com\nhttp://twitter.com/keesvandieren\nChamber of commerce Rotterdam: 22048547\n\n\nHi group,\n\n\nWe have two related tables with event types and events. We query for a join\nbetween these two tables and experience that, when there is an\nto-be-expected very small result set, this query performs particularly\npoor. Understanding in this matter would be appreciated.\n\nSELECT * from events_event_types WHERE id IN (71,999);\n id | name | severity\n----+------------------------+----------\n 71 | Xenteo Payment handled | 20\n(1 row)\n\n\nFollowing original query returns zero rows (as to be expected on what I\nshowed above) and takes (relatively) a lot of time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 397.564 ms\n\nFollowing query is much alike the original query, but I changed the \"WHERE\nseverity\". 
It returns the number of rows are requested in LIMIT and takes\nonly little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id IN\n(71)) ORDER BY datetime DESC limit 50;\n...\n(50 rows)\nTime: 1.604 ms\n\nThe latter much to prove that this is a problem related to small result\nsets.\n\nFollowing query is much alike the original query, although I've added a\ndummy value (non-existent in event types table; \"999\") to the WHERE IN\nclause. It returns the same zero rows and takes only little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71, 999)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 1.340 ms\n\nNow I have at least two possibilities:\n- Implementing the dummy value as shown above in my source code to improve\nquery performance (dirty but effective)\n- Further investigating what is going on, which at this point is something\nI need help with\nThanks for your assistance in this matter!\n\n\nFollowing are a number of details to describe the environment that this is\nseen in.\n\nSELECT version();\nPostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4\n(Ubuntu 4.2.4-1ubuntu3)\n\nPostgres was installed as Debian package in Ubuntu 8.04 LTS.\n\nSELECT count(*) FROM events_events;\n7619991\nSELECT count(*) FROM events_events WHERE eventtype_id=71;\n50348\nSELECT count(*) FROM events_event_types;\n82\n\n\\d events_event_types\n Table \"public.events_event_types\"\n Column | Type | Modifiers\n----------+------------------------+-----------------------------------------------------------------\n id | bigint | not null default nextval\n('events_event_types_id_seq'::regclass)\n name | character varying(255) | not null\n severity | bigint | not null\nIndexes:\n \"events_event_types_pkey\" PRIMARY KEY, btree (id)\n \"events_event_types_name_key\" UNIQUE, btree (name)\n \"events_event_types_severity_ind\" btree (severity)\n \"test_events_eventtypes_id_severity_ind\" btree (id, severity)\n \"test_events_eventtypes_severity_id_ind\" btree (severity, id)\n\n\\d events_events\n Table \"public.events_events\"\n Column | Type |\nModifiers\n----------------------+--------------------------+------------------------------------------------------------\n id | bigint | not null default nextval\n('events_events_id_seq'::regclass)\n carparkid | bigint |\n cleared | boolean | not null\n datetime | timestamp with time zone |\n identity | character varying(255) |\n generatedbystationid | bigint |\n eventtype_id | bigint | not null\n relatedstationid | bigint |\n processingstatus | character varying(255) | not null\nIndexes:\n \"events_events_pkey\" PRIMARY KEY, btree (id)\n \"events_events_cleared_ind\" btree (cleared)\n \"events_events_datetime_eventtype_id_ind\" btree (datetime,\neventtype_id)\n \"events_events_datetime_ind\" btree (datetime)\n \"events_events_eventtype_id_datetime_ind\" btree (eventtype_id,\ndatetime)\n \"events_events_eventtype_id_ind\" btree (eventtype_id)\n \"events_events_identity_ind\" btree (identity)\n \"events_events_not_cleared_ind\" btree (cleared) WHERE NOT 
cleared\n \"events_events_processingstatus_new\" btree (processingstatus) WHERE\nprocessingstatus::text = 'NEW'::text\n \"test2_events_events_eventtype_id_severity_ind\" btree (datetime,\neventtype_id, cleared)\n \"test3_events_events_eventtype_id_severity_ind\" btree (cleared,\ndatetime, eventtype_id)\n \"test4_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared, eventtype_id)\n \"test5_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared)\n \"test_events_events_eventtype_id_severity_ind\" btree (eventtype_id,\ncleared)\nForeign-key constraints:\n \"fk88fe3effa0559276\" FOREIGN KEY (eventtype_id) REFERENCES\nevents_event_types(id)Can someone explain this behaviour?Thanks in advance!Best regards,-- Squins | IT, HonestlyOranjestraat 232983 HL RidderkerkThe Netherlands\nPhone: +31 (0)180 414520Mobile: +31 (0)6 30413841www.squins.comhttp://twitter.com/keesvandierenChamber of commerce Rotterdam: 22048547",
"msg_date": "Wed, 10 Mar 2010 10:05:47 +0100",
"msg_from": "Kees van Dieren <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange workaround for slow query"
}
] |
[
{
"msg_contents": "Hi folks,\n\n\nWe have two related tables with event types and events. We query for a join\nbetween these two tables and experience that, when there is an\nto-be-expected very small result set, this query performs particularly\npoor. Understanding in this matter would be appreciated.\n\nSELECT * from events_event_types WHERE id IN (71,999);\n id | name | severity\n----+------------------------+----------\n 71 | Xenteo Payment handled | 20\n(1 row)\n\n\nFollowing original query returns zero rows (as to be expected on what I\nshowed above) and takes (relatively) a lot of time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+---\n-------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 397.564 ms\n\nFollowing query is much alike the original query, but I changed the \"WHERE\nseverity\". It returns the number of rows are requested in LIMIT and takes\nonly little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id IN\n(71)) ORDER BY datetime DESC limit 50;\n...\n(50 rows)\nTime: 1.604 ms\n\nThe latter much to prove that this is a problem related to small result\nsets.\n\nFollowing query is much alike the original query, although I've added a\ndummy value (non-existent in event types table; \"999\") to the WHERE IN\nclause. It returns the same zero rows and takes only little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71, 999)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 1.340 ms\n\nNow I have at least two possibilities:\n- Implementing the dummy value as shown above in my source code to improve\nquery performance (dirty but effective)\n- Further investigating what is going on, which at this point is something\nI need help with\nThanks for your assistance in this matter!\n\n\nFollowing are a number of details to describe the environment that this is\nseen in.\n\nSELECT version();\nPostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4\n(Ubuntu 4.2.4-1ubuntu3)\n\nPostgres was installed as Debian package in Ubuntu 8.04 LTS.\n\nSELECT count(*) FROM events_events;\n7619991\nSELECT count(*) FROM events_events WHERE eventtype_id=71;\n50348\nSELECT count(*) FROM events_event_types;\n82\n\n\\d events_event_types\n Table \"public.events_event_types\"\n Column | Type | Modifiers\n----------+------------------------+-----------------------------------------------------------------\n id | bigint | not null default nextval\n('events_event_types_id_seq'::regclass)\n name | character varying(255) | not null\n severity | bigint | not null\nIndexes:\n \"events_event_types_pkey\" PRIMARY KEY, btree (id)\n \"events_event_types_name_key\" UNIQUE, btree (name)\n 
\"events_event_types_severity_ind\" btree (severity)\n \"test_events_eventtypes_id_severity_ind\" btree (id, severity)\n \"test_events_eventtypes_severity_id_ind\" btree (severity, id)\n\n\\d events_events\n Table \"public.events_events\"\n Column | Type |\nModifiers\n----------------------+--------------------------+------------------------------------------------------------\n id | bigint | not null default nextval\n('events_events_id_seq'::regclass)\n carparkid | bigint |\n cleared | boolean | not null\n datetime | timestamp with time zone |\n identity | character varying(255) |\n generatedbystationid | bigint |\n eventtype_id | bigint | not null\n relatedstationid | bigint |\n processingstatus | character varying(255) | not null\nIndexes:\n \"events_events_pkey\" PRIMARY KEY, btree (id)\n \"events_events_cleared_ind\" btree (cleared)\n \"events_events_datetime_eventtype_id_ind\" btree (datetime,\neventtype_id)\n \"events_events_datetime_ind\" btree (datetime)\n \"events_events_eventtype_id_datetime_ind\" btree (eventtype_id,\ndatetime)\n \"events_events_eventtype_id_ind\" btree (eventtype_id)\n \"events_events_identity_ind\" btree (identity)\n \"events_events_not_cleared_ind\" btree (cleared) WHERE NOT cleared\n \"events_events_processingstatus_new\" btree (processingstatus) WHERE\nprocessingstatus::text = 'NEW'::text\n \"test2_events_events_eventtype_id_severity_ind\" btree (datetime,\neventtype_id, cleared)\n \"test3_events_events_eventtype_id_severity_ind\" btree (cleared,\ndatetime, eventtype_id)\n \"test4_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared, eventtype_id)\n \"test5_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared)\n \"test_events_events_eventtype_id_severity_ind\" btree (eventtype_id,\ncleared)\nForeign-key constraints:\n \"fk88fe3effa0559276\" FOREIGN KEY (eventtype_id) REFERENCES\nevents_event_types(id)\n\nCan someone explain this behaviour?\n\nThanks in advance!\n\nBest regards,\n\n\n-- \nSquins | IT, Honestly\nOranjestraat 23\n2983 HL Ridderkerk\nThe Netherlands\nPhone: +31 (0)180 414520\nMobile: +31 (0)6 30413841\nwww.squins.com\nhttp://twitter.com/keesvandieren\nChamber of commerce Rotterdam: 22048547\n\nHi folks,\n\n\nWe have two related tables with event types and events. We query for a join\nbetween these two tables and experience that, when there is an\nto-be-expected very small result set, this query performs particularly\npoor. Understanding in this matter would be appreciated.\n\nSELECT * from events_event_types WHERE id IN (71,999);\n id | name | severity\n----+------------------------+----------\n 71 | Xenteo Payment handled | 20\n(1 row)\n\n\nFollowing original query returns zero rows (as to be expected on what I\nshowed above) and takes (relatively) a lot of time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 397.564 ms\n\nFollowing query is much alike the original query, but I changed the \"WHERE\nseverity\". 
It returns the number of rows are requested in LIMIT and takes\nonly little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=20 AND (eventType_id IN\n(71)) ORDER BY datetime DESC limit 50;\n...\n(50 rows)\nTime: 1.604 ms\n\nThe latter much to prove that this is a problem related to small result\nsets.\n\nFollowing query is much alike the original query, although I've added a\ndummy value (non-existent in event types table; \"999\") to the WHERE IN\nclause. It returns the same zero rows and takes only little time doing so:\n\nSELECT * FROM events_events LEFT OUTER JOIN events_event_types ON\neventType_id=events_event_types.id WHERE severity=70 AND (eventType_id IN\n(71, 999)) ORDER BY datetime DESC LIMIT 50;\n id | carparkid | cleared | datetime | identity | generatedbystationid |\neventtype_id | relatedstationid | processingstatus | id | name | severity\n----+-----------+---------+----------+----------+----------------------+--------------+------------------+------------------+----+------+----------\n(0 rows)\nTime: 1.340 ms\n\nNow I have at least two possibilities:\n- Implementing the dummy value as shown above in my source code to improve\nquery performance (dirty but effective)\n- Further investigating what is going on, which at this point is something\nI need help with\nThanks for your assistance in this matter!\n\n\nFollowing are a number of details to describe the environment that this is\nseen in.\n\nSELECT version();\nPostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4\n(Ubuntu 4.2.4-1ubuntu3)\n\nPostgres was installed as Debian package in Ubuntu 8.04 LTS.\n\nSELECT count(*) FROM events_events;\n7619991\nSELECT count(*) FROM events_events WHERE eventtype_id=71;\n50348\nSELECT count(*) FROM events_event_types;\n82\n\n\\d events_event_types\n Table \"public.events_event_types\"\n Column | Type | Modifiers\n----------+------------------------+-----------------------------------------------------------------\n id | bigint | not null default nextval\n('events_event_types_id_seq'::regclass)\n name | character varying(255) | not null\n severity | bigint | not null\nIndexes:\n \"events_event_types_pkey\" PRIMARY KEY, btree (id)\n \"events_event_types_name_key\" UNIQUE, btree (name)\n \"events_event_types_severity_ind\" btree (severity)\n \"test_events_eventtypes_id_severity_ind\" btree (id, severity)\n \"test_events_eventtypes_severity_id_ind\" btree (severity, id)\n\n\\d events_events\n Table \"public.events_events\"\n Column | Type |\nModifiers\n----------------------+--------------------------+------------------------------------------------------------\n id | bigint | not null default nextval\n('events_events_id_seq'::regclass)\n carparkid | bigint |\n cleared | boolean | not null\n datetime | timestamp with time zone |\n identity | character varying(255) |\n generatedbystationid | bigint |\n eventtype_id | bigint | not null\n relatedstationid | bigint |\n processingstatus | character varying(255) | not null\nIndexes:\n \"events_events_pkey\" PRIMARY KEY, btree (id)\n \"events_events_cleared_ind\" btree (cleared)\n \"events_events_datetime_eventtype_id_ind\" btree (datetime,\neventtype_id)\n \"events_events_datetime_ind\" btree (datetime)\n \"events_events_eventtype_id_datetime_ind\" btree (eventtype_id,\ndatetime)\n \"events_events_eventtype_id_ind\" btree (eventtype_id)\n \"events_events_identity_ind\" btree (identity)\n \"events_events_not_cleared_ind\" btree (cleared) WHERE NOT 
cleared\n \"events_events_processingstatus_new\" btree (processingstatus) WHERE\nprocessingstatus::text = 'NEW'::text\n \"test2_events_events_eventtype_id_severity_ind\" btree (datetime,\neventtype_id, cleared)\n \"test3_events_events_eventtype_id_severity_ind\" btree (cleared,\ndatetime, eventtype_id)\n \"test4_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared, eventtype_id)\n \"test5_events_events_eventtype_id_severity_ind\" btree (datetime,\ncleared)\n \"test_events_events_eventtype_id_severity_ind\" btree (eventtype_id,\ncleared)\nForeign-key constraints:\n \"fk88fe3effa0559276\" FOREIGN KEY (eventtype_id) REFERENCES\nevents_event_types(id)Can someone explain this behaviour?Thanks in advance!Best regards,-- Squins | IT, HonestlyOranjestraat 232983 HL RidderkerkThe Netherlands\nPhone: +31 (0)180 414520Mobile: +31 (0)6 30413841www.squins.comhttp://twitter.com/keesvandierenChamber of commerce Rotterdam: 22048547",
"msg_date": "Wed, 10 Mar 2010 10:07:41 +0100",
"msg_from": "Kees van Dieren <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange workaround for slow query"
}
] |
[
{
"msg_contents": "Hi all,\n\nI am trying to understand why inside an EXISTS clause the query planner \n does not use the index:\n\nEXPLAIN ANALYZE SELECT 1 WHERE EXISTS (SELECT 1 FROM read_acls_cache \n WHERE users_md5 = '9bc9012eb29c0bb2ae3cc7b5e78c2acf');\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------\n Result (cost=1.19..1.20 rows=1 width=0) (actual time=466.317..466.318 \nrows=1 loops=1)\n One-Time Filter: $0\n InitPlan 1 (returns $0)\n -> Seq Scan on read_acls_cache (cost=0.00..62637.01 rows=52517 \nwidth=0) (actual time=466.309..466.309 rows=1 loops=1)\n Filter: ((users_md5)::text = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n Total runtime: 466.369 ms\n(6 rows)\n\nWhile it does use the index when executing only the subquery:\n\nEXPLAIN ANALYZE SELECT 1 FROM read_acls_cache WHERE users_md5 = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf';\n QUERY PLAN \n\n--------------------------------------------------------------------------\n Bitmap Heap Scan on read_acls_cache (cost=2176.10..35022.98 \nrows=52517 width=0) (actual time=9.065..21.988 rows=51446 loops=1)\n Recheck Cond: ((users_md5)::text = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n -> Bitmap Index Scan on read_acls_cache_users_md5_idx \n(cost=0.00..2162.97 rows=52517 width=0) (actual time=8.900..8.900 \nrows=51446 loops=1)\n Index Cond: ((users_md5)::text = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n Total runtime: 25.464 ms\n(5 rows)\n\nThe table has been vacuumed, analyzed and reindexed.\n\nThanks for your support.\n\nRegards\n\nben\n\nHere are some more info :\n\n\\d read_acls_cache\n Table \"public.read_acls_cache\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n users_md5 | character varying(34) | not null\n acl_id | character varying(34) |\nIndexes:\n \"read_acls_cache_users_md5_idx\" btree (users_md5)\n\n\nSELECT COUNT(*) FROM read_acls_cache;\n count\n---------\n 2520899\n(1 row)\n\n\nSELECT COUNT(DISTINCT(users_md5)) FROM read_acls_cache ;\n count\n-------\n 49\n(1 row)\n\n\nSELECT Version();\n version\n------------------------------------------------------------------\n PostgreSQL 8.4.2 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real \n(GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64\n(1 row)\n\n\n",
"msg_date": "Wed, 10 Mar 2010 14:26:20 +0100",
"msg_from": "Benoit Delbosc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad query plan inside EXISTS clause"
},
{
"msg_contents": "try JOINs...\n\ntry JOINs...",
"msg_date": "Wed, 10 Mar 2010 13:43:41 +0000",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan inside EXISTS clause"
},
{
"msg_contents": "EXISTS matches NULLs too and since they are not indexed a\nsequential scan is needed to check for them. Try using\nIN instead.\n\nCheers,\nKen\n\nOn Wed, Mar 10, 2010 at 02:26:20PM +0100, Benoit Delbosc wrote:\n> Hi all,\n>\n> I am trying to understand why inside an EXISTS clause the query planner \n> does not use the index:\n>\n> EXPLAIN ANALYZE SELECT 1 WHERE EXISTS (SELECT 1 FROM read_acls_cache \n> WHERE users_md5 = '9bc9012eb29c0bb2ae3cc7b5e78c2acf');\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------\n> Result (cost=1.19..1.20 rows=1 width=0) (actual time=466.317..466.318 \n> rows=1 loops=1)\n> One-Time Filter: $0\n> InitPlan 1 (returns $0)\n> -> Seq Scan on read_acls_cache (cost=0.00..62637.01 rows=52517 \n> width=0) (actual time=466.309..466.309 rows=1 loops=1)\n> Filter: ((users_md5)::text = \n> '9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n> Total runtime: 466.369 ms\n> (6 rows)\n>\n> While it does use the index when executing only the subquery:\n>\n> EXPLAIN ANALYZE SELECT 1 FROM read_acls_cache WHERE users_md5 = \n> '9bc9012eb29c0bb2ae3cc7b5e78c2acf';\n> QUERY PLAN \n> --------------------------------------------------------------------------\n> Bitmap Heap Scan on read_acls_cache (cost=2176.10..35022.98 rows=52517 \n> width=0) (actual time=9.065..21.988 rows=51446 loops=1)\n> Recheck Cond: ((users_md5)::text = \n> '9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n> -> Bitmap Index Scan on read_acls_cache_users_md5_idx \n> (cost=0.00..2162.97 rows=52517 width=0) (actual time=8.900..8.900 \n> rows=51446 loops=1)\n> Index Cond: ((users_md5)::text = \n> '9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n> Total runtime: 25.464 ms\n> (5 rows)\n>\n> The table has been vacuumed, analyzed and reindexed.\n>\n> Thanks for your support.\n>\n> Regards\n>\n> ben\n>\n> Here are some more info :\n>\n> \\d read_acls_cache\n> Table \"public.read_acls_cache\"\n> Column | Type | Modifiers\n> -----------+-----------------------+-----------\n> users_md5 | character varying(34) | not null\n> acl_id | character varying(34) |\n> Indexes:\n> \"read_acls_cache_users_md5_idx\" btree (users_md5)\n>\n>\n> SELECT COUNT(*) FROM read_acls_cache;\n> count\n> ---------\n> 2520899\n> (1 row)\n>\n>\n> SELECT COUNT(DISTINCT(users_md5)) FROM read_acls_cache ;\n> count\n> -------\n> 49\n> (1 row)\n>\n>\n> SELECT Version();\n> version\n> ------------------------------------------------------------------\n> PostgreSQL 8.4.2 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real \n> (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64\n> (1 row)\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 10 Mar 2010 07:43:42 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan inside EXISTS clause"
},
{
"msg_contents": "Kenneth Marshall wrote:\n> EXISTS matches NULLs too and since they are not indexed a\n> sequential scan is needed to check for them. Try using\n> IN instead.\n> \nThis is nonsense in more than one way.\n\nregards\nYeb Havinga\n\n",
"msg_date": "Wed, 10 Mar 2010 14:51:04 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan inside EXISTS clause"
},
{
"msg_contents": "Yeb Havinga wrote:\n> Kenneth Marshall wrote:\n>> EXISTS matches NULLs too and since they are not indexed a\n>> sequential scan is needed to check for them. Try using\n>> IN instead.\n>> \n> This is nonsense in more than one way.\nHit ctrl-return a bit too slow - exists does not match null but a set of \nrecords, that is either empty or not empty. Also it is possible to index \ntable columns with nulls, and then the indexes can still be used. \nBesides filtering record sets with expressions, indexes are also used \nfor ordering. There the effect of indexes with nulls can be seen: where \nto put them: in front or after the non nulls? So indexes can be \nperfectly used in conjunction with nulls. I found the original mail \nrather intriguing and played with an example myself a bit, but could not \nrepeat the behavior (9.0 devel version), in my case the exists used an \nindex. Maybe it has something to do with the fact that the planner \nestimates to return 50000 rows, even when the actual numbers list only 1 \nhit. In the exists case, it can stop at the first hit. In the select all \nrows case, it must return all rows. Maybe a better plan emerges with \nbetter statistics?\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Wed, 10 Mar 2010 15:05:20 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan inside EXISTS clause"
},
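For reference, the statistics suggestion above in command form (an illustrative sketch, not taken from the thread; the follow-up below reports that in this particular case it did not change the plan):

ALTER TABLE read_acls_cache ALTER COLUMN users_md5 SET STATISTICS 1000;
ANALYZE read_acls_cache;
-- or raise default_statistics_target in postgresql.conf to affect all columns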
{
"msg_contents": "Yeb Havinga a �crit :\n> Yeb Havinga wrote:\n>> Kenneth Marshall wrote:\n>>> EXISTS matches NULLs too and since they are not indexed a\n>>> sequential scan is needed to check for them. Try using\n>>> IN instead.\n>>> \n>> This is nonsense in more than one way.\n> Hit ctrl-return a bit too slow - exists does not match null but a set of \n> records, that is either empty or not empty. Also it is possible to index \n> table columns with nulls, and then the indexes can still be used. \n> Besides filtering record sets with expressions, indexes are also used \n> for ordering. There the effect of indexes with nulls can be seen: where \n> to put them: in front or after the non nulls? So indexes can be \n> perfectly used in conjunction with nulls. I found the original mail \n> rather intriguing and played with an example myself a bit, but could not \n> repeat the behavior (9.0 devel version), in my case the exists used an \n> index. Maybe it has something to do with the fact that the planner \n> estimates to return 50000 rows, even when the actual numbers list only 1 \n> hit. In the exists case, it can stop at the first hit. In the select all \n> rows case, it must return all rows. Maybe a better plan emerges with \n> better statistics?\n\nThanks for your quick investigation.\n\nChanging the target statistics to 1000 for the column and analyzing the \ntable has not changed the query plan.\n\nI just notice that using a \"LIMIT 1\" also gives a bad query plan\n\nEXPLAIN ANALYZE SELECT 1 FROM read_acls_cache WHERE users_md5 = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf' LIMIT 1;\n QUERY PLAN \n\n---------------------------------------------------------------------------------------\n Limit (cost=0.00..1.19 rows=1 width=0) (actual time=366.771..366.772 \nrows=1 loops=1)\n -> Seq Scan on read_acls_cache (cost=0.00..62637.01 rows=52517 \nwidth=0) (actual time=366.769..366.769 rows=1 loops=1)\n Filter: ((users_md5)::text = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n Total runtime: 366.806 ms\n(4 rows)\n\nPerhaps the EXISTS clause is also trying to limit the sub select query.\n\nThere are no NULL value on this column but only 49 distinct values for \n2.5m rows, around 51k rows per value.\n\nWhat I am trying to do is to know if a value is present or not in the \ntable, so far the fastest way is to use a COUNT:\n\nEXPLAIN ANALYZE SELECT COUNT(1) FROM read_acls_cache WHERE users_md5 = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf';\n QUERY PLAN\n---------------------------------------------------------------------------\n Aggregate (cost=35154.28..35154.29 rows=1 width=0) (actual \ntime=20.242..20.242 rows=1 loops=1)\n -> Bitmap Heap Scan on read_acls_cache (cost=2176.10..35022.98 \nrows=52517 width=0) (actual time=6.937..15.025 rows=51446 loops=1)\n Recheck Cond: ((users_md5)::text = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n -> Bitmap Index Scan on read_acls_cache_users_md5_idx \n(cost=0.00..2162.97 rows=52517 width=0) (actual time=6.835..6.835 \nrows=51446 loops=1)\n Index Cond: ((users_md5)::text = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n Total runtime: 20.295 ms\n(6 rows)\n\nben\n",
"msg_date": "Wed, 10 Mar 2010 16:36:06 +0100",
"msg_from": "Benoit Delbosc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan inside EXISTS clause"
},
{
"msg_contents": "Benoit Delbosc <[email protected]> writes:\n> I am trying to understand why inside an EXISTS clause the query planner \n> does not use the index:\n\nI'm not sure this plan is as bad as all that. The key point is that the\nplanner is expecting 52517 rows that match that users_md5 value (and the\ntrue number is evidently 51446, so that estimate isn't far off). That's\nabout 1/48th of the table. It knows that the EXISTS case can stop as\nsoon as it finds one match, so it's betting that a plain seqscan will\nhit a match faster than an index lookup would be able to, ie,\nseqscanning about 48 tuples is faster than one index lookup. This might\nbe a bad bet if the users_md5 values are correlated with physical order,\nie the matches are not randomly scattered but are all towards the end of\nthe table. Barring that, though, it could be a good bet if the table\nisn't swapped in. Which is what the default cost parameters are set\nup to assume.\n\nI suspect your real complaint is that you expect the table to be swapped\nin, in which case what you ought to be doing is adjusting the planner's\ncost parameters. Some playing around here with a similar case suggests\nthat even a small reduction in random_page_cost would make it prefer an\nindexscan for this type of situation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Mar 2010 10:44:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan inside EXISTS clause "
},
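A back-of-the-envelope version of the bet described above, using the counts posted earlier in the thread (2520899 rows, 51446 matches); purely illustrative:

SELECT 2520899.0 / 51446 AS avg_rows_scanned_per_match;  -- roughly 49
-- Under default costs the planner assumes ~49 sequential tuple reads are
-- cheaper than one random index probe: a fair bet for randomly scattered
-- matches, but a losing one when, as reported below, the matching rows are
-- physically clustered at the far end of the table.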
{
"msg_contents": "Tom Lane a �crit :\n> Benoit Delbosc <[email protected]> writes:\n>> I am trying to understand why inside an EXISTS clause the query planner \n>> does not use the index:\n> \n> I'm not sure this plan is as bad as all that. The key point is that the\n> planner is expecting 52517 rows that match that users_md5 value (and the\n> true number is evidently 51446, so that estimate isn't far off). That's\n> about 1/48th of the table. It knows that the EXISTS case can stop as\n> soon as it finds one match, so it's betting that a plain seqscan will\n> hit a match faster than an index lookup would be able to, ie,\n> seqscanning about 48 tuples is faster than one index lookup. This might\n> be a bad bet if the users_md5 values are correlated with physical order,\n> ie the matches are not randomly scattered but are all towards the end of\n> the table. \nexact, the data is not randomly scattered but ordered this explains why \nin my case seq scan is a bad bet\n\n Barring that, though, it could be a good bet if the table\n> isn't swapped in. Which is what the default cost parameters are set\n> up to assume.\nthere are lots of shared buffers and effective memory on this instance, \nthe query is executed many times I can assume that the table isn't \nswapped in right ?\n\n> I suspect your real complaint is that you expect the table to be swapped\n> in, in which case what you ought to be doing is adjusting the planner's\n> cost parameters. Some playing around here with a similar case suggests\n> that even a small reduction in random_page_cost would make it prefer an\n> indexscan for this type of situation.\nexcellent !\n\nChanging the random_page_cost from 4 to 2 do the trick\n\nSET random_page_cost = 2;\nEXPLAIN ANALYZE SELECT 1 WHERE EXISTS (SELECT 1 FROM read_acls_cache \nWHERE users_md5 = '9bc9012eb29c0bb2ae3cc7b5e78c2acf'); \n\n \n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=1.06..1.07 rows=1 width=0) (actual time=0.048..0.048 \nrows=1 loops=1)\n One-Time Filter: $0\n InitPlan 1 (returns $0)\n -> Index Scan using read_acls_cache_users_md5_idx on \nread_acls_cache (cost=0.00..55664.21 rows=52517 width=0) (actual \ntime=0.045..0.045 rows=1 loops=1)\n Index Cond: ((users_md5)::text = \n'9bc9012eb29c0bb2ae3cc7b5e78c2acf'::text)\n Total runtime: 0.087 ms\n(6 rows)\n\n466/0.087 = 5360 thanks !\n\nkind regards\n\nben\n",
"msg_date": "Wed, 10 Mar 2010 17:59:38 +0100",
"msg_from": "Benoit Delbosc <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan inside EXISTS clause"
}
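To keep that setting beyond the current session it can be put in postgresql.conf, or pinned per database; a hypothetical example (the database name is a placeholder, not from the thread):

ALTER DATABASE mydb SET random_page_cost = 2;
-- new sessions connecting to mydb pick the value up automatically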
] |
[
{
"msg_contents": "Hi there,\n\nI'm after a little bit of advice on the shared_buffers setting (I have\nread the various docs on/linked from the performance tuning wiki page,\nsome very helpful stuff there so thanks to those people).\n\nI am setting up a 64bit Linux server running Postgresql 8.3, the\nserver has 64gigs of memory and Postgres is the only major application\nrunning on it. (This server is to go alongside some existing 8.3\nservers, we will look at 8.4/9 migration later)\n\nI'm basically wondering how the postgresql cache (ie shared_buffers)\nand the OS page_cache interact. The general advice seems to be to\nassign 1/4 of RAM to shared buffers.\n\nI don't have a good knowledge of the internals but I'm wondering if\nthis will effectively mean that roughly the same amount of RAM being\nused for the OS page cache will be used for redundantly caching\nsomething the Postgres is caching as well?\n\nIE when Postgres reads something from disk it will go into both the OS\npage cache and the Postgresql shared_buffers and the OS page cache\ncopy is unlikely to be useful for anything.\n\nIf that is the case what are the downsides to having less overlap\nbetween the caches, IE heavily favouring one or the other, such as\nallocating shared_buffers to a much larger percentage (such as 90-95%\nof expected 'free' memory).\n\nPaul\n",
"msg_date": "Thu, 11 Mar 2010 12:28:21 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffers advice"
},
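For orientation only, the commonly cited rule-of-thumb starting point for a dedicated 64GB server, in postgresql.conf form; the figures are illustrative assumptions rather than advice from this thread, and the replies below explain why simply maximizing shared_buffers is not a clear win:

shared_buffers = 16GB            # roughly 25% of RAM
effective_cache_size = 48GB      # planner's estimate of OS cache plus shared_buffers
work_mem = 32MB                  # per sort/hash, per backend; keep modest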
{
"msg_contents": "There seems to be a wide range of opinion on this .... I am new to PG and\ngrew up on Oracle, where more SGA is always a good thing ... I know people\nwho run Oracle on 2TB Superdome's with titanic SGA sizes to keep the whole\nDB in RAM. I'd be using a 40GB+ Oracle SGA on that box of yours.\n\nA lot of the folks here say that there isn't much performance to be gained\nby giving PG's buffer cache the bulk of the RAM .... if you have the\nopportunity, it'd be interesting to test it both ways with a representative\nworkload before going live. I for one would be very curious to see the\nresults.\n\nOne caveat: the PG child processes can collectively use a lot of transient\nRAM, e.g. for sorts and vaccuming, depending on config and workload. If this\ncauses swapping, or even uses up enough memory to effectively eliminate the\nOS buffer cache, it's going to hurt performance.\n\nCheers\nDave\n\nOn Wed, Mar 10, 2010 at 8:28 PM, Paul McGarry <[email protected]> wrote:\n\n> Hi there,\n>\n> I'm after a little bit of advice on the shared_buffers setting (I have\n> read the various docs on/linked from the performance tuning wiki page,\n> some very helpful stuff there so thanks to those people).\n>\n> I am setting up a 64bit Linux server running Postgresql 8.3, the\n> server has 64gigs of memory and Postgres is the only major application\n> running on it. (This server is to go alongside some existing 8.3\n> servers, we will look at 8.4/9 migration later)\n>\n> I'm basically wondering how the postgresql cache (ie shared_buffers)\n> and the OS page_cache interact. The general advice seems to be to\n> assign 1/4 of RAM to shared buffers.\n>\n> I don't have a good knowledge of the internals but I'm wondering if\n> this will effectively mean that roughly the same amount of RAM being\n> used for the OS page cache will be used for redundantly caching\n> something the Postgres is caching as well?\n>\n> IE when Postgres reads something from disk it will go into both the OS\n> page cache and the Postgresql shared_buffers and the OS page cache\n> copy is unlikely to be useful for anything.\n>\n> If that is the case what are the downsides to having less overlap\n> between the caches, IE heavily favouring one or the other, such as\n> allocating shared_buffers to a much larger percentage (such as 90-95%\n> of expected 'free' memory).\n>\n> Paul\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThere seems to be a wide range of opinion on this .... I am new to PG and grew up on Oracle, where more SGA is always a good thing ... I know people who run Oracle on 2TB Superdome's with titanic SGA sizes to keep the whole DB in RAM. I'd be using a 40GB+ Oracle SGA on that box of yours.\nA lot of the folks here say that there isn't much performance to be gained by giving PG's buffer cache the bulk of the RAM .... if you have the opportunity, it'd be interesting to test it both ways with a representative workload before going live. I for one would be very curious to see the results.\nOne caveat: the PG child processes can collectively use a lot of transient RAM, e.g. for sorts and vaccuming, depending on config and workload. 
If this causes swapping, or even uses up enough memory to effectively eliminate the OS buffer cache, it's going to hurt performance.\nCheersDaveOn Wed, Mar 10, 2010 at 8:28 PM, Paul McGarry <[email protected]> wrote:\nHi there,\n\nI'm after a little bit of advice on the shared_buffers setting (I have\nread the various docs on/linked from the performance tuning wiki page,\nsome very helpful stuff there so thanks to those people).\n\nI am setting up a 64bit Linux server running Postgresql 8.3, the\nserver has 64gigs of memory and Postgres is the only major application\nrunning on it. (This server is to go alongside some existing 8.3\nservers, we will look at 8.4/9 migration later)\n\nI'm basically wondering how the postgresql cache (ie shared_buffers)\nand the OS page_cache interact. The general advice seems to be to\nassign 1/4 of RAM to shared buffers.\n\nI don't have a good knowledge of the internals but I'm wondering if\nthis will effectively mean that roughly the same amount of RAM being\nused for the OS page cache will be used for redundantly caching\nsomething the Postgres is caching as well?\n\nIE when Postgres reads something from disk it will go into both the OS\npage cache and the Postgresql shared_buffers and the OS page cache\ncopy is unlikely to be useful for anything.\n\nIf that is the case what are the downsides to having less overlap\nbetween the caches, IE heavily favouring one or the other, such as\nallocating shared_buffers to a much larger percentage (such as 90-95%\nof expected 'free' memory).\n\nPaul\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 16 Mar 2010 00:17:51 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Dave Crooke wrote:\n> There seems to be a wide range of opinion on this .... I am new to PG \n> and grew up on Oracle, where more SGA is always a good thing ... I \n> know people who run Oracle on 2TB Superdome's with titanic SGA sizes \n> to keep the whole DB in RAM. I'd be using a 40GB+ Oracle SGA on that \n> box of yours.\n\nI wouldn't call it opinion so much as a series of anecdotes all \nsuggesting the same thing: that you cannot translate SGA practice into \nPostgreSQL and expect that to work the same way. Some data points:\n\n-An academic study at Duke suggested 40% of RAM was optimal for their \nmixed workload, but that was a fairly small amount of RAM. \nhttp://www.cs.duke.edu/~shivnath/papers/ituned.pdf\n\n-Tests done by Jignesh Shah at Sun not too long ago put diminishing \nreturns on a system with a bunch of RAM at 10GB, probably due to buffer \nlock contention issues (details beyond that number not in the slides, \nrecalling from memory of the talk itself): \nhttp://blogs.sun.com/jkshah/entry/postgresql_east_2008_talk_best\n\n-My warnings about downsides related to checkpoint issues with larger \nbuffer pools isn't an opinion at all; that's a fact based on limitations \nin how Postgres does its checkpoints. If we get something more like \nOracle's incremental checkpoint logic, this particular concern might go \naway.\n\n-Concerns about swapping, work_mem, etc. are all very real. All of us \nwho have had the database server process killed by the Linux OOM killer \nat least once know that's one OS you absolutely cannot push this too \nhard on. This is not unique to here, that issue exists in Oracle+SGA \nland as well: \nhttp://lkml.indiana.edu/hypermail/linux/kernel/0103.3/0906.html\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 16 Mar 2010 03:28:02 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
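The checkpoint pressure mentioned above is what the settings below control (8.3/8.4-era parameter names; the values are illustrative assumptions, not recommendations made in this thread):

checkpoint_segments = 32               # more WAL between checkpoints
checkpoint_completion_target = 0.9     # spread checkpoint writes over more time
checkpoint_timeout = 15min
log_checkpoints = on                   # measure how disruptive they actually are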
{
"msg_contents": "\n> -My warnings about downsides related to checkpoint issues with larger \n> buffer pools isn't an opinion at all; that's a fact based on limitations \n> in how Postgres does its checkpoints. If we get something more like \n> Oracle's incremental checkpoint logic, this particular concern might go \n> away.\n\nDoes PG issue checkpoint writes in \"sorted\" order ?\n\nI wonder about something, too : if your DB size is smaller than RAM, you \ncould in theory set shared_buffers to a size larger than your DB provided \nyou still have enough free RAM left for work_mem and OS writes management. \nHow does this interact with the logic which prevents seq-scans hogging \nshared_buffers ?\n",
"msg_date": "Tue, 16 Mar 2010 12:24:40 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "On Tue, Mar 16, 2010 at 7:24 AM, Pierre C <[email protected]> wrote:\n>\n> I wonder about something, too : if your DB size is smaller than RAM, you\n> could in theory set shared_buffers to a size larger than your DB provided\n> you still have enough free RAM left for work_mem and OS writes management.\n> How does this interact with the logic which prevents seq-scans hogging\n> shared_buffers ?\n\n\nI think the logic you are referring to is the clock sweep buffer accounting\nscheme. That just makes sure that the most popular pages stay in the\nbuffers. If your entire db fits in the buffer pool then it'll all get in\nthere real fast.\n\nTwo things to consider though:\n1. The checkpoint issue still stands.\n2. You should really mess around with your cost estimates if this is the\ncase. If you make random IO cost the same as sequential IO postgres will\nprefer index scans over bitmap index scans and table scans which makes sense\nif everything is in memory.\n\nOn Tue, Mar 16, 2010 at 7:24 AM, Pierre C <[email protected]> wrote:\n\n\nI wonder about something, too : if your DB size is smaller than RAM, you could in theory set shared_buffers to a size larger than your DB provided you still have enough free RAM left for work_mem and OS writes management. How does this interact with the logic which prevents seq-scans hogging shared_buffers ?\nI think the logic you are referring to is the clock sweep buffer accounting scheme. That just makes sure that the most popular pages stay in the buffers. If your entire db fits in the buffer pool then it'll all get in there real fast.\nTwo things to consider though:1. The checkpoint issue still stands.2. You should really mess around with your cost estimates if this is the case. If you make random IO cost the same as sequential IO postgres will prefer index scans over bitmap index scans and table scans which makes sense if everything is in memory.",
"msg_date": "Tue, 16 Mar 2010 09:26:15 -0400",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
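A minimal sketch of the cost-estimate tweak Nikolas describes, for the case where the working set really is fully cached; the values are illustrative assumptions rather than recommendations, and are worth trying per session before committing them to postgresql.conf:

    -- assumes the hot data is resident in RAM; benchmark before applying globally
    SET seq_page_cost = 1.0;     -- the default
    SET random_page_cost = 1.0;  -- default is 4.0; equal costs favour index scans when cached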
{
"msg_contents": "> I think the logic you are referring to is the clock sweep buffer \n> accounting\n> scheme. That just makes sure that the most popular pages stay in the\n> buffers. If your entire db fits in the buffer pool then it'll all get in\n> there real fast.\n\n\nActually, I meant that in the case of a seq scan, PG will try to use just \na few buffers (a ring) in shared_buffers instead of thrashing the whole \nbuffers. But if there was actually a lot of free space in shared_buffers, \ndo the pages stay, or do they not ?\n",
"msg_date": "Tue, 16 Mar 2010 14:48:43 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "\"Pierre C\" <[email protected]> writes:\n> Does PG issue checkpoint writes in \"sorted\" order ?\n\nNo. IIRC, a patch for that was submitted, and rejected because no\nsignificant performance improvement could be demonstrated. We don't\nhave enough information about the actual on-disk layout to be very\nintelligent about this, so it's better to just issue the writes and\nlet the OS sort them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Mar 2010 10:30:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice "
},
{
"msg_contents": "On Tue, Mar 16, 2010 at 1:48 PM, Pierre C <[email protected]> wrote:\n> Actually, I meant that in the case of a seq scan, PG will try to use just a\n> few buffers (a ring) in shared_buffers instead of thrashing the whole\n> buffers. But if there was actually a lot of free space in shared_buffers, do\n> the pages stay, or do they not ?\n\nThey don't. The logic only kicks in if the table is expected to be >\n1/4 of shared buffers though.\n\n-- \ngreg\n",
"msg_date": "Tue, 16 Mar 2010 14:51:11 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
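A rough way to check that threshold for a particular table (a sketch: 'mytable' is a placeholder, and it assumes the default 8kB block size, since pg_settings reports shared_buffers as a number of blocks):

    SELECT pg_relation_size('mytable') >
           (SELECT setting::bigint * 8192 / 4
              FROM pg_settings
             WHERE name = 'shared_buffers') AS seq_scan_would_use_ring_buffer;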
{
"msg_contents": "On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane <[email protected]> wrote:\n> \"Pierre C\" <[email protected]> writes:\n>> Does PG issue checkpoint writes in \"sorted\" order ?\n>\n> No. IIRC, a patch for that was submitted, and rejected because no\n> significant performance improvement could be demonstrated. We don't\n> have enough information about the actual on-disk layout to be very\n> intelligent about this, so it's better to just issue the writes and\n> let the OS sort them.\n\nKeep in mind that postgres is issuing writes to the OS buffer cache.\nIt defers fsyncing the files as late as it can in the hopes that most\nof those buffers will be written out by the OS before then. That gives\nthe OS a long time window in which to flush them out in whatever order\nand whatever schedule is most convenient.\n\nIf the OS filesystem buffer cache is really small then that might not\nwork so well. It might be worth rerunning those benchmarks on a\nmachine with shared buffers taking up all of RAM.\n\n\n-- \ngreg\n",
"msg_date": "Tue, 16 Mar 2010 14:53:38 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Pierre C wrote:\n> Actually, I meant that in the case of a seq scan, PG will try to use \n> just a few buffers (a ring) in shared_buffers instead of thrashing the \n> whole buffers. But if there was actually a lot of free space in \n> shared_buffers, do the pages stay, or do they not ?\n\nPages inserted into the ring buffer and later re-used for new data do \nnot stay behind even if there is room for them. There's a potential \nimprovement possible in that code involving better management of the \nsituation where the buffer cache hasn't actually reached full capacity \nyet, but as it's an unusual case it's hard to justify optimizing for. \nBesides, the hope is that in this case the OS cache will end up caching \neverything anyway until it has a reason to evict it. So if you follow \nthe rest of the data suggesting you should not give all the memory to \nPostgreSQL to manage, you end up with a reasonable solution to this \nproblem anyway. Those pages will just live in the OS cache instead of \nthe database's, with only a few trickling in and staying behind each \ntime you do a sequential scan.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 16 Mar 2010 16:51:49 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Greg Stark wrote:\n> On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane <[email protected]> wrote:\n> \n>> \"Pierre C\" <[email protected]> writes:\n>> \n>>> Does PG issue checkpoint writes in \"sorted\" order ?\n>>> \n>> No. IIRC, a patch for that was submitted, and rejected because no\n>> significant performance improvement could be demonstrated.\n> If the OS filesystem buffer cache is really small then that might not\n> work so well. It might be worth rerunning those benchmarks on a\n> machine with shared buffers taking up all of RAM.\n> \n\nHere's the original patch again: \nhttp://archives.postgresql.org/message-id/[email protected]\n\nI was the person who tried to reproduce the suggested 10% pgbench \nspeedup on a similar system and couldn't replicate any improvement. \nNever was sure what was going on to show such a difference on the \nreference system used to develop the patch versus mine, since they were \npretty similar. Possibly some positive interaction with LVM in the test \ncase I didn't have. Maybe the actual reason sorting helped was \nlimitations in the HP P400 controller used there I wasn't running into \nwith the Areca card I used. And the always popular \"didn't account \nfully for all pgbench run to run variation\" possibility crossed my mind \ntoo--that the original observed speedup wasn't caused by the patch but \nby something else.\n\nI did not go out of my way to find test conditions where the patch would \nmore likely to help, like the situation you describe where \nshared_buffers was really large relative to the OS cache. Since the \npatch complicates the checkpoint code and requires some working memory \nto operate, it would have to be a unquestionable win using standard \npractices before it was worth applying. If it only helps in a situation \npeople are unlikely to use in the field, and it net negative for \neveryone else, that's still going to end up on the interesting but \nrejected idea scrapheap at the end of the day.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 16 Mar 2010 16:53:24 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Greg Stark escribi�:\n> On Tue, Mar 16, 2010 at 2:30 PM, Tom Lane <[email protected]> wrote:\n> > \"Pierre C\" <[email protected]> writes:\n> >> Does PG issue checkpoint writes in \"sorted\" order ?\n> >\n> > No. �IIRC, a patch for that was submitted, and rejected because no\n> > significant performance improvement could be demonstrated. �We don't\n> > have enough information about the actual on-disk layout to be very\n> > intelligent about this, so it's better to just issue the writes and\n> > let the OS sort them.\n> \n> Keep in mind that postgres is issuing writes to the OS buffer cache.\n> It defers fsyncing the files as late as it can in the hopes that most\n> of those buffers will be written out by the OS before then. That gives\n> the OS a long time window in which to flush them out in whatever order\n> and whatever schedule is most convenient.\n\nMaybe it would make more sense to try to reorder the fsync calls\ninstead.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 16 Mar 2010 18:30:54 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Maybe it would make more sense to try to reorder the fsync calls\n> instead.\n\nReorder to what, though? You still have the problem that we don't know\nmuch about the physical layout on-disk.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Mar 2010 17:39:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice "
},
{
"msg_contents": "Tom Lane escribi�:\n> Alvaro Herrera <[email protected]> writes:\n> > Maybe it would make more sense to try to reorder the fsync calls\n> > instead.\n> \n> Reorder to what, though? You still have the problem that we don't know\n> much about the physical layout on-disk.\n\nWell, to block numbers as a first step.\n\nHowever, this reminds me that sometimes we take the block-at-a-time\nextension policy too seriously. We had a customer that had a\nperformance problem because they were inserting lots of data to TOAST\ntables, causing very frequent extensions. I kept wondering whether an\nallocation policy that allocated several new blocks at a time could be\nuseful (but I didn't try it). This would also alleviate fragmentation,\nthus helping the physical layout be more similar to logical block\nnumbers.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 16 Mar 2010 18:49:03 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane escribi�:\n>> Reorder to what, though? You still have the problem that we don't know\n>> much about the physical layout on-disk.\n\n> Well, to block numbers as a first step.\n\nfsync is a file-based operation, and we know exactly zip about the\nrelative positions of different files on the disk.\n\n> However, this reminds me that sometimes we take the block-at-a-time\n> extension policy too seriously.\n\nYeah, that's a huge performance penalty in some circumstances.\n\n> We had a customer that had a\n> performance problem because they were inserting lots of data to TOAST\n> tables, causing very frequent extensions. I kept wondering whether an\n> allocation policy that allocated several new blocks at a time could be\n> useful (but I didn't try it). This would also alleviate fragmentation,\n> thus helping the physical layout be more similar to logical block\n> numbers.\n\nThat's not going to do anything towards reducing the actual I/O volume.\nAlthough I suppose it might be useful if it just cuts the number of\nseeks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Mar 2010 17:58:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice "
},
{
"msg_contents": "Tom Lane escribió:\n> Alvaro Herrera <[email protected]> writes:\n> > Tom Lane escribi�:\n> >> Reorder to what, though? You still have the problem that we don't know\n> >> much about the physical layout on-disk.\n> \n> > Well, to block numbers as a first step.\n> \n> fsync is a file-based operation, and we know exactly zip about the\n> relative positions of different files on the disk.\n\nDoh, right, I was thinking in the sync-file-range kind of API.\n\n\n> > We had a customer that had a\n> > performance problem because they were inserting lots of data to TOAST\n> > tables, causing very frequent extensions. I kept wondering whether an\n> > allocation policy that allocated several new blocks at a time could be\n> > useful (but I didn't try it). This would also alleviate fragmentation,\n> > thus helping the physical layout be more similar to logical block\n> > numbers.\n> \n> That's not going to do anything towards reducing the actual I/O volume.\n> Although I suppose it might be useful if it just cuts the number of\n> seeks.\n\nOh, they had no problems with I/O volume. It was relation extension\nlock that was heavily contended for them.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 16 Mar 2010 19:20:58 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane escribió:\n>> That's not going to do anything towards reducing the actual I/O volume.\n>> Although I suppose it might be useful if it just cuts the number of\n>> seeks.\n\n> Oh, they had no problems with I/O volume. It was relation extension\n> lock that was heavily contended for them.\n\nReally? I guess that serialized all the I/O ... I'll bet if we got rid\nof that locking somehow, they *would* have a problem with I/O volume.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Mar 2010 18:25:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice "
},
{
"msg_contents": "Tom Lane escribi�:\n> Alvaro Herrera <[email protected]> writes:\n> > Tom Lane escribi�:\n> >> That's not going to do anything towards reducing the actual I/O volume.\n> >> Although I suppose it might be useful if it just cuts the number of\n> >> seeks.\n> \n> > Oh, they had no problems with I/O volume. It was relation extension\n> > lock that was heavily contended for them.\n> \n> Really? I guess that serialized all the I/O ... I'll bet if we got rid\n> of that locking somehow, they *would* have a problem with I/O volume.\n\nWell, that would solve the problem as far as I'm concerned and they'd\nhave to start talking to their storage provider ;-)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 16 Mar 2010 19:29:22 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Maybe it would make more sense to try to reorder the fsync calls\n> instead.\n> \n\nThe pretty obvious left behind idea from 8.3 spread checkpoint \ndevelopment was to similarly spread the fsync calls around. Given that \nwe know, for example, Linux with ext3 is going to dump the whole \nfilesystem write cache out when the fsync call comes in, the way they're \ncurrently scheduled has considerably potential for improvement.\n\nUnfortunately, since the tuning on that is going to be very platform \ndependent and require a lot of benchmarking work, I think we need a \nperformance farm up and running as a prerequisite to finishing that work \noff. The spread checkpoint stuff was a much more obvious improvement, \nand that was hard enough to quantify usefully and test.\n\nReturning to the idea of the sorted checkpoints patch as a simple \nexample, if it were possible to just push that patch to a test repo and \nsee how that changed typical throughput/latency against a \nwell-established history, it would be a lot easier to figure out if \nsomething like that is sensible to consider or not. I'm not sure how to \nmake progress on similar ideas about tuning closer to the filesystem \nlevel without having something automated that takes over the actual \nbenchmark running and data recording steps; it's just way too time \nconsuming to do those right now with every tool that's available for \nPostgreSQL so far. That's the problem I work on, there are easily a \nhalf dozen good ideas for improvements here floating around where coding \ntime is dwarfed by required performance validation time.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 16 Mar 2010 19:54:52 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "See below:\n\n\nOn Wed, Mar 10, 2010 at 9:28 PM, Paul McGarry <[email protected]> wrote:\n\n> Hi there,\n>\n> I'm after a little bit of advice on the shared_buffers setting (I have\n> read the various docs on/linked from the performance tuning wiki page,\n> some very helpful stuff there so thanks to those people).\n>\n> I am setting up a 64bit Linux server running Postgresql 8.3, the\n> server has 64gigs of memory and Postgres is the only major application\n> running on it. (This server is to go alongside some existing 8.3\n> servers, we will look at 8.4/9 migration later)\n>\n> I'm basically wondering how the postgresql cache (ie shared_buffers)\n> and the OS page_cache interact. The general advice seems to be to\n> assign 1/4 of RAM to shared buffers.\n>\n> I don't have a good knowledge of the internals but I'm wondering if\n> this will effectively mean that roughly the same amount of RAM being\n> used for the OS page cache will be used for redundantly caching\n> something the Postgres is caching as well?\n>\n> IE when Postgres reads something from disk it will go into both the OS\n> page cache and the Postgresql shared_buffers and the OS page cache\n> copy is unlikely to be useful for anything.\n>\n> If that is the case what are the downsides to having less overlap\n> between the caches, IE heavily favouring one or the other, such as\n> allocating shared_buffers to a much larger percentage (such as 90-95%\n> of expected 'free' memory).\n>\n> Pg apparently does not have an option of using direct IO with reads which\nsome other databases do (the O_DIRECT mode). Therefore, double-buffering\nwith read operations seems unavoidable. Counterintuitively, it may be a\ngood idea to just rely on OS buffering and keep shared_buffers rather small,\nsay, 512MB.\n\nVJ\n\n\n\n\n> Paul\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSee below:On Wed, Mar 10, 2010 at 9:28 PM, Paul McGarry <[email protected]> wrote:\n\nHi there,\n\nI'm after a little bit of advice on the shared_buffers setting (I have\nread the various docs on/linked from the performance tuning wiki page,\nsome very helpful stuff there so thanks to those people).\n\nI am setting up a 64bit Linux server running Postgresql 8.3, the\nserver has 64gigs of memory and Postgres is the only major application\nrunning on it. (This server is to go alongside some existing 8.3\nservers, we will look at 8.4/9 migration later)\n\nI'm basically wondering how the postgresql cache (ie shared_buffers)\nand the OS page_cache interact. The general advice seems to be to\nassign 1/4 of RAM to shared buffers.\n\nI don't have a good knowledge of the internals but I'm wondering if\nthis will effectively mean that roughly the same amount of RAM being\nused for the OS page cache will be used for redundantly caching\nsomething the Postgres is caching as well?\n\nIE when Postgres reads something from disk it will go into both the OS\npage cache and the Postgresql shared_buffers and the OS page cache\ncopy is unlikely to be useful for anything.\n\nIf that is the case what are the downsides to having less overlap\nbetween the caches, IE heavily favouring one or the other, such as\nallocating shared_buffers to a much larger percentage (such as 90-95%\nof expected 'free' memory).\nPg apparently does not have an option of using direct IO with reads which some\nother databases do (the O_DIRECT mode). 
Therefore, double-buffering\nwith read operations seems unavoidable. Counterintuitively, it may\nbe a good idea to just rely on OS buffering and keep shared_buffers\nrather small, say, 512MB.VJ\n \nPaul\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 17 Mar 2010 11:08:03 -0400",
"msg_from": "VJK <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: shared_buffers advice"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> I'm not sure how to make progress on similar ideas about\n> tuning closer to the filesystem level without having something automated\n> that takes over the actual benchmark running and data recording steps; it's\n> just way too time consuming to do those right now with every tool that's\n> available for PostgreSQL so far. That's the problem I work on, there are\n> easily a half dozen good ideas for improvements here floating around where\n> coding time is dwarfed by required performance validation time.\n\nI still think the best tool around currently for this kind of testing is\ntsung, but I've yet to have the time to put money where my mouth is, as\nthey say. Still, I'd be happy to take some time a help you decide if\nit's the tool you want to base your performance testing suite on or not.\n\n http://tsung.erlang-projects.org/\n\nRegards,\n-- \ndim\n",
"msg_date": "Thu, 18 Mar 2010 11:24:25 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Dimitri Fontaine wrote:\n> I still think the best tool around currently for this kind of testing is\n> tsung\n\nI am happy to say that for now, pgbench is the only actual testing tool \nsupported. Done; now I don't need tsung. \n\nHowever, that doesn't actually solve any of the problems I was talking \nabout though, which is why I'm not even talking about that part. We \nneed the glue to pull out software releases, run whatever testing tool \nis appropriate, and then save the run artifacts in some standardized \nform so they can be referenced with associated build/configuration \ninformation to track down a regression when it does show up. Building \nthose boring bits are the real issue here; load testing tools are easy \nto find because those are fun to work on.\n\nAnd as a general commentary on the vision here, tsung will never fit \ninto this anyway because \"something that can run on the buildfarm \nmachines with the software they already have installed\" is the primary \ntarget. I don't see anything about tsung so interesting that it trumps \nthat priority, even though it is an interesting tool.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 18 Mar 2010 11:16:35 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> However, that doesn't actually solve any of the problems I was talking about\n> though, which is why I'm not even talking about that part. We need the glue\n> to pull out software releases, run whatever testing tool is appropriate, and\n> then save the run artifacts in some standardized form so they can be\n> referenced with associated build/configuration information to track down a\n> regression when it does show up. Building those boring bits are the real\n> issue here; load testing tools are easy to find because those are fun to\n> work on.\n\nOh, ok. I missparsed the previous message. Tsung has a way to monitor OS\nlevel information, and I guess adding the build/configuration would\nbe... as easy as adding it to pgbench :)\n\n> And as a general commentary on the vision here, tsung will never fit into\n> this anyway because \"something that can run on the buildfarm machines with\n> the software they already have installed\" is the primary target. I don't\n> see anything about tsung so interesting that it trumps that priority, even\n> though it is an interesting tool.\n\nI though we might talk about a performance farm which would be quite\ndifferent, if only because to sustain a high enough client load you\nmight need more than one injector machine targeting a given server at\nonce.\n\nBut if you target the buildfarm, introducing new dependencies does sound\nlike a problem (that I can't evaluate the importance of).\n\nRegards,\n-- \ndim\n",
"msg_date": "Fri, 19 Mar 2010 10:30:55 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/3/11 Paul McGarry <[email protected]>:\n\n> I'm basically wondering how the postgresql cache (ie shared_buffers)\n> and the OS page_cache interact. The general advice seems to be to\n> assign 1/4 of RAM to shared buffers.\n>\n> I don't have a good knowledge of the internals but I'm wondering if\n> this will effectively mean that roughly the same amount of RAM being\n> used for the OS page cache will be used for redundantly caching\n> something the Postgres is caching as well?\n\nI have a similar problem but I can't see an answer in this thread.\n\nOur dedicated server has 16 GB RAM. Among other settings\nshared_buffers is 2 GB, effective_cache_size is 12 GB.\n\nDo shared_buffers duplicate contents of OS page cache? If so, how do I\nknow if 25% RAM is the right value for me? Actually it would not seem\nto be true - the less redundancy the better.\n\nAnother question - is there a tool or built-in statistic that tells\nwhen/how often/how much a table is read from disk? I mean physical\nread, not poll from OS cache to shared_buffers.\n\n-- \nKonrad Garus\n",
"msg_date": "Mon, 24 May 2010 13:25:57 +0200",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "On May 24, 2010, at 4:25 AM, Konrad Garus wrote:\n\n> Do shared_buffers duplicate contents of OS page cache? If so, how do I\n> know if 25% RAM is the right value for me? Actually it would not seem\n> to be true - the less redundancy the better.\n\nYou can look into the pg_buffercache contrib module. \n\n> Another question - is there a tool or built-in statistic that tells\n> when/how often/how much a table is read from disk? I mean physical\n> read, not poll from OS cache to shared_buffers.\n\nWell, the pg_stat_* tables tell you how much logical IO is going on, but postgres has no way of knowing how effective the OS or disk controller caches are.",
"msg_date": "Mon, 24 May 2010 08:27:43 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "[SPAM] Re: shared_buffers advice"
},
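A sketch of the kind of per-table logical I/O query Ben is pointing at; keep in mind that heap_blks_read only counts blocks that missed shared_buffers, so a "read" here may still have been served from the OS page cache rather than from disk:

    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(100.0 * heap_blks_hit
                 / NULLIF(heap_blks_hit + heap_blks_read, 0), 1) AS buffer_hit_pct
      FROM pg_statio_user_tables
     ORDER BY heap_blks_read DESC
     LIMIT 10;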
{
"msg_contents": "On Wed, Mar 10, 2010 at 9:28 PM, Paul McGarry <[email protected]> wrote:\n> Hi there,\n>\n> I'm after a little bit of advice on the shared_buffers setting (I have\n> read the various docs on/linked from the performance tuning wiki page,\n> some very helpful stuff there so thanks to those people).\n>\n> I am setting up a 64bit Linux server running Postgresql 8.3, the\n> server has 64gigs of memory and Postgres is the only major application\n> running on it. (This server is to go alongside some existing 8.3\n> servers, we will look at 8.4/9 migration later)\n>\n> I'm basically wondering how the postgresql cache (ie shared_buffers)\n> and the OS page_cache interact. The general advice seems to be to\n> assign 1/4 of RAM to shared buffers.\n>\n> I don't have a good knowledge of the internals but I'm wondering if\n> this will effectively mean that roughly the same amount of RAM being\n> used for the OS page cache will be used for redundantly caching\n> something the Postgres is caching as well?\n>\n> IE when Postgres reads something from disk it will go into both the OS\n> page cache and the Postgresql shared_buffers and the OS page cache\n> copy is unlikely to be useful for anything.\n>\n> If that is the case what are the downsides to having less overlap\n> between the caches, IE heavily favouring one or the other, such as\n> allocating shared_buffers to a much larger percentage (such as 90-95%\n> of expected 'free' memory).\n\nI've personally heard tons of anecdotal evidence wrt shared buffers\nsetting. There is a bit of benchmarking info suggesting you can eek\nmarginal gains via shared buffers setting but you have to take\n(unfortunately) o/s, hardware, filesystem and other factors all into\naccount.\n\nHere is what I'm pretty confident about saying:\n*) a page fault to disk is a much bigger deal than a fault to pg cache\nvs os/ cache.\n\nmany people assume that raising shared buffers decreases the chance of\na disk fault. it doesn't -- at least not in the simple way you would\nthink -- all modern o/s aggressively cache filesystem data already so\nwe are simply layering over the o/s cache.\n\nIf your database is really big -- anything that reduces disk faults is\na win and increases them is a loss. tps measurements according to\npgbench are not as interesting to me as iops from the disk system.\n\n*) shared buffer affects are hard to detect in the single user case.\n\nThe performance of a single 'non disk bound' large query will perform\npretty much the same regardless of how you set shared buffers. In\nother words, you will not be able to easily measure the differences in\nthe setting outside of a real or simulated production workload.\n\n*) shared_buffers is one of the _least_ important performance settings\nin postgresql.conf\n\nMany settings, like work_mem, planner tweaks, commit settings,\nautovacuum settings, can dramatically impact your workload performance\nin spectacular ways, but tend to be 'case by case' specific. shared\nbuffers affects _everything_, albeit in very subtle ways, so you have\nto be careful.\n\n*) I sometimes wonder if the o/s should just manage everything.\n\nwe just said goodbye to the fsm (thank goodness for that!) -- what\nabout a fully o/s managed cache? goodbye svsv ipc? note my views\nhere are very simplistic -- I don't have anything close to a full\nunderstanding of the cache machinery in the database.\n\nmerlin\n",
"msg_date": "Mon, 24 May 2010 14:25:58 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/24 Merlin Moncure <[email protected]>:\n\n> *) a page fault to disk is a much bigger deal than a fault to pg cache\n> vs os/ cache.\n\nThat was my impression. That's why I did not touch our 2/16 GB setting\nright away. I guess that 2 more gigabytes in OS cache is better than 2\nmore (duplicated) gigabytes in PG shared_buffers. In our case 2 GB\nshared_buffers appears to be enough to avoid thrashing between OS and\nPG.\n\n> *) shared_buffers is one of the _least_ important performance settings\n> in postgresql.conf\n>\n> Many settings, like work_mem, planner tweaks, commit settings,\n> autovacuum settings\n\nCan you recommend any sources on these parameters, especially commit\nsettings and planner tweaks?\n\n\nThank you so much for the whole answer! Not only it addresses the\nimmediate question, but also many of the unasked that I had in the\nback of my head. It's brief and gives a broad view over all the\nperformance concerns. It should be part of documentation or the first\npage of performance wiki. Have you copied it from somewhere?\n\n-- \nKonrad Garus\n",
"msg_date": "Tue, 25 May 2010 11:58:24 +0200",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "On Tue, May 25, 2010 at 5:58 AM, Konrad Garus <[email protected]> wrote:\n> 2010/5/24 Merlin Moncure <[email protected]>:\n>\n>> *) a page fault to disk is a much bigger deal than a fault to pg cache\n>> vs os/ cache.\n>\n> That was my impression. That's why I did not touch our 2/16 GB setting\n> right away. I guess that 2 more gigabytes in OS cache is better than 2\n> more (duplicated) gigabytes in PG shared_buffers. In our case 2 GB\n> shared_buffers appears to be enough to avoid thrashing between OS and\n> PG.\n>\n>> *) shared_buffers is one of the _least_ important performance settings\n>> in postgresql.conf\n>>\n>> Many settings, like work_mem, planner tweaks, commit settings,\n>> autovacuum settings\n>\n> Can you recommend any sources on these parameters, especially commit\n> settings and planner tweaks?\n>\n>\n> Thank you so much for the whole answer! Not only it addresses the\n> immediate question, but also many of the unasked that I had in the\n> back of my head. It's brief and gives a broad view over all the\n> performance concerns. It should be part of documentation or the first\n> page of performance wiki. Have you copied it from somewhere?\n\nThank you for your nice comments. This was strictly a brain dump from\nyours truly. There is a fairly verbose guide on the wiki\n(http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server).\nThere is a lot of good info there but it's missing a few things\n(from_collapse_limit for example).\n\nI would prefer to see the annotated performance oriented .conf\nsettings to be written in terms of trade offs (too low? X too high? Y\nsetting in order to get? Z). For example, did you know that if crank\nmax_locks_per_transaction you also increase the duration of every\nquery that hits pg_locks() -- well, now you do :-).\n\nmerlin\n",
"msg_date": "Tue, 25 May 2010 17:03:33 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/24 Konrad Garus <[email protected]>:\n> 2010/3/11 Paul McGarry <[email protected]>:\n>\n>> I'm basically wondering how the postgresql cache (ie shared_buffers)\n>> and the OS page_cache interact. The general advice seems to be to\n>> assign 1/4 of RAM to shared buffers.\n>>\n>> I don't have a good knowledge of the internals but I'm wondering if\n>> this will effectively mean that roughly the same amount of RAM being\n>> used for the OS page cache will be used for redundantly caching\n>> something the Postgres is caching as well?\n>\n> I have a similar problem but I can't see an answer in this thread.\n>\n> Our dedicated server has 16 GB RAM. Among other settings\n> shared_buffers is 2 GB, effective_cache_size is 12 GB.\n>\n> Do shared_buffers duplicate contents of OS page cache? If so, how do I\n> know if 25% RAM is the right value for me? Actually it would not seem\n> to be true - the less redundancy the better.\n\nAt the moment where a block is requested for the first time (usualy\n8kb from postgres, so in fact 2 blocks in OS), you have 'duplicate'\nbuffers.\nBut, depending of your workload, it is not so bad because those 2\nblocks should not be requested untill some time (because in postgresql\nshared buffers) and should be evicted by OS in favor of new blocks\nrequests.\nAgain it depends on your workload, if you have a case where you\nrefresh a lot the shared buffers then you will have more blocks in the\n2 caches at the same time.\n\nYou can try pgfincore extension to grab stats from OS cache and/or\npatch postgresql if you want real stats ;)\npgbuffercache is provided with postgresql and deliver very usefull information :\nhttp://www.postgresql.org/docs/8.4/interactive/pgbuffercache.html\n\n>\n> Another question - is there a tool or built-in statistic that tells\n> when/how often/how much a table is read from disk? I mean physical\n> read, not poll from OS cache to shared_buffers.\n>\n> --\n> Konrad Garus\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Wed, 26 May 2010 11:48:08 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
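For reference, essentially the example query from the pg_buffercache documentation linked above is enough to find the biggest occupants of shared_buffers (the "hogs" mentioned below):

    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
           INNER JOIN pg_class c
           ON b.relfilenode = c.relfilenode
          AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                     WHERE datname = current_database()))
     GROUP BY c.relname
     ORDER BY 2 DESC
     LIMIT 10;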
{
"msg_contents": "2010/5/26 Cédric Villemain <[email protected]>:\n\n> At the moment where a block is requested for the first time (usualy\n> 8kb from postgres, so in fact 2 blocks in OS), you have 'duplicate'\n> buffers.\n> But, depending of your workload, it is not so bad because those 2\n> blocks should not be requested untill some time (because in postgresql\n> shared buffers) and should be evicted by OS in favor of new blocks\n> requests.\n\nSince pg_buffercache is 4-8 times smaller, it would seem to be\nextremely rare to me. And when PG requests a block, it also needs to\nevict something from shared_buffers.\n\n> You can try pgfincore extension to grab stats from OS cache and/or\n> patch postgresql if you want real stats ;)\n\nThank you! It seems to be the tool I was looking for. Could help me\nlocate and troubleshoot the hogs in page cache. I also find the\nsnapshot/restore function promising. Every morning our cache is cold\nor filled with irrelevant data left by nightly batch jobs, thus\nseverely impacting the performance. Seems to be exactly what this tool\nis for.\n\nHow does it work? How stable is it? Can we use it in production on a\ndaily basis?\n\n> pgbuffercache is provided with postgresql and deliver very usefull information :\n> http://www.postgresql.org/docs/8.4/interactive/pgbuffercache.html\n\nThank you. I already am using it. I've already found a few hogs with it.\n\n-- \nKonrad Garus\n",
"msg_date": "Thu, 27 May 2010 09:24:23 +0200",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Konrad Garus <[email protected]>:\n> 2010/5/26 Cédric Villemain <[email protected]>:\n>\n>> At the moment where a block is requested for the first time (usualy\n>> 8kb from postgres, so in fact 2 blocks in OS), you have 'duplicate'\n>> buffers.\n>> But, depending of your workload, it is not so bad because those 2\n>> blocks should not be requested untill some time (because in postgresql\n>> shared buffers) and should be evicted by OS in favor of new blocks\n>> requests.\n>\n> Since pg_buffercache is 4-8 times smaller, it would seem to be\n> extremely rare to me. And when PG requests a block, it also needs to\n> evict something from shared_buffers.\n\n3 very important things :\n* postgresql shared buffers are database oriented\n* OS shared buffers are *more* complex and will not evict the same\nbuffers as postgres.\n* OS page cache can handle tens of GB where postgres usually have no\ngain in performance over 10GB.\n\n>\n>> You can try pgfincore extension to grab stats from OS cache and/or\n>> patch postgresql if you want real stats ;)\n>\n> Thank you! It seems to be the tool I was looking for. Could help me\n> locate and troubleshoot the hogs in page cache. I also find the\n> snapshot/restore function promising. Every morning our cache is cold\n> or filled with irrelevant data left by nightly batch jobs, thus\n> severely impacting the performance. Seems to be exactly what this tool\n> is for.\n>\n> How does it work? How stable is it? Can we use it in production on a\n> daily basis?\n\nIt works thanks to mincore/posix_fadvise stuff : you need linux.\nIt is stable enough in my own experiment. I did use it for debugging\npurpose in production servers with succes.\nBUT :\n* snapshot/restore is done via a flat_file (one per segment or\ntable/index) and *it is not removed* when you drop a table.\n* it might exist corner case not yet handled (like snapshot a\ndatabase, change things like drop table, truncate table, then restore)\n\nIt needs some polish to be totally production ready but the job can be done.\n\n\n>\n>> pgbuffercache is provided with postgresql and deliver very usefull information :\n>> http://www.postgresql.org/docs/8.4/interactive/pgbuffercache.html\n>\n> Thank you. I already am using it. I've already found a few hogs with it.\n>\n> --\n> Konrad Garus\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 27 May 2010 09:50:48 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Cédric Villemain <[email protected]>:\n\n> It works thanks to mincore/posix_fadvise stuff : you need linux.\n> It is stable enough in my own experiment. I did use it for debugging\n> purpose in production servers with succes.\n\nWhat impact does it have on performance?\n\nDoes it do anything, is there any interaction between it and PG/OS,\nwhen it's not executing a command explicitly invoked by me?\n\n-- \nKonrad Garus\n",
"msg_date": "Thu, 27 May 2010 10:59:50 +0200",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Konrad Garus <[email protected]>:\n> 2010/5/27 Cédric Villemain <[email protected]>:\n>\n>> It works thanks to mincore/posix_fadvise stuff : you need linux.\n>> It is stable enough in my own experiment. I did use it for debugging\n>> purpose in production servers with succes.\n>\n> What impact does it have on performance?\n\npgmincore() and pgmincore_snapshot() both are able to mmap up to 1GB.\nI didn't mesure a performance impact. But I haven't enough benchmarks/test yet.\n\n>\n> Does it do anything, is there any interaction between it and PG/OS,\n> when it's not executing a command explicitly invoked by me?\n\npgfincore does nothing until you call one of the functions.\n\nReducing the mmap window is faisable, and I had start something to use\neffective_io_concurrency in order to improve prefetch (for restore)\nbut this part of the code is not yet finished.\n\n>\n> --\n> Konrad Garus\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 27 May 2010 12:21:26 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Cédric Villemain <[email protected]>:\n\n> pgmincore() and pgmincore_snapshot() both are able to mmap up to 1GB.\n\nDoes it mean they can occupy 1 GB of RAM? How does it relate to amount\nof page buffers mapped by OS?\n\n-- \nKonrad Garus\n",
"msg_date": "Thu, 27 May 2010 17:18:41 +0200",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Konrad Garus <[email protected]>:\n> 2010/5/27 Cédric Villemain <[email protected]>:\n>\n>> pgmincore() and pgmincore_snapshot() both are able to mmap up to 1GB.\n>\n> Does it mean they can occupy 1 GB of RAM? How does it relate to amount\n> of page buffers mapped by OS?\n\nwell, that is the projection of file in memory. only projection, but\nthe memory is still acquire. It is ok to rework this part and project\nsomething like 128MB and loop. (in fact the code is needed for 9.0\nbecause segment can be > 1GB, I didn't check what is the optimum\nprojection size yet)\nSo both yes at your questions :)\n\n>\n> --\n> Konrad Garus\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 27 May 2010 17:24:28 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Cédric Villemain <[email protected]>:\n\n> well, that is the projection of file in memory. only projection, but\n> the memory is still acquire. It is ok to rework this part and project\n> something like 128MB and loop. (in fact the code is needed for 9.0\n> because segment can be > 1GB, I didn't check what is the optimum\n> projection size yet)\n> So both yes at your questions :)\n\nSo when I map 12 GB, this process will consume 1 GB and the time\nneeded to browse through the whole 12 GB buffer?\n\n-- \nKonrad Garus\n",
"msg_date": "Thu, 27 May 2010 17:51:43 +0200",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Konrad Garus <[email protected]>:\n> 2010/5/27 Cédric Villemain <[email protected]>:\n>\n>> well, that is the projection of file in memory. only projection, but\n>> the memory is still acquire. It is ok to rework this part and project\n>> something like 128MB and loop. (in fact the code is needed for 9.0\n>> because segment can be > 1GB, I didn't check what is the optimum\n>> projection size yet)\n>> So both yes at your questions :)\n>\n> So when I map 12 GB, this process will consume 1 GB and the time\n> needed to browse through the whole 12 GB buffer?\n\nExactly. And the time to browse depend on the number of blocks already\nin core memory.\nI am interested by tests results and benchmarks if you are going to do some :)\n\n\n>\n> --\n> Konrad Garus\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 27 May 2010 20:22:10 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/27 Cédric Villemain <[email protected]>:\n\n> Exactly. And the time to browse depend on the number of blocks already\n> in core memory.\n> I am interested by tests results and benchmarks if you are going to do some :)\n\nI am still thinking whether I want to do it on this prod machine.\nMaybe on something less critical first (but still with a good amount\nof memory mapped by page buffers).\n\nWhat system have you tested it on? Has it ever run on a few-gig system? :-)\n\n-- \nKonrad Garus\n",
"msg_date": "Fri, 28 May 2010 09:57:40 +0200",
"msg_from": "Konrad Garus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "2010/5/28 Konrad Garus <[email protected]>:\n> 2010/5/27 Cédric Villemain <[email protected]>:\n>\n>> Exactly. And the time to browse depend on the number of blocks already\n>> in core memory.\n>> I am interested by tests results and benchmarks if you are going to do some :)\n>\n> I am still thinking whether I want to do it on this prod machine.\n> Maybe on something less critical first (but still with a good amount\n> of memory mapped by page buffers).\n>\n> What system have you tested it on? Has it ever run on a few-gig system? :-)\n\ndatabases up to 300GB for the stats purpose.\nThe snapshot/restore was done for bases around 40-50GB but with only\n16GB of RAM.\n\nI really thing some improvments are posible before using it in\nproduction, even if it should work well as it is.\nAt least something to remove the orphan snapshot files (in case of\ndrop table, or truncate). And probably increase the quality of the\ncode around the prefetch.(better handling of\neffective_io_concurrency...the prefetch is linerar but blocks requests\nare grouped)\n\nIf you are able to test/benchs on a pre-production env, do it :)\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Fri, 28 May 2010 10:12:52 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Merlin Moncure wrote:\n> I would prefer to see the annotated performance oriented .conf\n> settings to be written in terms of trade offs (too low? X too high? Y\n> setting in order to get? Z). For example, did you know that if crank\n> max_locks_per_transaction you also increase the duration of every\n> query that hits pg_locks() -- well, now you do :-).\n> \n\nYou can't do this without providing more context and tools for people to \nmeasure their systems. At PGCon last week, I presented a talk \nspecifically about tuning shared_buffers and the checkpoint settings. \nWhat's come out of my research there is that you can stare at the data \nin pg_buffercache and pg_stat_bgwriter and classify systems based on the \ndistribution of usage counts in their buffer cache on how the background \nwriter copes with that. The right style of tuning to apply is dependent \non whether someone has a high proportion of buffers with a usage count \n >=2. A tuning guide that actually covered that in enough detail to be \nan improvement over what is in the \"Tuning Your PostgreSQL Server\" would \nbe overwhelming large, defeating the purpose of that document--providing \na fairly bite-size guide.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 28 May 2010 14:57:54 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
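A sketch of the usage-count profile Greg describes pulling from pg_buffercache (rows where usagecount is NULL are unused buffers):

    SELECT usagecount,
           count(*) AS buffers,
           sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty
      FROM pg_buffercache
     GROUP BY usagecount
     ORDER BY usagecount;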
{
"msg_contents": "On Fri, May 28, 2010 at 2:57 PM, Greg Smith <[email protected]> wrote:\n> Merlin Moncure wrote:\n>>\n>> I would prefer to see the annotated performance oriented .conf\n>> settings to be written in terms of trade offs (too low? X too high? Y\n>> setting in order to get? Z). For example, did you know that if crank\n>> max_locks_per_transaction you also increase the duration of every\n>> query that hits pg_locks() -- well, now you do :-).\n>>\n>\n> You can't do this without providing more context and tools for people to\n> measure their systems. At PGCon last week, I presented a talk specifically\n> about tuning shared_buffers and the checkpoint settings. What's come out of\n> my research there is that you can stare at the data in pg_buffercache and\n> pg_stat_bgwriter and classify systems based on the distribution of usage\n> counts in their buffer cache on how the background writer copes with that.\n> The right style of tuning to apply is dependent on whether someone has a\n> high proportion of buffers with a usage count >=2. A tuning guide that\n> actually covered that in enough detail to be an improvement over what is in\n> the \"Tuning Your PostgreSQL Server\" would be overwhelming large, defeating\n> the purpose of that document--providing a fairly bite-size guide.\n\nSure. IOW, a .conf guide should answer:\n*) What are the symptoms of misconfigured shared_buffers (too low/too high)?\n*) How do you confirm this?\n\nOur documentation is completely unclear wrt these questions, other\nthan to give some vague advice in order to get 'good performance'.\nI'm of the opinion (rightly or wrongly) that the prevailing opinions\non how to configure shared_buffers are based on special case\nbenchmarking information or simply made up. The dangers of setting it\ntoo high are very real (especially on linux) but this isn't mentioned;\ncontrast that to the care put into the fsync language. This is in the\nface of some prevailing myths (more shared_buffers = more cache =\nfaster) that have some grains of truth but aren't the whole story. I\njust helped out a friend that oversubscribed and blew up his linux\nbox...oom killer strikes again.\n\nI'm not complaining here mind you; I'd just like to filter out all the\nanecdotal information and similar noise. shared_buffers is a bit of a\nbugaboo because it is fairly subtle in how it interacts with\nproduction workloads and there is so little solid information out\nthere. I would *love* to see some formal verifiable tests showing >\n20% improvements resulting from shared_buffers tweaks that could be\nrepeated on other hardware/os installations. Got any notes for your\ntalk? :-)\n\nmerlin\n",
"msg_date": "Fri, 28 May 2010 16:30:58 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "Merlin Moncure wrote:\n> I'm of the opinion (rightly or wrongly) that the prevailing opinions\n> on how to configure shared_buffers are based on special case\n> benchmarking information or simply made up.\n\nWell, you're wrong, but it's OK; we'll forgive you this time. It's true \nthat a lot of the earlier advice here wasn't very well tested across \nmultiple systems. I have a stack of data that supports the anecdotal \nguidelines are in the right ballpark now though, most of which is \nprotected by NDA. If you look at the spreadsheet at \nhttp://www.pgcon.org/2010/schedule/events/218.en.html you'll see three \nexamples I was able to liberate for public consumption, due to some \ncontributions by list regulars here (I'm working on a fourth right \nnow). The first one has shared_buffers set at 8GB on a 96GB server, at \nthe upper limit of where it's useful, and the database is using every \nbit of that more effectively than had it been given to the OS to \nmanage. (I'm starting to get access to test hardware to investigate why \nthere's an upper limit wall around 10GB for shared_buffers too) The \nother two are smaller systems, and they don't benefit nearly as much \nfrom giving the database memory given their workload. Basically it \ncomes down to two things:\n\n1) Are you getting a lot of buffers where the usage count is >=3? If \nnot, you can probably reduce shared_buffers and see better performance. \nThis is not the case with the first system shown, but is true on the \nsecond and third.\n\n2) Is the average size of the checkpoints too large? If so, you might \nhave to reduce shared_buffers in order to pull that down. Might even \nneed to pull down the checkpoint parameters too.\n\nWorkloads that don't like the database to have RAM certainly exist, but \nthere are just as many that appreciate every bit of memory you dedicated \nto it.\n\nWith all the tuning work I've been doing the last few months, the only \nthing I've realized the standard guidelines (as embodied by pgtune and \nthe wiki pages) are wrong is in regards to work_mem. You have to be \nmuch more careful with that than what pgtune in particular suggests. \nThe rest of the rules of thumb pgtune is based on and \"Tuning your \nPostgreSQL Server\" suggests are not bad.\n\nAccomplishing a major advance over the current state of things really \nneeds some captures of real application load from a production system of \nboth major types that we can playback, to give something more realistic \nthan one of the boring benchmark loads. Dimitri Fontaine is working on \nsome neat tools in that area, and now that we're working together more \nclosely I'm hoping we can push that work forward further. That's the \nreal limiting factor here now, assembling repeatable load testing that \nlooks like an application rather than a benchmark.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 28 May 2010 17:02:48 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
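One rough way to estimate the "average size of the checkpoints" Greg mentions, from pg_stat_bgwriter (assumes the default 8kB block size and counters that have not been reset recently):

    SELECT checkpoints_timed,
           checkpoints_req,
           buffers_checkpoint * 8192
             / NULLIF(checkpoints_timed + checkpoints_req, 0) AS avg_bytes_written_per_checkpoint
      FROM pg_stat_bgwriter;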
{
"msg_contents": "On Mon, May 24, 2010 at 12:25 PM, Merlin Moncure <[email protected]> wrote:\n> *) shared_buffers is one of the _least_ important performance settings\n> in postgresql.conf\n\nYes, and no. It's usually REALLY helpful to make sure it's more than\n8 or 24Megs. But it doesn't generally need to be huge to make a\ndifference.\n",
"msg_date": "Fri, 28 May 2010 15:11:15 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "If, like me, you came from the Oracle world, you may be tempted to throw a\nton of RAM at this. Don't. PG does not like it.\n\nOn Fri, May 28, 2010 at 4:11 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, May 24, 2010 at 12:25 PM, Merlin Moncure <[email protected]>\n> wrote:\n> > *) shared_buffers is one of the _least_ important performance settings\n> > in postgresql.conf\n>\n> Yes, and no. It's usually REALLY helpful to make sure it's more than\n> 8 or 24Megs. But it doesn't generally need to be huge to make a\n> difference.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIf, like me, you came from the Oracle world, you may be tempted to throw a ton of RAM at this. Don't. PG does not like it.On Fri, May 28, 2010 at 4:11 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, May 24, 2010 at 12:25 PM, Merlin Moncure <[email protected]> wrote:\n\n> *) shared_buffers is one of the _least_ important performance settings\n> in postgresql.conf\n\nYes, and no. It's usually REALLY helpful to make sure it's more than\n8 or 24Megs. But it doesn't generally need to be huge to make a\ndifference.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 28 May 2010 16:14:18 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "On Fri, May 28, 2010 at 5:02 PM, Greg Smith <[email protected]> wrote:\n> Merlin Moncure wrote:\n>>\n>> I'm of the opinion (rightly or wrongly) that the prevailing opinions\n>> on how to configure shared_buffers are based on special case\n>> benchmarking information or simply made up.\n>\n> Well, you're wrong, but it's OK; we'll forgive you this time. It's true\n> that a lot of the earlier advice here wasn't very well tested across\n> multiple systems. I have a stack of data that supports the anecdotal\n> guidelines are in the right ballpark now though, most of which is protected\n> by NDA. If you look at the spreadsheet at\n> http://www.pgcon.org/2010/schedule/events/218.en.html you'll see three\n> examples I was able to liberate for public consumption, due to some\n> contributions by list regulars here (I'm working on a fourth right now).\n> The first one has shared_buffers set at 8GB on a 96GB server, at the upper\n> limit of where it's useful, and the database is using every bit of that more\n> effectively than had it been given to the OS to manage. (I'm starting to\n> get access to test hardware to investigate why there's an upper limit wall\n> around 10GB for shared_buffers too) The other two are smaller systems, and\n> they don't benefit nearly as much from giving the database memory given\n> their workload. Basically it comes down to two things:\n>\n> 1) Are you getting a lot of buffers where the usage count is >=3? If not,\n> you can probably reduce shared_buffers and see better performance. This is\n> not the case with the first system shown, but is true on the second and\n> third.\n>\n> 2) Is the average size of the checkpoints too large? If so, you might have\n> to reduce shared_buffers in order to pull that down. Might even need to\n> pull down the checkpoint parameters too.\n>\n> Workloads that don't like the database to have RAM certainly exist, but\n> there are just as many that appreciate every bit of memory you dedicated to\n> it.\n>\n> With all the tuning work I've been doing the last few months, the only thing\n> I've realized the standard guidelines (as embodied by pgtune and the wiki\n> pages) are wrong is in regards to work_mem. You have to be much more\n> careful with that than what pgtune in particular suggests. The rest of the\n> rules of thumb pgtune is based on and \"Tuning your PostgreSQL Server\"\n> suggests are not bad.\n>\n> Accomplishing a major advance over the current state of things really needs\n> some captures of real application load from a production system of both\n> major types that we can playback, to give something more realistic than one\n> of the boring benchmark loads. Dimitri Fontaine is working on some neat\n> tools in that area, and now that we're working together more closely I'm\n> hoping we can push that work forward further. That's the real limiting\n> factor here now, assembling repeatable load testing that looks like an\n> application rather than a benchmark.\n\nThis is great information -- exactly the kind of research I'm talking\nabout. btw I like being proved wrong! :-) I need some time to\nprocess this.\n\nmerlin\n",
"msg_date": "Fri, 28 May 2010 17:16:01 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
}
] |
[
{
"msg_contents": "Hi there,\n\nI'm after a little bit of advice on the shared_buffers setting (I have\nread the various docs on/linked from the performance tuning wiki page,\nsome very helpful stuff there so thanks to those people).\n\nI am setting up a 64bit Linux server running Postgresql 8.3, the\nserver has 64gigs of memory and Postgres is the only major application\nrunning on it. (This server is to go alongside some existing 8.3\nservers, we will look at 8.4/9 migration later)\n\nI'm basically wondering how the postgresql cache (ie shared_buffers)\nand the OS page_cache interact. The general advice seems to be to\nassign 1/4 of RAM to shared buffers.\n\nI don't have a good knowledge of the internals but I'm wondering if\nthis will effectively mean that roughly the same amount of RAM being\nused for the OS page cache will be used for redundantly caching\nsomething the Postgres is caching as well?\n\nIE when Postgres reads something from disk it will go into both the OS\npage cache and the Postgresql shared_buffers and the OS page cache\ncopy is unlikely to be useful for anything.\n\nIf that is the case what are the downsides to having less overlap\nbetween the caches, IE heavily favouring one or the other, such as\nallocating shared_buffers to a much larger percentage (such as 90-95%\nof expected 'free' memory).\n\nPaul\n\n(Apologies if two copies of this email arrive, I sent the first from\nan email address that wasn't directly subscribed to the list so it was\nblocked).\n",
"msg_date": "Thu, 11 Mar 2010 13:22:10 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffers advice"
},
{
"msg_contents": "On Mar 10, 2010, at 6:22 PM, Paul McGarry wrote:\n\n> Hi there,\n> \n> I'm after a little bit of advice on the shared_buffers setting (I have\n> read the various docs on/linked from the performance tuning wiki page,\n> some very helpful stuff there so thanks to those people).\n> \n> I am setting up a 64bit Linux server running Postgresql 8.3, the\n> server has 64gigs of memory and Postgres is the only major application\n> running on it. (This server is to go alongside some existing 8.3\n> servers, we will look at 8.4/9 migration later)\n> \n> I'm basically wondering how the postgresql cache (ie shared_buffers)\n> and the OS page_cache interact. The general advice seems to be to\n> assign 1/4 of RAM to shared buffers.\n> \n> I don't have a good knowledge of the internals but I'm wondering if\n> this will effectively mean that roughly the same amount of RAM being\n> used for the OS page cache will be used for redundantly caching\n> something the Postgres is caching as well?\n> \n> IE when Postgres reads something from disk it will go into both the OS\n> page cache and the Postgresql shared_buffers and the OS page cache\n> copy is unlikely to be useful for anything.\n> \n> If that is the case what are the downsides to having less overlap\n> between the caches, IE heavily favouring one or the other, such as\n> allocating shared_buffers to a much larger percentage (such as 90-95%\n> of expected 'free' memory).\n\nCache isn't all you have to worry about. There's also work_mem and the number of concurrent queries that you expect, and those may end up leaving you less than 25% of ram for shared_buffers - though probably not in your case. Also, I've read that 10GB is the upper end of where shared_buffers becomes useful, though I'm not entirely sure why. I think that rule of thumb has its roots in some heuristics around the double buffering effects you're asking about.\n\nI *can* say a 10GB shared_buffer value is working \"well\" with my 128GB of RAM..... whether or not it's \"optimal,\" I couldn't say without a lot of experimentation I can't afford to do right now. You might have a look at the pg_buffercache contrib module. It can tell you how utilized your shared buffers are.",
"msg_date": "Wed, 10 Mar 2010 21:16:55 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
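The settings referred to above (shared_buffers, work_mem multiplied by the number of concurrent sorts, and so on) can be read back from the running server before experimenting; a quick, hedged starting point rather than a tuning recipe:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
               'max_connections', 'effective_cache_size');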
{
"msg_contents": "Paul McGarry wrote:\n> IE when Postgres reads something from disk it will go into both the OS\n> page cache and the Postgresql shared_buffers and the OS page cache\n> copy is unlikely to be useful for anything.\n> \n\nThat's correct. However, what should happen over time is that the \npopular blocks in PostgreSQL's buffer cache, the hot ones that are used \nall the time for things like index blocks, will stay in the PG buffer \ncache, while being evicted from the OS. And now you've got a win over \nthe situation where you'd have used a smaller buffer cache. A typical \nOS buffering scheme will not quite be smart enough to prioritize those \nblocks over the rest so that they are likely to stay there.\n\nSo for any given system, the question is whether the gain in performance \nfrom buffers that get a high usage count and stay there, something you \nonly get from the PG buffer cache, outweighs the overhead of the \ndouble-buffering that shows up in order to reach that state. If you \noversize the buffer cache, and look inside it with pg_buffercache \nconsidering the usage count distribution, you can actually estimate how \nlikely that is to be true.\n\n\n> If that is the case what are the downsides to having less overlap\n> between the caches, IE heavily favouring one or the other, such as\n> allocating shared_buffers to a much larger percentage (such as 90-95%\n> of expected 'free' memory).\n> \n\nGiving all the buffers to the database doesn't work for many reasons:\n-Need a bunch leftover for clients to use (i.e. work_mem)\n-Won't be enough OS cache for non-buffer data the database expects \ncached reads and writes will perform well onto (some of the non-database \nfiles it uses)\n-Database checkpoints will turn into a nightmare, because there will be \nso much more dirty data that could have been spooled regularly out to \nthe OS and then to disk by backends that doesn't ever happen.\n-Not having enough writes for buffering backend writes means less chanes \nto do write combining and elevator seek sorting, which means average I/O \nwill drop.\n\nThe alternate idea is to make shared_buffers small. I see people \nhappilly running away in the 128MB - 256MB range sometimes. The benefit \nover just using the default of <32MB is obvious, but you're already past \na good bit of the diminishing marginal returns just by the 8X increase. \n\nImproves keep coming as shared_buffers cache size increases for many \nworkloads, but eventually you can expect to go to far if you try to push \neverything in there. Only question is whether that happens at 40%, 60%, \nor something higher.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 11 Mar 2010 03:39:59 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
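One way to watch for the checkpoint problem described above (too much dirty data accumulating in an oversized shared_buffers) is to sample pg_stat_bgwriter, available from 8.3 on, and compare the deltas between samples; the interpretation is only a rough rule of thumb:

-- buffers_checkpoint growing much faster than buffers_clean and
-- buffers_backend means most dirty data is flushed in checkpoint bursts.
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean,
       buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;

-- Setting log_checkpoints = on in postgresql.conf also records the size
-- and duration of each checkpoint in the server log.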
{
"msg_contents": "On 11 March 2010 16:16, Ben Chobot <[email protected]> wrote:\n\n> I *can* say a 10GB shared_buffer value is working \"well\" with my 128GB of RAM..... whether or not it's \"optimal,\" I couldn't say without a lot of experimentation I can't afford to do right now. You might have a look at the pg_buffercache contrib module. It can tell you how utilized your shared buffers are.\n\nThanks Ben and Greg,\n\nI shall start with something relatively sane (such as 10GB) and then\nsee how we go from there.\n\nOnce this server has brought online and bedded in I will be updating\nour other three servers which are identical in hardware spec and all\nhave the same replicated data so I'll be able to do some real world\ntests with different settings withn the same load.\n\n(Currently one is currently running postgresql 8.1 on 32bit OS under a\nVM, the other two running 8.3 on 64bit OS with 64gig of memory but\nwith Postgres still tuned for the 8 gigs the servers originally had\nand under a VM).\n\nPaul\n",
"msg_date": "Fri, 12 Mar 2010 11:19:38 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "On Mar 11, 2010, at 12:39 AM, Greg Smith wrote:\n\n> \n> Giving all the buffers to the database doesn't work for many reasons:\n> -Need a bunch leftover for clients to use (i.e. work_mem)\n> -Won't be enough OS cache for non-buffer data the database expects \n> cached reads and writes will perform well onto (some of the non-database \n> files it uses)\n> -Database checkpoints will turn into a nightmare, because there will be \n> so much more dirty data that could have been spooled regularly out to \n> the OS and then to disk by backends that doesn't ever happen.\n> -Not having enough writes for buffering backend writes means less chanes \n> to do write combining and elevator seek sorting, which means average I/O \n> will drop.\n> \n> The alternate idea is to make shared_buffers small. I see people \n> happilly running away in the 128MB - 256MB range sometimes. The benefit \n> over just using the default of <32MB is obvious, but you're already past \n> a good bit of the diminishing marginal returns just by the 8X increase. \n> \n\nThe DB usage pattern influences this sort of decision too. One that does large bulk inserts can prefer larger shared buffers, provided its bg_writer is tuned well (10GB - 16GB for a 64GB server). \nTemp table usage benefits from it as well -- I believe that one created as \"ON COMMIT DROP\" has a better chance of not being written to the data disk before being dropped with more work_mem.\nIf you have a mixed read workload that has occasional very large sequential scans, you will want to make sure shared_buffers is large enough to hold the most important index and randomly accessed data.\n\nLinux is more sensitive to letting sequential scans kick out data from page cache than Postgres.\n\n\n----------\nLastly, a word of caution on Linux. Before the recent changes to memory accounting and paging (~ kernel 2.28 ish?). Shared_buffers are only accounted for in part of the equations for paging. On one hand, the system sees shared memory as available to be swapped out (even though it won't) and on the other hand it senses memory pressure from it. So if you for example, set shared_mem to 75% of your RAM the system will completely freak out and kswapd and other processes will go through long periods of 100% CPU utilization. \nAn example:\n32GB RAM, 16GB shared_buffers, CentOS 5.4:\nWith the default os 'swappiness' of '60' the system will note that less than 60% is used by pagecache and favor swapping out postgres backends aggressively. If either by turning down the swappiness or opening enough processes to consume more RAM on the system (to ~ 80% or so) the kernel will start spending a LOT of CPU, often minutes at a time, trying to free up memory. From my understanding, it will keep searching the postgres shared_buffers space for pages to swap out even though it can't do so. So for example, there might be 16GB shared mem (which it won't page out), 10GB other process memory, and 6GB actual cahced files in page cache. It sees the ratio of 6GB files to 26GB processes and heavily favors attacking the 26GB -- but scans the whole set of process memory and finds all pages are recently used or can't be paged out.\n\nAnyhow, the latest linux kernels claim to fix this, and Solaris/OpenSolaris or BSD's don't have this problem. On OpenSolaris there are some benchmarks out there that showing that 90% of memory allocated to shared_buffers can work well. On Linux, that is dangerous. 
Combine the poor memory management when there is a lot of shared memory with the fact that 50% is bad for double-buffering, and the Linux suggestion becomes the typical 'at least 128MB, but never more than 25% of RAM'.\n\n\n> Improves keep coming as shared_buffers cache size increases for many \n> workloads, but eventually you can expect to go to far if you try to push \n> everything in there. Only question is whether that happens at 40%, 60%, \n> or something higher.\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 15 Mar 2010 10:58:16 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
},
{
"msg_contents": "On Thu, Mar 11, 2010 at 5:19 PM, Paul McGarry <[email protected]> wrote:\n> On 11 March 2010 16:16, Ben Chobot <[email protected]> wrote:\n>\n>> I *can* say a 10GB shared_buffer value is working \"well\" with my 128GB of RAM..... whether or not it's \"optimal,\" I couldn't say without a lot of experimentation I can't afford to do right now. You might have a look at the pg_buffercache contrib module. It can tell you how utilized your shared buffers are.\n>\n> Thanks Ben and Greg,\n>\n> I shall start with something relatively sane (such as 10GB) and then\n> see how we go from there.\n>\n> Once this server has brought online and bedded in I will be updating\n> our other three servers which are identical in hardware spec and all\n> have the same replicated data so I'll be able to do some real world\n> tests with different settings withn the same load.\n>\n> (Currently one is currently running postgresql 8.1 on 32bit OS under a\n> VM, the other two running 8.3 on 64bit OS with 64gig of memory but\n> with Postgres still tuned for the 8 gigs the servers originally had\n> and under a VM).\n\nDefinitely look at lowering the swappiness setting. On a db server I\ngo for a swappiness of 1\n",
"msg_date": "Mon, 15 Mar 2010 14:01:02 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers advice"
}
] |
[
{
"msg_contents": "Hi,\n\nI am the beginner of pgpsql\n\n\nI need to select from two tables say T1,T2 by UNION\n\nwhen the value is from T1 the output should by a 1 and\nwhen the value from T2 the out put should be 2\n\nwhen T1 Hits values should be 1 and when T2 hits the value should be 2\n\n\nor when we use\nCOALESCE(is exits value, or this value)\n\ni need\nsay_some_function (if value display,1)\n\nresults should be as\n1\n1\n2\n2\n1\n1\n\nIs it possible Can any one help me...\n\nThanks & Regards\nAK\n\nHi,I am the beginner of pgpsql I need to select from two tables say T1,T2 by UNION when the value is from T1 the output should by a 1 and when the value from T2 the out put should be 2 \nwhen T1 Hits values should be 1 and when T2 hits the value should be 2or when we use COALESCE(is exits value, or this value) i need say_some_function (if value display,1) results should be as \n\n112211Is it possible Can any one help me... Thanks & Regards AK",
"msg_date": "Thu, 11 Mar 2010 12:26:02 +0530",
"msg_from": "Angayarkanni <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to SELECT"
},
{
"msg_contents": "In response to Angayarkanni :\n> Hi,\n> \n> I am the beginner of pgpsql\n> \n> �\n> I need to select from two tables say T1,T2 by UNION\n> \n> when the value is from T1 the output should by a 1 and\n> when the value from T2 the out put should be 2\n> \n> when T1 Hits values should be 1 and when T2 hits the value should be 2\n\nselect 'table 1' as table, * from t1 union all select 'table 2', * from t2;\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 11 Mar 2010 08:14:17 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to SELECT"
},
{
"msg_contents": "Hi there\n\nThis list is for performance tuning questions related to PostgreSQL ... your\nquestion is a general SQL syntax issue. Also, it's not quite clear from your\nmessage exactly what you are trying to do - it's better to post example\ntable schemas.\n\nAt a guess, I think you might want:\n\nselect 1, * from T1\nunion all\nselect 2, * from T2\n\nCheers\nDave\n\nOn Thu, Mar 11, 2010 at 12:56 AM, Angayarkanni <[email protected]>wrote:\n\n> Hi,\n>\n> I am the beginner of pgpsql\n>\n>\n> I need to select from two tables say T1,T2 by UNION\n>\n> when the value is from T1 the output should by a 1 and\n> when the value from T2 the out put should be 2\n>\n> when T1 Hits values should be 1 and when T2 hits the value should be 2\n>\n>\n> or when we use\n> COALESCE(is exits value, or this value)\n>\n> i need\n> say_some_function (if value display,1)\n>\n> results should be as\n> 1\n> 1\n> 2\n> 2\n> 1\n> 1\n>\n> Is it possible Can any one help me...\n>\n> Thanks & Regards\n> AK\n>\n>\n\nHi thereThis list is for performance tuning questions related to PostgreSQL ... your question is a general SQL syntax issue. Also, it's not quite clear from your message exactly what you are trying to do - it's better to post example table schemas.\nAt a guess, I think you might want:select 1, * from T1union allselect 2, * from T2CheersDaveOn Thu, Mar 11, 2010 at 12:56 AM, Angayarkanni <[email protected]> wrote:\nHi,I am the beginner of pgpsql I need to select from two tables say T1,T2 by UNION \nwhen the value is from T1 the output should by a 1 and when the value from T2 the out put should be 2 \nwhen T1 Hits values should be 1 and when T2 hits the value should be 2or when we use COALESCE(is exits value, or this value) i need say_some_function (if value display,1) results should be as \n\n\n112211Is it possible Can any one help me... Thanks & Regards AK",
"msg_date": "Thu, 11 Mar 2010 01:15:10 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to SELECT"
},
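A small self-contained version of the pattern from the two replies above, with invented table and column names, to show the output shape (row order within each source value is not guaranteed):

CREATE TABLE t1 (val text);
CREATE TABLE t2 (val text);
INSERT INTO t1 VALUES ('a');
INSERT INTO t1 VALUES ('b');
INSERT INTO t2 VALUES ('c');

SELECT 1 AS src, val FROM t1
UNION ALL
SELECT 2 AS src, val FROM t2
ORDER BY src;

--  src | val
-- -----+-----
--    1 | a
--    1 | b
--    2 | c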
{
"msg_contents": "Yes I got Yaar\n\nThanks a lot !!! (Dave Crooke,A. Kretschmer )\n\n\n\n\nRegards,\nAngayarkanni Kajendran\n\n\n\nOn Thu, Mar 11, 2010 at 12:44 PM, A. Kretschmer <\[email protected]> wrote:\n\n> In response to Angayarkanni :\n> > Hi,\n> >\n> > I am the beginner of pgpsql\n> >\n> >\n> > I need to select from two tables say T1,T2 by UNION\n> >\n> > when the value is from T1 the output should by a 1 and\n> > when the value from T2 the out put should be 2\n> >\n> > when T1 Hits values should be 1 and when T2 hits the value should be 2\n>\n> select 'table 1' as table, * from t1 union all select 'table 2', * from t2;\n>\n>\n> Regards, Andreas\n> --\n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYes I got Yaar Thanks a lot !!! (Dave Crooke,A. Kretschmer )\nRegards,\n\nAngayarkanni Kajendran \nOn Thu, Mar 11, 2010 at 12:44 PM, A. Kretschmer <[email protected]> wrote:\n\nIn response to Angayarkanni :\n> Hi,\n>\n> I am the beginner of pgpsql\n>\n> \n> I need to select from two tables say T1,T2 by UNION\n>\n> when the value is from T1 the output should by a 1 and\n> when the value from T2 the out put should be 2\n>\n> when T1 Hits values should be 1 and when T2 hits the value should be 2\n\nselect 'table 1' as table, * from t1 union all select 'table 2', * from t2;\n\n\nRegards, Andreas\n--\nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 11 Mar 2010 12:53:26 +0530",
"msg_from": "Angayarkanni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to SELECT"
}
] |
[
{
"msg_contents": "Dear performance users,\n\nI am writing on the mailing list, after several unsuccessful attemts to \ncontact mailing list administrators on \"-owner\" e-mail addresses.\n\nNamely, it seems that it is not possible to subscribe to ANY Postgres \nmailing list, using e-mail addresses from newly created TLD-s (e.g. \".rs\").\n\nIn my case, the e-mail address I'm using (ognjen at etf/bg/ac/YU) is \nabout to expire in 15 days. My new address (that I'm using on other \nmailing lists - ognjen at etf/bg/ac/RS) is not accepted by postgres \nmailing lists manager (Majordomo).\n\nI already contacted Majordomo developers and thay exmplained that the \nproblem is easaly solved with simple changes of default configuration of \nthe mailing list manager. To cut a long story short, I am willing to \nexplain it in detail to mailing list administrator, if I ever find one. :)\n\nPlease help me finding the right person for this problem.\n\nRegards,\nOgnjen Blagojevic\n",
"msg_date": "Fri, 12 Mar 2010 10:54:27 +0100",
"msg_from": "Ognjen Blagojevic <[email protected]>",
"msg_from_op": true,
"msg_subject": "[offtopic] Problems subscribing to Postgres mailing lists"
},
{
"msg_contents": "\nForwarding this to the www list. Seems .rs email addresses cannot be\nused for email subscriptions.\n\n---------------------------------------------------------------------------\n\nOgnjen Blagojevic wrote:\n> Dear performance users,\n> \n> I am writing on the mailing list, after several unsuccessful attemts to \n> contact mailing list administrators on \"-owner\" e-mail addresses.\n> \n> Namely, it seems that it is not possible to subscribe to ANY Postgres \n> mailing list, using e-mail addresses from newly created TLD-s (e.g. \".rs\").\n> \n> In my case, the e-mail address I'm using (ognjen at etf/bg/ac/YU) is \n> about to expire in 15 days. My new address (that I'm using on other \n> mailing lists - ognjen at etf/bg/ac/RS) is not accepted by postgres \n> mailing lists manager (Majordomo).\n> \n> I already contacted Majordomo developers and thay exmplained that the \n> problem is easaly solved with simple changes of default configuration of \n> the mailing list manager. To cut a long story short, I am willing to \n> explain it in detail to mailing list administrator, if I ever find one. :)\n> \n> Please help me finding the right person for this problem.\n> \n> Regards,\n> Ognjen Blagojevic\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n",
"msg_date": "Mon, 15 Mar 2010 18:08:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] [offtopic] Problems subscribing to Postgres\n\tmailing lists"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Forwarding this to the www list. Seems .rs email addresses cannot be\n> used for email subscriptions.\n\nYou really need to copy Marc for this to go anywhere.\n\n> ---------------------------------------------------------------------------\n> \n> Ognjen Blagojevic wrote:\n> > Dear performance users,\n> > \n> > I am writing on the mailing list, after several unsuccessful attemts to \n> > contact mailing list administrators on \"-owner\" e-mail addresses.\n> > \n> > Namely, it seems that it is not possible to subscribe to ANY Postgres \n> > mailing list, using e-mail addresses from newly created TLD-s (e.g. \".rs\").\n> > \n> > In my case, the e-mail address I'm using (ognjen at etf/bg/ac/YU) is \n> > about to expire in 15 days. My new address (that I'm using on other \n> > mailing lists - ognjen at etf/bg/ac/RS) is not accepted by postgres \n> > mailing lists manager (Majordomo).\n> > \n> > I already contacted Majordomo developers and thay exmplained that the \n> > problem is easaly solved with simple changes of default configuration of \n> > the mailing list manager. To cut a long story short, I am willing to \n> > explain it in detail to mailing list administrator, if I ever find one. :)\n> > \n> > Please help me finding the right person for this problem.\n> > \n> > Regards,\n> > Ognjen Blagojevic\n> > \n> > -- \n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n> -- \n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> \n> PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n> \n> -- \n> Sent via pgsql-www mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-www\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 15 Mar 2010 19:15:02 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] [offtopic] Problems subscribing to\n\tPostgres mailing lists"
},
{
"msg_contents": "\nOdd, we don't do any TLD blocking that I'm aware of, only domain blocking \nfor domains that have generated, in the past, alot of spam ...\n\nOgnjen ... what domain are you trying to su bscribe from?\n\n\n\nOn Mon, 15 Mar 2010, Alvaro Herrera wrote:\n\n> Bruce Momjian wrote:\n>>\n>> Forwarding this to the www list. Seems .rs email addresses cannot be\n>> used for email subscriptions.\n>\n> You really need to copy Marc for this to go anywhere.\n>\n>> ---------------------------------------------------------------------------\n>>\n>> Ognjen Blagojevic wrote:\n>>> Dear performance users,\n>>>\n>>> I am writing on the mailing list, after several unsuccessful attemts to\n>>> contact mailing list administrators on \"-owner\" e-mail addresses.\n>>>\n>>> Namely, it seems that it is not possible to subscribe to ANY Postgres\n>>> mailing list, using e-mail addresses from newly created TLD-s (e.g. \".rs\").\n>>>\n>>> In my case, the e-mail address I'm using (ognjen at etf/bg/ac/YU) is\n>>> about to expire in 15 days. My new address (that I'm using on other\n>>> mailing lists - ognjen at etf/bg/ac/RS) is not accepted by postgres\n>>> mailing lists manager (Majordomo).\n>>>\n>>> I already contacted Majordomo developers and thay exmplained that the\n>>> problem is easaly solved with simple changes of default configuration of\n>>> the mailing list manager. To cut a long story short, I am willing to\n>>> explain it in detail to mailing list administrator, if I ever find one. :)\n>>>\n>>> Please help me finding the right person for this problem.\n>>>\n>>> Regards,\n>>> Ognjen Blagojevic\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>> --\n>> Bruce Momjian <[email protected]> http://momjian.us\n>> EnterpriseDB http://enterprisedb.com\n>>\n>> PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n>>\n>> --\n>> Sent via pgsql-www mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-www\n>\n>\n> -- \n> Alvaro Herrera http://www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n\n----\nMarc G. Fournier Hub.Org Hosting Solutions S.A.\[email protected] http://www.hub.org\n\nYahoo:yscrappy Skype: hub.org ICQ:7615664 MSN:[email protected]\n",
"msg_date": "Mon, 15 Mar 2010 22:23:53 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] [offtopic] Problems subscribing to\n\tPostgres mailing lists"
},
{
"msg_contents": "It's in his post... relevant part snipped below. ...Robert\n\nOn Mon, Mar 15, 2010 at 9:23 PM, Marc G. Fournier <[email protected]> wrote:\n> Ognjen ... what domain are you trying to su bscribe from?\n>\n> On Mon, 15 Mar 2010, Alvaro Herrera wrote:\n>>>> Namely, it seems that it is not possible to subscribe to ANY Postgres\n>>>> mailing list, using e-mail addresses from newly created TLD-s (e.g.\n>>>> \".rs\").\n>>>>\n>>>> In my case, the e-mail address I'm using (ognjen at etf/bg/ac/YU) is\n>>>> about to expire in 15 days. My new address (that I'm using on other\n>>>> mailing lists - ognjen at etf/bg/ac/RS) is not accepted by postgres\n>>>> mailing lists manager (Majordomo).\n",
"msg_date": "Mon, 15 Mar 2010 23:13:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] [offtopic] Problems subscribing to\n\tPostgres mailing lists"
},
{
"msg_contents": "\nD'oh ...\n\nOgnjen ... can you please send a subscribe request in followed by an email \ndirectly to my account ([email protected])? I've checked the config files, \nand there is nothing explicitly blocking any TLDs that I can see, so am \nwondering if maybe one of the RBLs is trapping you .. wish to check the \nlog file(s) for the subscribe attempt ...\n\nThanks ...\n\nOn Mon, 15 Mar 2010, Robert Haas wrote:\n\n> It's in his post... relevant part snipped below. ...Robert\n>\n> On Mon, Mar 15, 2010 at 9:23 PM, Marc G. Fournier <[email protected]> wrote:\n>> Ognjen ... what domain are you trying to su bscribe from?\n>>\n>> On Mon, 15 Mar 2010, Alvaro Herrera wrote:\n>>>>> Namely, it seems that it is not possible to subscribe to ANY Postgres\n>>>>> mailing list, using e-mail addresses from newly created TLD-s (e.g.\n>>>>> \".rs\").\n>>>>>\n>>>>> In my case, the e-mail address I'm using (ognjen at etf/bg/ac/YU) is\n>>>>> about to expire in 15 days. My new address (that I'm using on other\n>>>>> mailing lists - ognjen at etf/bg/ac/RS) is not accepted by postgres\n>>>>> mailing lists manager (Majordomo).\n>\n\n----\nMarc G. Fournier Hub.Org Hosting Solutions S.A.\[email protected] http://www.hub.org\n\nYahoo:yscrappy Skype: hub.org ICQ:7615664 MSN:[email protected]\n",
"msg_date": "Tue, 16 Mar 2010 00:32:44 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] [offtopic] Problems subscribing to\n\tPostgres mailing lists"
}
] |
[
{
"msg_contents": "Hi,\n\nI am using dblink to read data from a remote data base, insert these data in\nthe local database, update the red data in the remote database then continue\nto do some other work on the local database in the same transaction.\n\nMy question is : Is db link transactional; If the local transaction failed,\nwould the update in the remote data base roll back or if the update in the\nremote data base failed, would the insert in the local data base roll back. \n\nIf not, is there a way to make db link \"transactional\"?\n\nThanks\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi,\nI am using dblink to read data from a remote data base,\ninsert these data in the local database, update the red data in the remote\ndatabase then continue to do some other work on the local database in the same\ntransaction.\nMy question is : Is db link transactional; If the local transaction\nfailed, would the update in the remote data base roll back or if the update in\nthe remote data base failed, would the insert in the local data base roll back.\n\nIf not, is there a way to make db link “transactional”?\nThanks",
"msg_date": "Fri, 12 Mar 2010 17:27:47 +0200",
"msg_from": "\"elias ghanem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is DBLINK transactional"
},
{
"msg_contents": "On Fri, Mar 12, 2010 at 10:27 AM, elias ghanem <[email protected]> wrote:\n> Hi,\n>\n> I am using dblink to read data from a remote data base, insert these data in\n> the local database, update the red data in the remote database then continue\n> to do some other work on the local database in the same transaction.\n>\n> My question is : Is db link transactional; If the local transaction failed,\n> would the update in the remote data base roll back or if the update in the\n> remote data base failed, would the insert in the local data base roll back.\n>\n> If not, is there a way to make db link “transactional”?\n\nof course. You can always explicitly open a transaction on the remote\nside over dblink, do work, and commit it at the last possible moment.\nYour transactions aren't perfectly synchronized...if you crash in the\nprecise moment between committing the remote and the local you can get\nin trouble. The chances of this are extremely remote though.\n\nmerlin\n",
"msg_date": "Fri, 12 Mar 2010 12:07:06 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is DBLINK transactional"
},
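A sketch of the "commit the remote side at the last possible moment" approach described above, using the contrib dblink functions; the connection string, table and column names are invented for illustration:

BEGIN;                                       -- local transaction

SELECT dblink_connect('remote', 'host=192.0.2.10 dbname=src user=app');
SELECT dblink_exec('remote', 'BEGIN');

-- read from the remote side and insert locally
INSERT INTO local_copy (id, payload)
SELECT id, payload
FROM dblink('remote',
            'SELECT id, payload FROM queue WHERE processed = false')
     AS t(id integer, payload text);

-- update the rows just read, still inside the same remote transaction
SELECT dblink_exec('remote',
            'UPDATE queue SET processed = true WHERE processed = false');

-- ... further local work ...

SELECT dblink_exec('remote', 'COMMIT');      -- commit remote as late as possible
SELECT dblink_disconnect('remote');
COMMIT;                                      -- then commit locally

-- The window mentioned above is between the two COMMITs: a crash exactly
-- there leaves the remote update committed while the local work is lost.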
{
"msg_contents": "On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:\n> of course. You can always explicitly open a transaction on the remote\n> side over dblink, do work, and commit it at the last possible moment.\n> Your transactions aren't perfectly synchronized...if you crash in the\n> precise moment between committing the remote and the local you can get\n> in trouble. The chances of this are extremely remote though.\n\nIf you want a better guarantee than that, consider using 2PC.\n\nThe problem with things that are \"extremely remote\" possibilities are\nthat they tend to be less remote than we expect ;)\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 12 Mar 2010 13:54:17 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is DBLINK transactional"
},
{
"msg_contents": "On 13/03/2010 5:54 AM, Jeff Davis wrote:\n> On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:\n>> of course. You can always explicitly open a transaction on the remote\n>> side over dblink, do work, and commit it at the last possible moment.\n>> Your transactions aren't perfectly synchronized...if you crash in the\n>> precise moment between committing the remote and the local you can get\n>> in trouble. The chances of this are extremely remote though.\n>\n> If you want a better guarantee than that, consider using 2PC.\n\nTranslation in case you don't know: 2PC = two phase commit.\n\nNote that you have to monitor \"lost\" transactions that were prepared for \ncommit then abandoned by the controlling app and periodically get rid of \nthem or you'll start having issues.\n\n> The problem with things that are \"extremely remote\" possibilities are\n> that they tend to be less remote than we expect ;)\n\n... and they know just when they can happen despite all the odds to \nmaximise the pain and chaos caused.\n\n--\nCraig Ringer\n",
"msg_date": "Sat, 13 Mar 2010 20:10:49 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is DBLINK transactional"
},
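A possible way to do the monitoring suggested above, using the pg_prepared_xacts system view; the one-hour threshold is arbitrary and has to match what the application considers normal:

-- Prepared transactions that have been sitting around suspiciously long.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
WHERE prepared < now() - interval '1 hour'
ORDER BY prepared;

-- Once confirmed abandoned, resolve each one explicitly by its gid:
-- COMMIT PREPARED 'the-gid';   or   ROLLBACK PREPARED 'the-gid';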
{
"msg_contents": "On Sat, 2010-03-13 at 20:10 +0800, Craig Ringer wrote:\n> On 13/03/2010 5:54 AM, Jeff Davis wrote:\n> > On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:\n> >> of course. You can always explicitly open a transaction on the remote\n> >> side over dblink, do work, and commit it at the last possible moment.\n> >> Your transactions aren't perfectly synchronized...if you crash in the\n> >> precise moment between committing the remote and the local you can get\n> >> in trouble. The chances of this are extremely remote though.\n> >\n> > If you want a better guarantee than that, consider using 2PC.\n> \n> Translation in case you don't know: 2PC = two phase commit.\n> \n> Note that you have to monitor \"lost\" transactions that were prepared for \n> commit then abandoned by the controlling app and periodically get rid of \n> them or you'll start having issues.\n\nAnd you still have the problem of committing one 2PC transaction and\nthen crashing before committing the other and then crashing the\ntransaction monitor before being able to record what crashed :P, though\nthis possibility is even more remote than just crashing between the 2\noriginal commits (dblink and local).\n\nTo get around this fundamental problem, you can actually do async queues\nand remember, what got replayed on the remote side, so if you have\ncrashes on either side, you can simply replay again.\n\n> > The problem with things that are \"extremely remote\" possibilities are\n> > that they tend to be less remote than we expect ;)\n> \n> ... and they know just when they can happen despite all the odds to \n> maximise the pain and chaos caused.\n> \n> --\n> Craig Ringer\n> \n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n",
"msg_date": "Sat, 13 Mar 2010 14:18:55 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is DBLINK transactional"
},
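One hypothetical shape for the async-queue-and-replay idea above: record the intended remote change locally in the same transaction as the local work, and mark it only after the remote side confirms it, so a crash on either side just means replaying the unmarked rows (which requires the remote statements to be idempotent). Table and column names here are invented:

CREATE TABLE remote_outbox (
    id          bigserial   PRIMARY KEY,
    remote_sql  text        NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now(),
    replayed_at timestamptz             -- NULL until confirmed remotely
);

-- A pusher job loops over the unconfirmed rows in order:
SELECT id, remote_sql
FROM remote_outbox
WHERE replayed_at IS NULL
ORDER BY id;

-- For each row: run remote_sql over dblink, then
-- UPDATE remote_outbox SET replayed_at = now() WHERE id = <that id>;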
{
"msg_contents": "[email protected] (Craig Ringer) writes:\n\n> On 13/03/2010 5:54 AM, Jeff Davis wrote:\n>> On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:\n>>> of course. You can always explicitly open a transaction on the remote\n>>> side over dblink, do work, and commit it at the last possible moment.\n>>> Your transactions aren't perfectly synchronized...if you crash in the\n>>> precise moment between committing the remote and the local you can get\n>>> in trouble. The chances of this are extremely remote though.\n>>\n>> If you want a better guarantee than that, consider using 2PC.\n>\n> Translation in case you don't know: 2PC = two phase commit.\n>\n> Note that you have to monitor \"lost\" transactions that were prepared\n> for commit then abandoned by the controlling app and periodically get\n> rid of them or you'll start having issues.\n\nThere can be issues even if they're not abandoned...\n\nNote that prepared transactions establish, and maintain, until removed,\nall the appropriate locks on the underlying tables and tuples.\n\nAs a consequence, maintenance-related activities may be somewhat\nsurprisingly affected.\n\nfoo=# begin; set transaction isolation level serializable;\nBEGIN\nSET\nfoo=# insert into my_table (date_time, hostname, duration, diag) values (now(), 'foo', 1, 2);\nINSERT 0 1\nfoo=# prepare transaction 'foo';\nPREPARE TRANSACTION\n\n[then, I quit the psql session...]\n\nfoo=# select * from pg_locks where relation = (select oid from pg_class where relname = 'my_table');\n-[ RECORD 1 ]------+-----------------\nlocktype | relation\ndatabase | 308021\nrelation | 308380\npage |\ntuple |\nvirtualxid |\ntransactionid |\nclassid |\nobjid |\nobjsubid |\nvirtualtransaction | -1/433653\npid |\nmode | RowExclusiveLock\ngranted | t\n\nIf I try to truncate the table...\n\nfoo=# truncate my_table;\n[hangs, waiting on the lock...]\n\n[looking at another session...]\n\nfoo=# select * from pg_locks where relation = (select oid from pg_class where relname = 'my_table');\n-[ RECORD 1 ]------+--------------------\nlocktype | relation\ndatabase | 308021\nrelation | 308380\npage |\ntuple |\nvirtualxid |\ntransactionid |\nclassid |\nobjid |\nobjsubid |\nvirtualtransaction | -1/433653\npid |\nmode | RowExclusiveLock\ngranted | t\n-[ RECORD 2 ]------+--------------------\nlocktype | relation\ndatabase | 308021\nrelation | 308380\npage |\ntuple |\nvirtualxid |\ntransactionid |\nclassid |\nobjid |\nobjsubid |\nvirtualtransaction | 2/13\npid | 3749\nmode | AccessExclusiveLock\ngranted | f\n\nImmediately upon submitting \"commit prepared 'foo';\", both locks are\nresolved quite quickly.\n\n>> The problem with things that are \"extremely remote\" possibilities are\n>> that they tend to be less remote than we expect ;)\n>\n> ... and they know just when they can happen despite all the odds to\n> maximise the pain and chaos caused.\n\nA lot of these kinds of things only come up as race conditions. The\ntrouble is that a lot of races do wind up synchronizing themselves.\n\nIn sporting events, this is intended and desired; an official fires the\nstarter pistol or activates the horn, or what have you, with the\nintended result that athletes begin very nearly simultaneously. And at\nthe end of Olympic races, their times frequently differ only by\nminiscule intervals.\n\nIn my example up above, there's a possibly unexpected synchronization\npoint; the interweaving of the PREPARE TRANSACTION and TRUNCATE requests\nlead to a complete lock against the table. 
Supposing 15 processes then\ntry accessing that table, they'll be blocked until the existing locks\nget closed out. Which takes place the very instant after the COMMIT\nPREPARED request comes in. At that moment, 15 \"racers\" are released\nvery nearly simultaneously.\n\nIf there is any further mischief to be had in the race, well, they're\nset up to tickle it...\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/nonrdbms.html\n\"Barf, what is all this prissy pedantry? Groups, modules, rings,\nufds, patent-office algebra. Barf!\" -- R. William Gosper\n",
"msg_date": "Mon, 15 Mar 2010 11:40:31 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is DBLINK transactional"
}
] |
[
{
"msg_contents": "Hi all,\n\nmy posting on 2010-01-14 about the performance when writing\nbytea to disk caused a longer discussion. While the fact\nstill holds that the overall postgresql write performance is\nroughly 25% of the serial I/O disk performance this was\ncompensated for my special use case here by doing some other\nnon-postgresql related things in parallel.\n\nNow I cannot optimize my processes any further, however, now\nI am facing another quite unexpected performance issue:\nDeleting rows from my simple table (with the bytea column)\nhaving 16 MB data each, takes roughly as long as writing\nthem!\n\nLittle more detail:\n\n* The table just has 5 unused int columns, a timestamp,\nOIDs, and the bytea column, no indices; the bytea storage\ntype is 'extended', the 16 MB are compressed to approx. the\nhalf.\n\n* All the usual optimizations are done to reach better\nwrite through (pg_xlog on another disk, much tweaks to the\nserver conf etc), however, this does not matter here, since\nnot the absolute performance is of interest here but the\nfact that deleting roughly takes 100% of the writing time.\n\n* I need to write 15 rows of 16 MB each to disk in a maximum\ntime of 15 s, which is performed here in roughly 10 seconds,\nhowever, now I am facing the problem that keeping my\ndatabase tidy (deleting rows) takes another 5-15 s (10s on\naverage), so my process exceeds the maximum time of 15s for\nabout 5s.\n\n* Right now I am deleting like this:\n\nDELETE FROM table WHERE (CURRENT_TIMESTAMP -\nmy_timestamp_column) > interval '2 minutes';\n\nwhile it is planned to have the interval set to 6 hours in\nthe final version (thus creating a FIFO buffer for the\nlatest 6 hours of inserted data; so the FIFO will keep\napprox. 10.000 rows spanning 160-200 GB data).\n\n* This deletion SQL command was simply repeatedly executed\nby pgAdmin while my app kept adding the 16 MB rows.\n\n* Autovacuum is on; I believe I need to keep it on,\notherwise I do not free the disk space, right? If I switch\nit off, the deletion time reduces from the average 10s down\nto 4s.\n\n* I am using server + libpq version 8.2.4, currently on\nWinXP. Will an upgrade to 8.4 help here?\n\nDo you have any other ideas to help me out?\nOh, please...\n\nThank You\n Felix\n\n\n\n\n",
"msg_date": "Sat, 13 Mar 2010 22:17:42 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Deleting bytea, autovacuum, and 8.2/8.4 differences"
},
{
"msg_contents": "Hi there\n\nI'm not an expert on PG's \"toast\" system, but a couple of thoughts inline\nbelow.\n\nCheers\nDave\n\nOn Sat, Mar 13, 2010 at 3:17 PM, [email protected] <\[email protected]> wrote:\n\n> Hi all,\n>\n> my posting on 2010-01-14 about the performance when writing\n> bytea to disk caused a longer discussion. While the fact\n> still holds that the overall postgresql write performance is\n> roughly 25% of the serial I/O disk performance this was\n> compensated for my special use case here by doing some other\n> non-postgresql related things in parallel.\n>\n> Now I cannot optimize my processes any further, however, now\n> I am facing another quite unexpected performance issue:\n> Deleting rows from my simple table (with the bytea column)\n> having 16 MB data each, takes roughly as long as writing\n> them!\n>\n> Little more detail:\n>\n> * The table just has 5 unused int columns, a timestamp,\n> OIDs, and the bytea column, *no indices*; the bytea storage\n> type is 'extended', the 16 MB are compressed to approx. the\n> half.\n>\n\nWhy no indices?\n\n\n>\n> * All the usual optimizations are done to reach better\n> write through (pg_xlog on another disk, much tweaks to the\n> server conf etc), however, this does not matter here, since\n> not the absolute performance is of interest here but the\n> fact that deleting roughly takes 100% of the writing time.\n>\n> * I need to write 15 rows of 16 MB each to disk in a maximum\n> time of 15 s, which is performed here in roughly 10 seconds,\n> however, now I am facing the problem that keeping my\n> database tidy (deleting rows) takes another 5-15 s (10s on\n> average), so my process exceeds the maximum time of 15s for\n> about 5s.\n>\n> * Right now I am deleting like this:\n>\n> DELETE FROM table WHERE (CURRENT_TIMESTAMP -\n> my_timestamp_column) > interval '2 minutes';\n>\n\nYou *need* an index on my_timestamp_column\n\n\n>\n> while it is planned to have the interval set to 6 hours in\n> the final version (thus creating a FIFO buffer for the\n> latest 6 hours of inserted data; so the FIFO will keep\n> approx. 10.000 rows spanning 160-200 GB data).\n>\n\nThat's not the way to keep a 6 hour rolling buffer ... what you need to do\nis run the delete frequently, with *interval '6 hours'* in the SQL acting\nas the cutoff.\n\nIf you really do want to drop the entire table contents before refilling it,\ndo a *DROP TABLE* and recreate it.\n\n\n> * This deletion SQL command was simply repeatedly executed\n> by pgAdmin while my app kept adding the 16 MB rows.\n>\n\nAre you sure you are timing the delete, and not pgAdmin re-populating some\nkind of buffer?\n\n\n>\n> * Autovacuum is on; I believe I need to keep it on,\n> otherwise I do not free the disk space, right? If I switch\n> it off, the deletion time reduces from the average 10s down\n> to 4s.\n>\n\nYou may be running autovaccum too aggressively, it may be interfering with\nI/O to the tables.\n\nPostgres vacuuming does not free disk space (in the sense of returning it to\nthe OS), it removes old versions of rows that have been UPDATEd or DELETEd\nand makes that space in the table file available for new writes.\n\n\n> * I am using server + libpq version 8.2.4, currently on\n> WinXP. Will an upgrade to 8.4 help here?\n>\n\n8.4 has a lot of performance improvements. It's definitely worth a shot. 
I'd\nalso consider switching to another OS where you can use a 64-bit version of\nPG and a much bigger buffer cache.\n\n\n> Do you have any other ideas to help me out?\n> Oh, please...\n>\n> Thank You\n> Felix\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi thereI'm not an expert on PG's \"toast\" system, but a couple of thoughts inline below.CheersDaveOn Sat, Mar 13, 2010 at 3:17 PM, [email protected] <[email protected]> wrote:\nHi all,\n\nmy posting on 2010-01-14 about the performance when writing\nbytea to disk caused a longer discussion. While the fact\nstill holds that the overall postgresql write performance is\nroughly 25% of the serial I/O disk performance this was\ncompensated for my special use case here by doing some other\nnon-postgresql related things in parallel.\n\nNow I cannot optimize my processes any further, however, now\nI am facing another quite unexpected performance issue:\nDeleting rows from my simple table (with the bytea column)\nhaving 16 MB data each, takes roughly as long as writing\nthem!\n\nLittle more detail:\n\n* The table just has 5 unused int columns, a timestamp,\nOIDs, and the bytea column, no indices; the bytea storage\ntype is 'extended', the 16 MB are compressed to approx. the\nhalf. Why no indices? \n\n* All the usual optimizations are done to reach better\nwrite through (pg_xlog on another disk, much tweaks to the\nserver conf etc), however, this does not matter here, since\nnot the absolute performance is of interest here but the\nfact that deleting roughly takes 100% of the writing time.\n\n* I need to write 15 rows of 16 MB each to disk in a maximum\ntime of 15 s, which is performed here in roughly 10 seconds,\nhowever, now I am facing the problem that keeping my\ndatabase tidy (deleting rows) takes another 5-15 s (10s on\naverage), so my process exceeds the maximum time of 15s for\nabout 5s.\n\n* Right now I am deleting like this:\n\nDELETE FROM table WHERE (CURRENT_TIMESTAMP -\nmy_timestamp_column) > interval '2 minutes';You need an index on my_timestamp_column \n\nwhile it is planned to have the interval set to 6 hours in\nthe final version (thus creating a FIFO buffer for the\nlatest 6 hours of inserted data; so the FIFO will keep\napprox. 10.000 rows spanning 160-200 GB data).That's not the way to keep a 6 hour rolling buffer ... what you need to do is run the delete frequently, with interval '6 hours' in the SQL acting as the cutoff.\nIf you really do want to drop the entire table contents before refilling it, do a DROP TABLE and recreate it.\n\n* This deletion SQL command was simply repeatedly executed\nby pgAdmin while my app kept adding the 16 MB rows.Are you sure you are timing the delete, and not pgAdmin re-populating some kind of buffer? \n\n* Autovacuum is on; I believe I need to keep it on,\notherwise I do not free the disk space, right? If I switch\nit off, the deletion time reduces from the average 10s down\nto 4s.You may be running autovaccum too aggressively, it may be interfering with I/O to the tables. Postgres vacuuming does not free disk space (in the sense of returning it to the OS), it removes old versions of rows that have been UPDATEd or DELETEd and makes that space in the table file available for new writes.\n\n\n* I am using server + libpq version 8.2.4, currently on\nWinXP. Will an upgrade to 8.4 help here?8.4 has a lot of performance improvements. It's definitely worth a shot. 
I'd also consider switching to another OS where you can use a 64-bit version of PG and a much bigger buffer cache.\n\n\nDo you have any other ideas to help me out?\nOh, please...\n\nThank You\n Felix\n\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 13 Mar 2010 21:04:41 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences"
},
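Concretely, the index suggested above only helps if the timestamp column stands alone on one side of the comparison; a sketch using the column name from the original post and a stand-in table name:

CREATE INDEX my_table_ts_idx ON my_table (my_timestamp_column);

-- The original predicate "(CURRENT_TIMESTAMP - my_timestamp_column) > interval ..."
-- cannot use that index; rewritten so the planner can range-scan it:
DELETE FROM my_table
WHERE my_timestamp_column < CURRENT_TIMESTAMP - interval '6 hours';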
{
"msg_contents": "Hi Dave,\n\nthank you for your answers! Here some comments:\n\nDave Crooke:\n\n> > * The table just has 5 unused int columns, a timestamp,\n> > OIDs, and the bytea column, *no indices*; the bytea storage\n> > type is 'extended', the 16 MB are compressed to approx. the\n> > half.\n> >\n> \n> Why no indices?\n\nSimply because the test case had just < 50 rows (deleting\nall rows older than 2 minues). Later on I would use indices.\n\n\n> > while it is planned to have the interval set to 6 hours in\n> > the final version (thus creating a FIFO buffer for the\n> > latest 6 hours of inserted data; so the FIFO will keep\n> > approx. 10.000 rows spanning 160-200 GB data).\n> >\n> \n> That's not the way to keep a 6 hour rolling buffer ... what you need to do\n> is run the delete frequently, with *interval '6 hours'* in the SQL acting\n> as the cutoff.\n\nIn fact the delete was run frequently to cut everything\nolder than 6 hours *immediately*.\n\n\n> If you really do want to drop the entire table contents before refilling it,\n> do a *DROP TABLE* and recreate it.\n\nNo, I do not want to drop the whole table.\n\n\n> > * This deletion SQL command was simply repeatedly executed\n> > by pgAdmin while my app kept adding the 16 MB rows.\n> >\n> \n> Are you sure you are timing the delete, and not pgAdmin re-populating some\n> kind of buffer?\n\nQuite sure, yes. Because I launched just the delete command\nin pgAdmin while the rest was executed by my application\noutside pgAdmin, of course.\n\n\n\n> > * Autovacuum is on; I believe I need to keep it on,\n> > otherwise I do not free the disk space, right? If I switch\n> > it off, the deletion time reduces from the average 10s down\n> > to 4s.\n> >\n> \n> You may be running autovaccum too aggressively, it may be interfering with\n> I/O to the tables.\n\nHm, so would should I change then? I wonder if it helps to\nrun autovacuum less aggressive if there will not be a\nsituation were the whole process is stopped for a while. But\nI'd like to understand what to change here.\n\n\n> 8.4 has a lot of performance improvements. It's definitely worth a shot. I'd\n> also consider switching to another OS where you can use a 64-bit version of\n> PG and a much bigger buffer cache.\n\nO.k., I'll give it a try.\n\n\nThank You.\n Felix\n\n",
"msg_date": "Sun, 14 Mar 2010 17:31:05 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences"
},
{
"msg_contents": "A quick test:\n-----\n1. create table x1(x int, y bytea);\n2. Load some data say with python:\ncp /opt/java/src.zip ~/tmp/a.dat (19MB)\n##\nimport psycopg2\nconn = psycopg2.connect(\"dbname='test' user='*****' password='****'\nhost='127.0.0.1'\");\nconn.cursor().execute(\"INSERT INTO x1 VALUES (1, %s)\",\n(psycopg2.Binary(open(\"a.dat\").read()),))\nconn.commit()\n##\n3. create table x2(x int, y bytea);\n4. Copy table x1 100 times to x2 (1.9GB) and monitor/measure IO:\ninsert into x2 select a x, y from generate_series(1,100) a, x1;\n\nResults:\n-----------\nOn Linux 2.6.32 with an ext3 file system on one 15K rpm disk, we saw with\nSystemTap that the source 1.9GB (19MB x 100) resulted in 5GB of actual disk\nIO and took 61 seconds (52 CPU + 9 sleep/wait for IO)\n\nDeletion (delete from x2) took 32 seconds with 12 seconds CPU and 20 sec\nsleep + wait for IO. Actual disk IO was about 4GB.\n\nSince Pg does not use the concept of rollback segments, it is unclear why\ndeletion produces so much disk IO (4GB).\n\nVJ\n\n\n\nOn Sat, Mar 13, 2010 at 5:17 PM, [email protected] <\[email protected]> wrote:\n\n> Hi all,\n>\n> my posting on 2010-01-14 about the performance when writing\n> bytea to disk caused a longer discussion. While the fact\n> still holds that the overall postgresql write performance is\n> roughly 25% of the serial I/O disk performance this was\n> compensated for my special use case here by doing some other\n> non-postgresql related things in parallel.\n>\n> Now I cannot optimize my processes any further, however, now\n> I am facing another quite unexpected performance issue:\n> Deleting rows from my simple table (with the bytea column)\n> having 16 MB data each, takes roughly as long as writing\n> them!\n>\n> Little more detail:\n>\n> * The table just has 5 unused int columns, a timestamp,\n> OIDs, and the bytea column, no indices; the bytea storage\n> type is 'extended', the 16 MB are compressed to approx. the\n> half.\n>\n> * All the usual optimizations are done to reach better\n> write through (pg_xlog on another disk, much tweaks to the\n> server conf etc), however, this does not matter here, since\n> not the absolute performance is of interest here but the\n> fact that deleting roughly takes 100% of the writing time.\n>\n> * I need to write 15 rows of 16 MB each to disk in a maximum\n> time of 15 s, which is performed here in roughly 10 seconds,\n> however, now I am facing the problem that keeping my\n> database tidy (deleting rows) takes another 5-15 s (10s on\n> average), so my process exceeds the maximum time of 15s for\n> about 5s.\n>\n> * Right now I am deleting like this:\n>\n> DELETE FROM table WHERE (CURRENT_TIMESTAMP -\n> my_timestamp_column) > interval '2 minutes';\n>\n> while it is planned to have the interval set to 6 hours in\n> the final version (thus creating a FIFO buffer for the\n> latest 6 hours of inserted data; so the FIFO will keep\n> approx. 10.000 rows spanning 160-200 GB data).\n>\n> * This deletion SQL command was simply repeatedly executed\n> by pgAdmin while my app kept adding the 16 MB rows.\n>\n> * Autovacuum is on; I believe I need to keep it on,\n> otherwise I do not free the disk space, right? If I switch\n> it off, the deletion time reduces from the average 10s down\n> to 4s.\n>\n> * I am using server + libpq version 8.2.4, currently on\n> WinXP. 
Will an upgrade to 8.4 help here?\n>\n> Do you have any other ideas to help me out?\n> Oh, please...\n>\n> Thank You\n> Felix\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Mon, 15 Mar 2010 09:54:13 -0400",
"msg_from": "VJK <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> wrote:\n \n> Simply because the test case had just < 50 rows (deleting\n> all rows older than 2 minues). Later on I would use indices.\n \nRunning a performance test with 50 rows without indexes and\nextrapolating to a much larger data set with indexes won't tell you\nmuch. The plans chosen by the PostgreSQL optimizer will probably be\ncompletely different, and the behavior of the caches (at all levels)\nwill be very different.\n \n>> > while it is planned to have the interval set to 6 hours in\n>> > the final version (thus creating a FIFO buffer for the\n>> > latest 6 hours of inserted data; so the FIFO will keep\n>> > approx. 10.000 rows spanning 160-200 GB data).\n \nThis might lend itself to partitioning. Dropping a partition\ncontaining data older than six hours would be very fast. Without\nknowing what kinds of queries you want to run on the data, it's hard\nto predict the performance impact on your other operations, though.\n \n>> > * Autovacuum is on; I believe I need to keep it on,\n>> > otherwise I do not free the disk space, right? If I switch\n>> > it off, the deletion time reduces from the average 10s down\n>> > to 4s.\n>> >\n>> \n>> You may be running autovaccum too aggressively, it may be\n>> interfering with I/O to the tables.\n> \n> Hm, so would should I change then? I wonder if it helps to\n> run autovacuum less aggressive if there will not be a\n> situation were the whole process is stopped for a while. But\n> I'd like to understand what to change here.\n \nI'd be very careful about this, I've seen performance problems more\noften (and more dramatic) from not running it aggressively enough. \nMonitor performance and bloat closely when you adjust this, and make\nsure the data and load are modeling what you expect in production,\nor you'll tune for the wrong environment and likely make matters\nworse for the environment that really matters.\n \n-Kevin\n",
"msg_date": "Mon, 15 Mar 2010 09:01:41 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4\n\t differences"
},
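On 8.3/8.4 the partitioning idea would be implemented with inheritance plus constraint exclusion. A rough sketch, with made-up names and an hourly granularity chosen only for illustration:

```sql
-- Parent table carries the schema but holds no data itself.
CREATE TABLE fifo_parent (
    my_timestamp_column timestamptz NOT NULL,
    details             bytea       NOT NULL
);

-- One child table per hour; the CHECK constraint lets the planner skip
-- partitions that cannot match (requires constraint_exclusion enabled).
CREATE TABLE fifo_2010_03_15_10 (
    CHECK (my_timestamp_column >= '2010-03-15 10:00'
       AND my_timestamp_column <  '2010-03-15 11:00')
) INHERITS (fifo_parent);

-- The application (or an insert trigger on the parent) targets the
-- child for the current hour directly.
INSERT INTO fifo_2010_03_15_10 VALUES ('2010-03-15 10:30', 'payload');

-- Expiry becomes a catalog operation instead of DELETE + VACUUM:
DROP TABLE fifo_2010_03_15_10;  -- once this hour's data is older than six hours
```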
{
"msg_contents": "VJK <[email protected]> wrote:\n \n> the source 1.9GB (19MB x 100) resulted in 5GB of actual disk IO\n \n> Deletion (delete from x2) took 32 seconds with 12 seconds CPU and\n> 20 sec sleep + wait for IO. Actual disk IO was about 4GB.\n> \n> Since Pg does not use the concept of rollback segments, it is\n> unclear why deletion produces so much disk IO (4GB).\n \nOne delete would mark the xmax of the tuple, so that transactions\nwithout that transaction ID in their visible set would ignore it. \nThe next table scan would set hint bits, which would store\ninformation within the tuple to indicate that the deleting\ntransaction successfully committed, then the vacuum would later wake\nup and rewrite the page with the deleted tuples removed.\n \nIf you have enough battery backed cache space on a hardware RAID\ncontroller card, and that cache is configured in write-back mode,\nmany of these writes might be combined -- the original delete, the\nhint bit write, and the vacuum might all combine into one physical\nwrite to disk. What does your disk system look like, exactly?\n \n-Kevin\n",
"msg_date": "Mon, 15 Mar 2010 09:12:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4\n\t differences"
},
{
"msg_contents": "VJK wrote:\n> Since Pg does not use the concept of rollback segments, it is unclear \n> why deletion produces so much disk IO (4GB).\n\nWith PostgreSQL's write-ahead log, MVCC and related commit log, and \ntransactional DDL features, there's actually even more overhead that can \nbe involved than a simple rollback segment design when you delete things:\n\nhttp://www.postgresql.org/docs/current/static/wal.html\nhttp://www.postgresql.org/docs/current/static/mvcc-intro.html\nhttp://wiki.postgresql.org/wiki/Hint_Bits\nhttp://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis\n\nOne fun thing to try here is to increase shared_buffers and \ncheckpoint_segments, then see if the total number of writes go down. \nThe defaults for both are really low, which makes buffer page writes \nthat might otherwise get combined as local memory changes instead get \npushed constantly to disk.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 15 Mar 2010 10:42:42 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences"
},
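For reference, the current values can be checked from any session, and changing them on 8.x means editing postgresql.conf (shared_buffers additionally needs a restart). The figures in the comments are only illustrative starting points, not recommendations from this thread:

```sql
SHOW shared_buffers;        -- typically only a few tens of MB by default
SHOW checkpoint_segments;   -- default is 3 (16MB WAL segments)

-- Illustrative postgresql.conf changes to experiment with:
--   shared_buffers      = 512MB   -- needs a server restart
--   checkpoint_segments = 32      -- allows much less frequent checkpoints
```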
{
"msg_contents": "Inline:\n\nOn Mon, Mar 15, 2010 at 10:12 AM, Kevin Grittner <\[email protected]> wrote:\n\n> VJK <[email protected]> wrote:\n>\n> > the source 1.9GB (19MB x 100) resulted in 5GB of actual disk IO\n>\n> > Deletion (delete from x2) took 32 seconds with 12 seconds CPU and\n> > 20 sec sleep + wait for IO. Actual disk IO was about 4GB.\n> >\n> > Since Pg does not use the concept of rollback segments, it is\n> > unclear why deletion produces so much disk IO (4GB).\n>\n> One delete would mark the xmax of the tuple, so that transactions\n> without that transaction ID in their visible set would ignore it.\n> The next table scan would set hint bits, which would store\n> information within the tuple to indicate that the deleting\n> transaction successfully committed, then the vacuum would later wake\n> up and rewrite the page with the deleted tuples removed.\n>\n\nI did not observe any vacuuming activity during the deletion process.\nHowever, even with vacuuming, 4GB of disk IO is rather excessive for\ndeleting 1.9GB of data.\n\n>\n> If you have enough battery backed cache space on a hardware RAID\n> controller card, and that cache is configured in write-back mode,\n> many of these writes might be combined -- the original delete, the\n> hint bit write, and the vacuum might all combine into one physical\n> write to disk.\n\n\nThey are combined alright, I see between 170-200 MB/s IO spikes on the iotop\nscreen which means writes to the cache -- the disk itself is capable of\n110(ic)-160(oc) MB/s only, with sequential 1MB block size writes.\n\nWhat does your disk system look like, exactly?\n>\n\nAs I wrote before, it's actually a single 15K rpm mirrored pair that you\ncan look at as a single disk for performance purposes. It is connected\nthrough a PERC6i controller to a Dell 2950.\n\nThe disk subsystem is not really important here. What is really\ninteresting, why so much IO is generated during the deletion process ?\n\n>\n> -Kevin\n>\n\nInline:On Mon, Mar 15, 2010 at 10:12 AM, Kevin Grittner <[email protected]> wrote:\nVJK <[email protected]> wrote:\n\n> the source 1.9GB (19MB x 100) resulted in 5GB of actual disk IO\n\n> Deletion (delete from x2) took 32 seconds with 12 seconds CPU and\n> 20 sec sleep + wait for IO. Actual disk IO was about 4GB.\n>\n> Since Pg does not use the concept of rollback segments, it is\n> unclear why deletion produces so much disk IO (4GB).\n\nOne delete would mark the xmax of the tuple, so that transactions\nwithout that transaction ID in their visible set would ignore it.\nThe next table scan would set hint bits, which would store\ninformation within the tuple to indicate that the deleting\ntransaction successfully committed, then the vacuum would later wake\nup and rewrite the page with the deleted tuples removed. I did not observe any vacuuming activity during the deletion process. However, even with vacuuming, 4GB of disk IO is rather excessive for deleting 1.9GB of data.\n\n\nIf you have enough battery backed cache space on a hardware RAID\ncontroller card, and that cache is configured in write-back mode,\nmany of these writes might be combined -- the original delete, the\nhint bit write, and the vacuum might all combine into one physical\nwrite to disk. 
They are combined alright, I see between 170-200 MB/s IO spikes on the iotop screen which means writes to the cache -- the disk itself is capable of 110(ic)-160(oc) MB/s only, with sequential 1MB block size writes.\nWhat does your disk system look like, exactly?As I wrote before, it's actually a single 15K rpm mirrored pair that you can look at as a single disk for performance purposes. It is connected through a PERC6i controller to a Dell 2950. \nThe disk subsystem is not really important here. What is really interesting, why so much IO is generated during the deletion process ?\n\n-Kevin",
"msg_date": "Mon, 15 Mar 2010 10:46:05 -0400",
"msg_from": "VJK <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> VJK wrote:\n>> Since Pg does not use the concept of rollback segments, it is unclear \n>> why deletion produces so much disk IO (4GB).\n\n> With PostgreSQL's write-ahead log, MVCC and related commit log, and \n> transactional DDL features, there's actually even more overhead that can \n> be involved than a simple rollback segment design when you delete things:\n\nFor an example like this one, you have to keep in mind that the\ntoast-table rows for the large bytea value have to be marked deleted,\ntoo. Also, since I/O happens in units of pages, the I/O volume to\ndelete a tuple is just as much as the I/O to create it. (The WAL\nentry for deletion might be smaller, but that's all.) So it is entirely\nunsurprising that \"DELETE FROM foo\" is about as expensive as filling the\ntable initially.\n\nIf deleting a whole table is significant for you performance-wise,\nyou might look into using TRUNCATE instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Mar 2010 10:53:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences "
},
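For the case where the whole table contents really can go at once (rather than trimming to the last six hours), the suggestion above is simply the following; the table name is again a placeholder:

```sql
-- Reclaims the heap and its TOAST table immediately; note that TRUNCATE
-- cannot take a WHERE clause, so it does not replace the rolling delete.
TRUNCATE TABLE fifo_table;
```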
{
"msg_contents": "On Mon, 15 Mar 2010, Tom Lane wrote:\n> For an example like this one, you have to keep in mind that the\n> toast-table rows for the large bytea value have to be marked deleted,\n> too. Also, since I/O happens in units of pages, the I/O volume to\n> delete a tuple is just as much as the I/O to create it. (The WAL\n> entry for deletion might be smaller, but that's all.) So it is entirely\n> unsurprising that \"DELETE FROM foo\" is about as expensive as filling the\n> table initially.\n>\n> If deleting a whole table is significant for you performance-wise,\n> you might look into using TRUNCATE instead.\n\nWhat are the implications of using TRUNCATE on a table that has TOASTed \ndata? Is TOAST all stored in one single table, or is it split up by owner \ntable/column name? Might you still end up with a normal delete operation \non the TOAST table when performing a TRUNCATE on the owner table?\n\nMatthew\n\n-- \nsed -e '/^[when][coders]/!d;/^...[discover].$/d;/^..[real].[code]$/!d\n' <`locate dict/words`\n",
"msg_date": "Mon, 15 Mar 2010 15:18:07 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences "
},
{
"msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Mon, 15 Mar 2010, Tom Lane wrote:\n>> If deleting a whole table is significant for you performance-wise,\n>> you might look into using TRUNCATE instead.\n\n> Might you still end up with a normal delete operation \n> on the TOAST table when performing a TRUNCATE on the owner table?\n\nNo, you get a TRUNCATE on its toast table too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Mar 2010 11:26:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences "
},
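To see how much of a table actually lives in its TOAST relation (and therefore how many extra pages a DELETE has to touch), something along these lines works on 8.3/8.4; "bigtable" is a stand-in for the real table name:

```sql
-- Which TOAST table backs this table?
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE oid = 'bigtable'::regclass;

-- Heap size versus total size including TOAST and indexes.
SELECT pg_size_pretty(pg_relation_size('bigtable'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('bigtable')) AS total_incl_toast;
```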
{
"msg_contents": "Inline:\n\nOn Mon, Mar 15, 2010 at 10:42 AM, Greg Smith <[email protected]> wrote:\n\n> VJK wrote:\n>\n>> Since Pg does not use the concept of rollback segments, it is unclear why\n>> deletion produces so much disk IO (4GB).\n>>\n>\n> With PostgreSQL's write-ahead log, MVCC and related commit log, and\n> transactional DDL features, there's actually even more overhead that can be\n> involved than a simple rollback segment design when you delete things:\n>\n\nThere does not appear to be much WAL activity. Here's the insertion of 100\nrows as seen by iotop:\n 4.39 G 0.00 % 9.78 % postgres: writer process\n 5.34 G 0.00 % 5.93 % postgres: postgr~0.5.93(1212) idle\n 27.84 M 0.00 % 1.77 % postgres: wal writer process\n 144.00 K 0.00 % 0.00 % postgres: stats collector process\n 0.00 B 0.00 % 0.00 % postgres: autova~ launcher process\n 0.00 B 0.00 % 0.00 % postgres: postgr~0.5.93(4632) idle\n\n\n\n.. and the deletion:\n288.18 M 0.00 % 37.80 % postgres: writer process\n 3.41 G 0.00 % 19.76 % postgres: postgr~0.5.93(1212) DELETE\n 27.27 M 0.00 % 3.18 % postgres: wal writer process\n 72.00 K 0.00 % 0.03 % postgres: stats collector process\n 0.00 B 0.00 % 0.00 % postgres: autova~ launcher process\n 0.00 B 0.00 % 0.00 % postgres: postgr~0.5.93(4632) idle\n\nSo, the original 1.9 GB of useful data generate about 10GB of IO, 5 of which\nend up being written to the disk The deletion generates about 3.8 GB of IO\nall of which results in disk IO. WAL activity is about 27MB in both cases.\n\n\n>\n> http://www.postgresql.org/docs/current/static/wal.html\n> http://www.postgresql.org/docs/current/static/mvcc-intro.html\n> http://wiki.postgresql.org/wiki/Hint_Bits\n>\n> http://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis\n>\n>\nI read all of the above, but it does not really clarify why deletion\ngenerates so much IO.\n\n\n> One fun thing to try here is to increase shared_buffers and\n> checkpoint_segments, then see if the total number of writes go down. The\n> defaults for both are really low, which makes buffer page writes that might\n> otherwise get combined as local memory changes instead get pushed constantly\n> to disk.\n>\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n>\n\nInline:On Mon, Mar 15, 2010 at 10:42 AM, Greg Smith <[email protected]> wrote:\nVJK wrote:\n\nSince Pg does not use the concept of rollback segments, it is unclear why deletion produces so much disk IO (4GB).\n\n\nWith PostgreSQL's write-ahead log, MVCC and related commit log, and transactional DDL features, there's actually even more overhead that can be involved than a simple rollback segment design when you delete things:\nThere does not appear to be much WAL activity. Here's the insertion of 100 rows as seen by iotop: 4.39 G 0.00 % 9.78 % postgres: writer process 5.34 G 0.00 % 5.93 % postgres: postgr~0.5.93(1212) idle \n 27.84 M 0.00 % 1.77 % postgres: wal writer process 144.00 K 0.00 % 0.00 % postgres: stats collector process 0.00 B 0.00 % 0.00 % postgres: autova~ launcher process 0.00 B 0.00 % 0.00 % postgres: postgr~0.5.93(4632) idle \n .. 
and the deletion:288.18 M 0.00 % 37.80 % postgres: writer process 3.41 G 0.00 % 19.76 % postgres: postgr~0.5.93(1212) DELETE\n 27.27 M 0.00 % 3.18 % postgres: wal writer process 72.00 K 0.00 % 0.03 % postgres: stats collector process 0.00 B 0.00 % 0.00 % postgres: autova~ launcher process 0.00 B 0.00 % 0.00 % postgres: postgr~0.5.93(4632) idle \nSo, the original 1.9 GB of useful data generate about 10GB of IO, 5 of which end up being written to the disk The deletion generates about 3.8 GB of IO all of which results in disk IO. WAL activity is about 27MB in both cases.\n \n\nhttp://www.postgresql.org/docs/current/static/wal.html\nhttp://www.postgresql.org/docs/current/static/mvcc-intro.html\nhttp://wiki.postgresql.org/wiki/Hint_Bits\nhttp://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis\nI read all of the above, but it does not really clarify why deletion generates so much IO. \n\nOne fun thing to try here is to increase shared_buffers and checkpoint_segments, then see if the total number of writes go down. The defaults for both are really low, which makes buffer page writes that might otherwise get combined as local memory changes instead get pushed constantly to disk.\n\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us",
"msg_date": "Mon, 15 Mar 2010 11:28:15 -0400",
"msg_from": "VJK <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences"
},
{
"msg_contents": "On Mon, Mar 15, 2010 at 10:53 AM, Tom Lane <[email protected]> wrote:\n\n> Greg Smith <[email protected]> writes:\n> > VJK wrote:\n> >> Since Pg does not use the concept of rollback segments, it is unclear\n> >> why deletion produces so much disk IO (4GB).\n>\n> For an example like this one, you have to keep in mind that the\n> toast-table rows for the large bytea value have to be marked deleted,\n> too. Also, since I/O happens in units of pages, the I/O volume to\n> delete a tuple is just as much as the I/O to create it.\n>\n\nThat makes sense.\n\n\n> regards, tom lane\n>\n\nOn Mon, Mar 15, 2010 at 10:53 AM, Tom Lane <[email protected]> wrote:\nGreg Smith <[email protected]> writes:\n> VJK wrote:\n>> Since Pg does not use the concept of rollback segments, it is unclear\n>> why deletion produces so much disk IO (4GB).\n\nFor an example like this one, you have to keep in mind that the\ntoast-table rows for the large bytea value have to be marked deleted,\ntoo. Also, since I/O happens in units of pages, the I/O volume to\ndelete a tuple is just as much as the I/O to create it. That makes sense. \n\n regards, tom lane",
"msg_date": "Mon, 15 Mar 2010 12:37:57 -0400",
"msg_from": "VJK <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting bytea, autovacuum, and 8.2/8.4 differences"
}
] |
[
{
"msg_contents": "Evening all,\n\nMaiden post to this list. I've a performance problem for which I'm \nuncharacteristically in need of good advice.\n\nI have a read-mostly database using 51GB on an ext3 filesystem on a \nserver running Ubuntu 9.04 and PG 8.3. Forty hours ago I started a \nplain-format dump, compressed with -Z9, and it is still running, having \nproduced 32GB of an expected 40 - 45GB of compressed output. CPU load \nis 100% on the core executing pg_dump, and negligible on all others \ncores. The system is read-mostly, and largely idle. The exact \ninvocation was:\n\n nohup time pg_dump -f database.dmp -Z9 database\n\n\nI presumed pg_dump was CPU-bound because of gzip compression, but a test \nI ran makes that seem unlikely: Copying the database files to a USB hard \ndrive (cp -r /var/lib/postgresql/8.3/main /mnt) took 25 minutes; and \ngzip-compressing the first first 500MB of the dumpfile (dd \nif=database.dmp bs=64k count=16000 | time gzip -9 > dd.gz) took one \nminute and 15 seconds; to gzip the complete 51GB set of files should \ntake no more than 90 minutes.\n\nThe database is unremarkable except for one table, the biggest, which \ncontains a bytea column, and which pg_dump has been outputting for at \nleast 39 hours. That table has 276,292 rows, in which the bytea for \n140,695 contains PDFs totalling 32,791MB, and the bytea for the \nremaining 135,597 rows contains PostScript totalling 602MB. I think \nI've never done a full vacuum; only ever auto-vacuum; however I did copy \nthe table to new, deleted the old, and renamed, which I expect is \neffectively equivalent for it; which is described by the following schema:\n\n Table \"database.bigtable\"\n Column | Type | Modifiers \n--------------+-------------------+--------------------\n headerid | integer | not null\n member | numeric(10,0) | not null\n postcode | character varying |\n bsp | character varying |\n details | bytea | not null\n membertypeid | integer | not null default 0\nIndexes:\n \"bigtable_pkey\" PRIMARY KEY, btree (headerid, member)\n \"bigtable_member\" btree (member)\nForeign-key constraints:\n \"bigtable_headerid_fkey\" FOREIGN KEY (headerid) REFERENCES header(headerid)\n\n\nThe following describes the application environment:\n\n * PostgreSQL 8.3.8 on i486-pc-linux-gnu, compiled by GCC\n gcc-4.3.real (Ubuntu 4.3.3-5ubuntu4) 4.3.3\n * pg_dump (PostgreSQL) 8.3.8\n * Ubuntu 9.04\n * Linux server 2.6.28-13-server #45-Ubuntu SMP Tue Jun 30 20:51:10\n UTC 2009 i686 GNU/Linux\n * Intel(R) Xeon(R) CPU E5430 @ 2.66GHz (4 core)\n * RAM 2GB\n * 2 SATA, 7200rpm disks with hardware RAID-1 (IBM ServeRAID)\n\n\nMy question is, what's going on?\n\nThanks,\n\nDavid\n\n\n\n\n\nEvening all,\n\nMaiden post to this list. I've a performance problem for which I'm\nuncharacteristically in need of good advice.\n\nI have a read-mostly database using 51GB on an ext3 filesystem on a\nserver running Ubuntu 9.04 and PG 8.3. Forty hours ago I started a\nplain-format dump, compressed with -Z9, and it is still running, having\nproduced 32GB of an expected 40 - 45GB of compressed output. CPU load\nis 100% on the core executing pg_dump, and negligible on all others\ncores. The system is read-mostly, and largely idle. 
The exact\ninvocation was:\n\n nohup time pg_dump -f database.dmp -Z9 database\n\nI presumed pg_dump was CPU-bound because of gzip compression, but a\ntest I ran makes that seem unlikely: Copying the database files to a\nUSB hard drive (cp -r /var/lib/postgresql/8.3/main /mnt) took 25\nminutes; and gzip-compressing the first first 500MB of the dumpfile (dd\nif=database.dmp bs=64k count=16000 | time gzip -9 > dd.gz) took one\nminute and 15 seconds; to gzip the complete 51GB set of files should\ntake no more than 90 minutes.\n\nThe database is unremarkable except for one table, the biggest, which\ncontains a bytea column, and which pg_dump has been outputting for at\nleast 39 hours. That table has 276,292 rows, in which the bytea for\n140,695 contains PDFs totalling 32,791MB, and the bytea for the\nremaining 135,597 rows contains PostScript totalling 602MB. I think\nI've never done a full vacuum; only ever auto-vacuum; however I did\ncopy the table to new, deleted the old, and renamed, which I expect is\neffectively equivalent for it; which is described by the following\nschema:\n\n Table \"database.bigtable\"\n Column | Type | Modifiers \n--------------+-------------------+--------------------\n headerid | integer | not null\n member | numeric(10,0) | not null\n postcode | character varying |\n bsp | character varying |\n details | bytea | not null\n membertypeid | integer | not null default 0\nIndexes:\n \"bigtable_pkey\" PRIMARY KEY, btree (headerid, member)\n \"bigtable_member\" btree (member)\nForeign-key constraints:\n \"bigtable_headerid_fkey\" FOREIGN KEY (headerid) REFERENCES header(headerid)\n\n\nThe following describes the application environment:\n\nPostgreSQL 8.3.8 on i486-pc-linux-gnu, compiled by GCC\ngcc-4.3.real (Ubuntu 4.3.3-5ubuntu4) 4.3.3\npg_dump (PostgreSQL) 8.3.8\nUbuntu 9.04\nLinux server 2.6.28-13-server #45-Ubuntu SMP Tue Jun 30 20:51:10\nUTC 2009 i686 GNU/Linux\nIntel(R) Xeon(R) CPU E5430 @ 2.66GHz (4 core)\nRAM 2GB\n2 SATA, 7200rpm disks with hardware RAID-1 (IBM ServeRAID)\n\n\nMy question is, what's going on?\n\nThanks,\n\nDavid",
"msg_date": "Sun, 14 Mar 2010 18:31:37 +1030",
"msg_from": "David Newall <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump far too slow"
},
{
"msg_contents": "David Newall <[email protected]> writes:\n> [ very slow pg_dump of table with large bytea data ]\n\nDid you look at \"vmstat 1\" output to see whether the system was under\nany large I/O load?\n\nDumping large bytea data is known to be slow for a couple of reasons:\n\n1. The traditional text output format for bytea is a bit poorly chosen.\nIt's not especially cheap to generate and it interacts very badly with\nCOPY processing, since it tends to contain lots of backslashes which\nthen have to be escaped by COPY.\n\n2. Pulling the data from the out-of-line \"toast\" table can be expensive\nif it ends up seeking all over the disk to do it. This will show up as\na lot of seeking and I/O wait, rather than CPU expense.\n\nSince you mention having recently recopied the table into a new table,\nI would guess that the toast table is reasonably well-ordered and so\neffect #2 shouldn't be a big issue. But it's a good idea to check.\n\nPG 9.0 is changing the default bytea output format to hex, in part\nto solve problem #1. That doesn't help you in an 8.3 installation\nof course. If you're desperate you could consider excluding this\ntable from your pg_dumps and backing it up separately via COPY BINARY.\nThe PITA factor of that might be more than you can stand though.\nOffhand I can't think of any other way to ameliorate the problem\nin 8.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Mar 2010 16:21:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow "
},
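A sketch of the COPY BINARY route for the problem table; the path and table name are placeholders, and a server-side COPY writes files as the postgres OS user (psql's \copy is the client-side equivalent):

```sql
-- Dump only the bytea-heavy table, in binary format (no backslash escaping).
COPY bigtable TO '/backup/bigtable.bin' WITH BINARY;

-- Restore later into an empty table with the same definition.
COPY bigtable FROM '/backup/bigtable.bin' WITH BINARY;
```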
{
"msg_contents": "On Sun, Mar 14, 2010 at 4:01 AM, David Newall\n<[email protected]> wrote:\n> an expected 40 - 45GB of compressed output. CPU load is 100% on the core\n> executing pg_dump, and negligible on all others cores. The system is\n> read-mostly, and largely idle. The exact invocation was:\n>\n> nohup time pg_dump -f database.dmp -Z9 database\n\nCan you connect a few times with gdb and do \"bt\" to get a backtrace?\nThat might shed some light on where it's spending all of its time.\n\n...Robert\n",
"msg_date": "Mon, 15 Mar 2010 21:53:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow"
},
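For reference, a quick way to grab that backtrace on Linux, assuming gdb (and ideally the PostgreSQL debug symbols) are installed; repeating it a few times shows where pg_dump spends its time:

```bash
# Attach briefly to the running pg_dump, print a backtrace, and detach.
gdb -batch -ex bt -p "$(pgrep -n pg_dump)"
```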
{
"msg_contents": "As a fellow PG newbie, some thoughts / ideas ....\n\n1. What is the prupose of the dump (backup, migration, ETL, etc.)? Why\nplain? Unless you have a need to load this into a different brand of\ndatabase at short notice, I'd use native format.\n\n2. If you goal is indeed to get the data into another DB, use an app which\ncan do a binary-to-binary transfer, e.g. a little homegrown tool in Java\nthat connects to both with JDBC, or a data migration ETL tool.\n\n3. If pg_dump is still CPU bound, then don't get pg_dump to compress the\narchive, instead do *pg_dump -F c -Z 0 ... | gzip >foo.dmp.gz* ... this\nway the compression runs on a different core from the formatting\n\n4. Don't use *-Z9*, the return on investment isn't worth it (esp. if you are\nCPU bound), use the default GZIP compression instead, or if you need to\nminimize storage, experiment with higher levels until the CPU running GZIP\nis close to, but not totally, maxed out.\n\n5. I see no other drives mentioned ... is your dump being written to a\npartition on the same RAID-1 pair that PG is running on? Spring for another\ndrive to avoid the seek contention ... even if you were to stream the dump\nto a temporary filesystem on a single commodity consumer drive ($99 for a\n1.5TB SATA-300 spindle) with no RAID, you could then copy it back to the\nRAID set after pg_dump completes, and I'd give you good odds it'd be a\nquicker end to end process.\n\nCheers\nDave\n\nOn Sun, Mar 14, 2010 at 3:01 AM, David Newall <[email protected]>wrote:\n\n> Evening all,\n>\n> Maiden post to this list. I've a performance problem for which I'm\n> uncharacteristically in need of good advice.\n>\n> I have a read-mostly database using 51GB on an ext3 filesystem on a server\n> running Ubuntu 9.04 and PG 8.3. Forty hours ago I started a plain-format\n> dump, compressed with -Z9, and it is still running, having produced 32GB of\n> an expected 40 - 45GB of compressed output.\n>\n<snip>\n\nAs a fellow PG newbie, some thoughts / ideas ....1. What is the prupose of the dump (backup, migration, ETL, etc.)? Why plain? Unless you have a need to load this into a different brand of database at short notice, I'd use native format.\n2. If you goal is indeed to get the data into another DB, use an app which can do a binary-to-binary transfer, e.g. a little homegrown tool in Java that connects to both with JDBC, or a data migration ETL tool.\n3. If pg_dump is still CPU bound, then don't get pg_dump to compress the archive, instead do pg_dump -F c -Z 0 ... | gzip >foo.dmp.gz ... this way the compression runs on a different core from the formatting\n4. Don't use -Z9, the return on investment isn't worth it (esp. if you are CPU bound), use the default GZIP compression instead, or if you need to minimize storage, experiment with higher levels until the CPU running GZIP is close to, but not totally, maxed out.\n5. I see no other drives mentioned ... is your dump being written to a partition on the same RAID-1 pair that PG is running on? Spring for another drive to avoid the seek contention ... even if you were to stream the dump to a temporary filesystem on a single commodity consumer drive ($99 for a 1.5TB SATA-300 spindle) with no RAID, you could then copy it back to the RAID set after pg_dump completes, and I'd give you good odds it'd be a quicker end to end process.\nCheersDaveOn Sun, Mar 14, 2010 at 3:01 AM, David Newall <[email protected]> wrote:\n\nEvening all,\n\nMaiden post to this list. 
I've a performance problem for which I'm\nuncharacteristically in need of good advice.\n\nI have a read-mostly database using 51GB on an ext3 filesystem on a\nserver running Ubuntu 9.04 and PG 8.3. Forty hours ago I started a\nplain-format dump, compressed with -Z9, and it is still running, having\nproduced 32GB of an expected 40 - 45GB of compressed output. \n\n<snip>",
"msg_date": "Mon, 15 Mar 2010 22:04:45 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow"
},
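The pipeline from point 3 might look like the following (file and database names are placeholders); pg_dump's own compression is switched off so gzip runs on a separate core, and point 4 is then just a matter of the level passed to gzip:

```bash
# Custom-format archive, compressed by an external gzip at the default level.
pg_dump -F c -Z 0 database | gzip > database.dmp.gz

# To restore: decompress, then feed the archive to pg_restore.
gunzip database.dmp.gz
pg_restore -d database database.dmp
```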
{
"msg_contents": "On Sun, 14 Mar 2010, David Newall wrote:\n> nohup time pg_dump -f database.dmp -Z9 database\n>\n> I presumed pg_dump was CPU-bound because of gzip compression, but a test I \n> ran makes that seem unlikely...\n\nThere was some discussion about this a few months ago at \nhttp://archives.postgresql.org/pgsql-performance/2009-07/msg00348.php\n\nIt seems that getting pg_dump to do the compression is a fair amount \nslower than piping the plain format dump straight through gzip. You get a \nbit more parallelism that way too.\n\nMatthew\n\n\n-- \n I'm always interested when [cold callers] try to flog conservatories.\n Anyone who can actually attach a conservatory to a fourth floor flat\n stands a marginally better than average chance of winning my custom.\n (Seen on Usenet)\n",
"msg_date": "Thu, 18 Mar 2010 12:15:01 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow"
},
{
"msg_contents": "Thanks for all of the suggestions, guys, which gave me some pointers on \nnew directions to look, and I learned some interesting things.\n\nThe first interesting thing was that piping (uncompressed) pg_dump into \ngzip, instead of using pg_dump's internal compressor, does bring a lot \nof extra parallelism into play. (Thank you, Matthew Wakeling.) I \nobserved gzip using 100% CPU, as expected, and also two, count them, two \npostgres processes collecting data, each consuming a further 80% CPU. \nIt seemed to me that Postgres was starting and stopping these to match \nthe capacity of the consumer (i.e. pg_dump and gzip.) Very nice. \nUnfortunately one of these processes dropped eventually, and, according \nto top, the only non-idle process running was gzip (100%.) Obviously \nthere were postgress and pg_dump processes, too, but they were throttled \nby gzip's rate of output and effectively idle (less than 1% CPU). That \nis also interesting. The final output from gzip was being produced at \nthe rate of about 0.5MB/second, which seems almost unbelievably slow.\n\nI next tried Tom Lane's suggestion, COPY WITH BINARY, which produced the \ncomplete 34GB file in 30 minutes (a good result.) I then compressed \nthat with gzip, which took an hour and reduced the file to 32GB (hardly \nworth the effort) for a total run time of 90 minutes. In that instance, \ngzip produced output at the rate of 10MB/second, so I tried pg_dump -Z0 \nto see how quickly that would dump the file. I had the idea that I'd go \non to see how quickly gzip would compress it, but unfortunately it \nfilled my disk before finishing (87GB at that point), so there's \nsomething worth knowing: pg_dump's output for binary data is very much \nless compact than COPY WITH BINARY; all those backslashes, as Tom \npointed out. For the aforementioned reason, I didn't get to see how \ngzip would perform. For the record, pg_dump with no compression \nproduced output at the rate of 26MB/second; a rather meaningless number \ngiven the 200%+ expansion of final output.\n\nI am now confident the performance problem is from gzip, not Postgres \nand wonder if I should read up on gzip to find why it would work so \nslowly on a pure text stream, albeit a representation of PDF which \nintrinsically is fairly compressed. Given the spectacular job that \npostgres did in adjusting it's rate of output to match the consumer \nprocess, I did wonder if there might have been a tragic interaction \nbetween postgres and gzip; perhaps postgres limits its rate of output to \nmatch gzip; and gzip tries to compress what's available, that being only \na few bytes; and perhaps that might be so inefficient that it hogs the \nCPU; but it don't think that likely. I had a peek at gzip's source \n(surprisingly readable) and on first blush it does seem that unfortunate \ninput could result in only a few bytes being written each time through \nthe loop, meaning only a few more bytes could be read in.\n\nJust to complete the report, I created a child table to hold the PDF's, \nwhich are static, and took a dump of just that table, and adjusted my \nbackup command to exclude it. Total size of compressed back sans PDFs \ncirca 7MB taking around 30 seconds.\n",
"msg_date": "Sun, 21 Mar 2010 23:47:40 +1030",
"msg_from": "David Newall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump far too slow"
},
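The arrangement described in the last paragraph can be scripted roughly as below; "pdf_table" is a guess at the child table's name:

```bash
# One-off dump of the static PDF table (re-run only when it changes).
pg_dump -F c -f pdf_table.dmp -t pdf_table database

# Routine backup of everything else, excluding the big static table.
pg_dump -F c -f database_sans_pdfs.dmp -T pdf_table database
```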
{
"msg_contents": "On 21/03/2010 9:17 PM, David Newall wrote:\n> Thanks for all of the suggestions, guys, which gave me some pointers on\n> new directions to look, and I learned some interesting things.\n>\n\n> Unfortunately one of these processes dropped eventually, and, according\n> to top, the only non-idle process running was gzip (100%.) Obviously\n> there were postgress and pg_dump processes, too, but they were throttled\n> by gzip's rate of output and effectively idle (less than 1% CPU). That\n> is also interesting. The final output from gzip was being produced at\n> the rate of about 0.5MB/second, which seems almost unbelievably slow.\n\nCPU isn't the only measure of interest here.\n\nIf pg_dump and the postgres backend it's using are doing simple work \nsuch as reading linear data from disk, they won't show much CPU activity \neven though they might be running full-tilt. They'll be limited by disk \nI/O or other non-CPU resources.\n\n> and wonder if I should read up on gzip to find why it would work so\n> slowly on a pure text stream, albeit a representation of PDF which\n> intrinsically is fairly compressed.\n\nIn fact, PDF uses deflate compression, the same algorithm used for gzip. \nGzip-compressing PDF is almost completely pointless - all you're doing \nis compressing some of the document structure, not the actual content \nstreams. With PDF 1.5 and above using object and xref streams, you might \nnot even be doing that, instead only compressing the header and trailer \ndictionary, which are probably in the order of a few hundred bytes.\n\nCompressing PDF documents is generally a waste of time.\n\n--\nCraig Ringer\n",
"msg_date": "Sun, 21 Mar 2010 21:56:38 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow"
},
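That claim is easy to check against the actual data with the same kind of dd test used earlier in the thread: compress a sample of the dump and compare sizes. If the bulk of it is already-deflated PDF streams, the output will barely shrink:

```bash
# Compress a 100 MB sample of the dump and print the compressed byte count.
dd if=database.dmp bs=1M count=100 | gzip -6 | wc -c
```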
{
"msg_contents": "One more from me ....\n\nIf you think that the pipe to GZIP may be causing pg_dump to stall, try\nputting something like buffer(1) in the pipeline ... it doesn't generally\ncome with Linux, but you can download source or create your own very easily\n... all it needs to do is asynchronously poll stdin and write stdout. I\nwrote one in Perl when I used to do a lot of digital video hacking, and it\nhelped with chaining together tools like mplayer and mpeg.\n\nHowever, my money says that Tom's point about it being (disk) I/O bound is\ncorrect :-)\n\nCheers\nDave\n\nOn Sun, Mar 21, 2010 at 8:17 AM, David Newall <[email protected]>wrote:\n\n> Thanks for all of the suggestions, guys, which gave me some pointers on new\n> directions to look, and I learned some interesting things.\n>\n> The first interesting thing was that piping (uncompressed) pg_dump into\n> gzip, instead of using pg_dump's internal compressor, does bring a lot of\n> extra parallelism into play. (Thank you, Matthew Wakeling.) I observed\n> gzip using 100% CPU, as expected, and also two, count them, two postgres\n> processes collecting data, each consuming a further 80% CPU. It seemed to\n> me that Postgres was starting and stopping these to match the capacity of\n> the consumer (i.e. pg_dump and gzip.) Very nice. Unfortunately one of\n> these processes dropped eventually, and, according to top, the only non-idle\n> process running was gzip (100%.) Obviously there were postgress and pg_dump\n> processes, too, but they were throttled by gzip's rate of output and\n> effectively idle (less than 1% CPU). That is also interesting. The final\n> output from gzip was being produced at the rate of about 0.5MB/second, which\n> seems almost unbelievably slow.\n>\n> I next tried Tom Lane's suggestion, COPY WITH BINARY, which produced the\n> complete 34GB file in 30 minutes (a good result.) I then compressed that\n> with gzip, which took an hour and reduced the file to 32GB (hardly worth the\n> effort) for a total run time of 90 minutes. In that instance, gzip produced\n> output at the rate of 10MB/second, so I tried pg_dump -Z0 to see how quickly\n> that would dump the file. I had the idea that I'd go on to see how quickly\n> gzip would compress it, but unfortunately it filled my disk before finishing\n> (87GB at that point), so there's something worth knowing: pg_dump's output\n> for binary data is very much less compact than COPY WITH BINARY; all those\n> backslashes, as Tom pointed out. For the aforementioned reason, I didn't\n> get to see how gzip would perform. For the record, pg_dump with no\n> compression produced output at the rate of 26MB/second; a rather meaningless\n> number given the 200%+ expansion of final output.\n>\n> I am now confident the performance problem is from gzip, not Postgres and\n> wonder if I should read up on gzip to find why it would work so slowly on a\n> pure text stream, albeit a representation of PDF which intrinsically is\n> fairly compressed. Given the spectacular job that postgres did in adjusting\n> it's rate of output to match the consumer process, I did wonder if there\n> might have been a tragic interaction between postgres and gzip; perhaps\n> postgres limits its rate of output to match gzip; and gzip tries to compress\n> what's available, that being only a few bytes; and perhaps that might be so\n> inefficient that it hogs the CPU; but it don't think that likely. 
I had a\n> peek at gzip's source (surprisingly readable) and on first blush it does\n> seem that unfortunate input could result in only a few bytes being written\n> each time through the loop, meaning only a few more bytes could be read in.\n>\n> Just to complete the report, I created a child table to hold the PDF's,\n> which are static, and took a dump of just that table, and adjusted my backup\n> command to exclude it. Total size of compressed back sans PDFs circa 7MB\n> taking around 30 seconds.\n>\n\nOne more from me ....If you think that the pipe to GZIP may be causing pg_dump to stall, try putting something like buffer(1) in the pipeline ... it doesn't generally come with Linux, but you can download source or create your own very easily ... all it needs to do is asynchronously poll stdin and write stdout. I wrote one in Perl when I used to do a lot of digital video hacking, and it helped with chaining together tools like mplayer and mpeg.\nHowever, my money says that Tom's point about it being (disk) I/O bound is correct :-)CheersDaveOn Sun, Mar 21, 2010 at 8:17 AM, David Newall <[email protected]> wrote:\nThanks for all of the suggestions, guys, which gave me some pointers on new directions to look, and I learned some interesting things.\n\nThe first interesting thing was that piping (uncompressed) pg_dump into gzip, instead of using pg_dump's internal compressor, does bring a lot of extra parallelism into play. (Thank you, Matthew Wakeling.) I observed gzip using 100% CPU, as expected, and also two, count them, two postgres processes collecting data, each consuming a further 80% CPU. It seemed to me that Postgres was starting and stopping these to match the capacity of the consumer (i.e. pg_dump and gzip.) Very nice. Unfortunately one of these processes dropped eventually, and, according to top, the only non-idle process running was gzip (100%.) Obviously there were postgress and pg_dump processes, too, but they were throttled by gzip's rate of output and effectively idle (less than 1% CPU). That is also interesting. The final output from gzip was being produced at the rate of about 0.5MB/second, which seems almost unbelievably slow.\n\nI next tried Tom Lane's suggestion, COPY WITH BINARY, which produced the complete 34GB file in 30 minutes (a good result.) I then compressed that with gzip, which took an hour and reduced the file to 32GB (hardly worth the effort) for a total run time of 90 minutes. In that instance, gzip produced output at the rate of 10MB/second, so I tried pg_dump -Z0 to see how quickly that would dump the file. I had the idea that I'd go on to see how quickly gzip would compress it, but unfortunately it filled my disk before finishing (87GB at that point), so there's something worth knowing: pg_dump's output for binary data is very much less compact than COPY WITH BINARY; all those backslashes, as Tom pointed out. For the aforementioned reason, I didn't get to see how gzip would perform. For the record, pg_dump with no compression produced output at the rate of 26MB/second; a rather meaningless number given the 200%+ expansion of final output.\n\nI am now confident the performance problem is from gzip, not Postgres and wonder if I should read up on gzip to find why it would work so slowly on a pure text stream, albeit a representation of PDF which intrinsically is fairly compressed. 
Given the spectacular job that postgres did in adjusting it's rate of output to match the consumer process, I did wonder if there might have been a tragic interaction between postgres and gzip; perhaps postgres limits its rate of output to match gzip; and gzip tries to compress what's available, that being only a few bytes; and perhaps that might be so inefficient that it hogs the CPU; but it don't think that likely. I had a peek at gzip's source (surprisingly readable) and on first blush it does seem that unfortunate input could result in only a few bytes being written each time through the loop, meaning only a few more bytes could be read in.\n\nJust to complete the report, I created a child table to hold the PDF's, which are static, and took a dump of just that table, and adjusted my backup command to exclude it. Total size of compressed back sans PDFs circa 7MB taking around 30 seconds.",
"msg_date": "Sun, 21 Mar 2010 09:33:35 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> On 21/03/2010 9:17 PM, David Newall wrote:\n>> and wonder if I should read up on gzip to find why it would work so\n>> slowly on a pure text stream, albeit a representation of PDF which\n>> intrinsically is fairly compressed.\n\n> In fact, PDF uses deflate compression, the same algorithm used for gzip. \n> Gzip-compressing PDF is almost completely pointless -\n\nYeah. I would bet that the reason for the slow throughput is that gzip\nis fruitlessly searching for compressible sequences. It won't find many.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Mar 2010 11:39:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow "
},
{
"msg_contents": "Tom Lane wrote:\n> I would bet that the reason for the slow throughput is that gzip\n> is fruitlessly searching for compressible sequences. It won't find many.\n> \n\n\nIndeed, I didn't expect much reduction in size, but I also didn't expect \na four-order of magnitude increase in run-time (i.e. output at \n10MB/second going down to 500KB/second), particularly as my estimate was \nbased on gzipping a previously gzipped file. I think it's probably \npathological data, as it were. Might even be of interest to gzip's \nmaintainers.\n",
"msg_date": "Mon, 22 Mar 2010 02:20:34 +1030",
"msg_from": "David Newall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump far too slow"
},
{
"msg_contents": "If you have a multi-processor machine (more than 2) you could look into pigz, which is a parallelized implementation of gzip. I gotten dramatic reductions in wall time using it to zip dump files. The compressed file is readable by ungzip.\n\nBob Lunney\n\nFrom: Dave Crooke <[email protected]>\nSubject: Re: [PERFORM] pg_dump far too slow\nTo: \"David Newall\" <[email protected]>\nCc: \"Tom Lane\" <[email protected]>, [email protected], [email protected]\nDate: Sunday, March 21, 2010, 10:33 AM\n\nOne more from me ....\n\nIf you think that the pipe to GZIP may be causing pg_dump to stall, try putting something like buffer(1) in the pipeline ... it doesn't generally come with Linux, but you can download source or create your own very easily ... all it needs to do is asynchronously poll stdin and write stdout. I wrote one in Perl when I used to do a lot of digital video hacking, and it helped with chaining together tools like mplayer and mpeg.\n\n\nHowever, my money says that Tom's point about it being (disk) I/O bound is correct :-)\n\nCheers\nDave\n\nOn Sun, Mar 21, 2010 at 8:17 AM, David Newall <[email protected]> wrote:\n\nThanks for all of the suggestions, guys, which gave me some pointers on new directions to look, and I learned some interesting things.\n\n\n\n\nThe first interesting thing was that piping (uncompressed) pg_dump into gzip, instead of using pg_dump's internal compressor, does bring a lot of extra parallelism into play. (Thank you, Matthew Wakeling.) I observed gzip using 100% CPU, as expected, and also two, count them, two postgres processes collecting data, each consuming a further 80% CPU. It seemed to me that Postgres was starting and stopping these to match the capacity of the consumer (i.e. pg_dump and gzip.) Very nice. Unfortunately one of these processes dropped eventually, and, according to top, the only non-idle process running was gzip (100%.) Obviously there were postgress and pg_dump processes, too, but they were throttled by gzip's rate of output and effectively idle (less than 1% CPU). That is also interesting. The final output from gzip was being produced at the rate of about 0.5MB/second, which seems almost unbelievably slow.\n\n\n\n\nI next tried Tom Lane's suggestion, COPY WITH BINARY, which produced the complete 34GB file in 30 minutes (a good result.) I then compressed that with gzip, which took an hour and reduced the file to 32GB (hardly worth the effort) for a total run time of 90 minutes. In that instance, gzip produced output at the rate of 10MB/second, so I tried pg_dump -Z0 to see how quickly that would dump the file. I had the idea that I'd go on to see how quickly gzip would compress it, but unfortunately it filled my disk before finishing (87GB at that point), so there's something worth knowing: pg_dump's output for binary data is very much less compact than COPY WITH BINARY; all those backslashes, as Tom pointed out. For the aforementioned reason, I didn't get to see how gzip would perform. For the record, pg_dump with no compression produced output at the rate of 26MB/second; a rather meaningless number given the 200%+ expansion of final output.\n\n\n\n\nI am now confident the performance problem is from gzip, not Postgres and wonder if I should read up on gzip to find why it would work so slowly on a pure text stream, albeit a representation of PDF which intrinsically is fairly compressed. 
Given the spectacular job that postgres did in adjusting it's rate of output to match the consumer process, I did wonder if there might have been a tragic interaction between postgres and gzip; perhaps postgres limits its rate of output to match gzip; and gzip tries to compress what's available, that being only a few bytes; and perhaps that might be so inefficient that it hogs the CPU; but it don't think that likely. I had a peek at gzip's source (surprisingly readable) and on first blush it does seem that unfortunate input could result in only a few bytes being written each time through the loop, meaning only a few more bytes could be read in.\n\n\n\n\nJust to complete the report, I created a child table to hold the PDF's, which are static, and took a dump of just that table, and adjusted my backup command to exclude it. Total size of compressed back sans PDFs circa 7MB taking around 30 seconds.\n\n\n\n\n\n\n\n \nIf you have a multi-processor machine (more than 2) you could look into pigz, which is a parallelized implementation of gzip. I gotten dramatic reductions in wall time using it to zip dump files. The compressed file is readable by ungzip.Bob LunneyFrom: Dave Crooke <[email protected]>Subject: Re: [PERFORM] pg_dump far too slowTo: \"David Newall\" <[email protected]>Cc: \"Tom Lane\" <[email protected]>, [email protected], [email protected]: Sunday, March 21, 2010, 10:33 AMOne more from me ....If you think that the pipe to GZIP may be causing pg_dump to stall, try putting something like buffer(1) in the pipeline ... it doesn't generally come with\n Linux, but you can download source or create your own very easily ... all it needs to do is asynchronously poll stdin and write stdout. I wrote one in Perl when I used to do a lot of digital video hacking, and it helped with chaining together tools like mplayer and mpeg.\nHowever, my money says that Tom's point about it being (disk) I/O bound is correct :-)CheersDaveOn Sun, Mar 21, 2010 at 8:17 AM, David Newall <[email protected]> wrote:\nThanks for all of the suggestions, guys, which gave me some pointers on new directions to look, and I learned some interesting things.\n\nThe first interesting thing was that piping (uncompressed) pg_dump into gzip, instead of using pg_dump's internal compressor, does bring a lot of extra parallelism into play. (Thank you, Matthew Wakeling.) I observed gzip using 100% CPU, as expected, and also two, count them, two postgres processes collecting data, each consuming a further 80% CPU. It seemed to me that Postgres was starting and stopping these to match the capacity of the consumer (i.e. pg_dump and gzip.) Very nice. Unfortunately one of these processes dropped eventually, and, according to top, the only non-idle process running was gzip (100%.) Obviously there were postgress and pg_dump processes, too, but they were throttled by gzip's rate of output and effectively idle (less than 1% CPU). That is also interesting. The final output from gzip was being produced at the rate of about 0.5MB/second, which seems almost unbelievably slow.\n\nI next tried Tom Lane's suggestion, COPY WITH BINARY, which produced the complete 34GB file in 30 minutes (a good result.) I then compressed that with gzip, which took an hour and reduced the file to 32GB (hardly worth the effort) for a total run time of 90 minutes. In that instance, gzip produced output at the rate of 10MB/second, so I tried pg_dump -Z0 to see how quickly that would dump the file. 
",
"msg_date": "Sun, 21 Mar 2010 11:03:30 -0700 (PDT)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow"
},
{
"msg_contents": "On Mar 21, 2010, at 8:50 AM, David Newall wrote:\n\n> Tom Lane wrote:\n>> I would bet that the reason for the slow throughput is that gzip\n>> is fruitlessly searching for compressible sequences. It won't find many.\n>> \n> \n> \n> Indeed, I didn't expect much reduction in size, but I also didn't expect \n> a four-order of magnitude increase in run-time (i.e. output at \n> 10MB/second going down to 500KB/second), particularly as my estimate was \n> based on gzipping a previously gzipped file. I think it's probably \n> pathological data, as it were. Might even be of interest to gzip's \n> maintainers.\n> \n\ngzip -9 is known to be very very inefficient. It hardly ever is more compact than -7, and often 2x slower or worse.\nIts almost never worth it to use unless you don't care how long the compression time is.\n\nTry -Z1\n\nat level 1 compression the output will often be good enough compression at rather fast speeds. It is about 6x as fast as gzip -9 and typically creates result files 10% larger.\n\nFor some compression/decompression speed benchmarks see:\n\nhttp://tukaani.org/lzma/benchmarks.html\n\n\n\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 26 Mar 2010 17:06:07 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump far too slow"
}
] |
[
{
"msg_contents": "Hi people,\n\nThe whole topic of messing with stats makes my head spin but I am concerned \nabout some horridly performing queries that have had bad rows estimates and \nothers which always choose seq scans when indexes are available. Reading up \non how to improve planner estimates, I have seen references to \ndefault_statistics_target being changed from the default of 10 to 100.\n\nOur DB is large, with thousands of tables, but the core schema has about 100 \ntables and the typical row counts are in the millions of rows for the whole \ntable. We have been playing endless games with tuning this server - but with \nall of the suggestions, I don't think the issue of changing \ndefault_statistics_target has ever come up. Realizing that there is a \nperformance hit associated with ANALYZE, are there any other downsides to \nincreasing this value to 100, and is this a common setting for large DBs?\n\nThanks,\n\nCarlo \n\n",
"msg_date": "Sun, 14 Mar 2010 19:27:57 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "default_statistics_target"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> The whole topic of messing with stats makes my head spin but I am concerned \n> about some horridly performing queries that have had bad rows estimates and \n> others which always choose seq scans when indexes are available. Reading up \n> on how to improve planner estimates, I have seen references to \n> default_statistics_target being changed from the default of 10 to 100.\n> \n> Our DB is large, with thousands of tables, but the core schema has about 100 \n> tables and the typical row counts are in the millions of rows for the whole \n> table. We have been playing endless games with tuning this server - but with \n> all of the suggestions, I don't think the issue of changing \n> default_statistics_target has ever come up. Realizing that there is a \n> performance hit associated with ANALYZE, are there any other downsides to \n> increasing this value to 100, and is this a common setting for large DBs?\n\n From PostgreSQL 8.3 to 8.4, the default value for default_statistics_target\nhas changed from 10 to 100. I would take that as a very strong indication\nthat 100 is preceived to be a reasonable value by many knowlegdable people.\n\nHigh values of that parameter are advisable if good performance of\nnontrivial queries is the most important thing in your database\n(like in a data warehouse) and the cost of ANALYZE is only secondary.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Mon, 15 Mar 2010 09:33:00 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: default_statistics_target"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> The whole topic of messing with stats makes my head spin but I am \n> concerned about some horridly performing queries that have had bad \n> rows estimates and others which always choose seq scans when indexes \n> are available. Reading up on how to improve planner estimates, I have \n> seen references to default_statistics_target being changed from the \n> default of 10 to 100.\n>\n> Our DB is large, with thousands of tables\n\nStop right there for a second. Are you sure autovacuum is working well \nhere? With thousands of tables, it wouldn't surprise me to discover \nyour planner estimates are wrong because there hasn't been a recent \nenough ANALYZE on the relevant tables. If you haven't already, take a \nlook at pg_stat_user_tables and make sure that tables that have the bad \nestimates have actually been analyzed recently. A look at the live/dead \nrow counts there should be helpful as well.\n\nIf all that's recent, but you're still getting bad estimates, only then \nwould I suggest trying an increase to default_statistics_target. In the \nsituation where autovacuum isn't keeping up with some tables because you \nhave thousands of them, increasing the stats target can actually make \nthe problem worse, because the tables that are getting analyzed will \ntake longer to process--more statistics work to be done per table.\n\nGiven that it looks like you're running 8.3 from past messages I've seen \nfrom you, I'd also be concerned that you've overrun your max_fsm_pages, \nso that VACUUM is growing increasing ineffective for you, and that's \ncontributing to your headache.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 15 Mar 2010 09:18:39 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: default_statistics_target"
},
{
"msg_contents": "HI Greg,\n\n\nThanks for the insight. How much more of a server's resources will be \nconsumed by an ANALYZE with default_statistics_target = 100?\n\nWe have two environments hosting the same data. One is our \"live\" server, \nwhich serves the web site, and this hosts our published data, not more than \n200 - 300 tables.\n\nPRODUCTION: The data warehouse consisting of our published data, as well as \nour \"input resources\" which are transformed via ETL processes into our \npublished data. It is these \"input resources\" which currently consist of \nabout 8,000 tables and growing. Don't really require analysis, as they are \ntypically run once in a linear read when importing.they are typically read \nlinearly, and rarely more than once. They are kept for auditing and \nrollbacks.\n\nLIVE: Hosts just the published data, copied over from the production server. \nBecause the data does not get written to very often, older stats from \nANALYZE are likely to still be valid. Our concern is that with the older \nsetting of default_statistics_target = 10 it has not gone deep enough into \nthese tables (numbering in the millios of rows) to really represent the data \ndistribution properly.\n\n> Given that it looks like you're running 8.3 from past messages I've seen \n> from you, I'd also be concerned that you've overrun your max_fsm_pages, so \n> that VACUUM is growing increasing ineffective for you, and that's \n> contributing to your headache.\n\nBelow are the config values of our production server (those not listed are \nthose stubbed out) . Sadly, in an attempt to improve the server's \nperformance, someone wiped out all of the changes I had made to date, along \nwith comments indicating previous values, reason for the change, etc. What \ndo they call that again? Oh, yeah. Documentation.\n\n# CENTOS 5.4\n# Linux mdx_octo 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 \nx86_64 x86_64 GNU/Linux\n# pgsql 8.3.10, 8 CPUs, 48GB RAM\n# RAID 10, 4 Disks\nautovacuum = on # Enable autovacuum subprocess? 
'on'\nautovacuum_analyze_scale_factor = 0.05 # fraction of table size before \nanalyze\nautovacuum_analyze_threshold = 1000\nautovacuum_naptime = 1min # time between autovacuum runs\nautovacuum_vacuum_cost_delay = 50 # default vacuum cost delay for\nautovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum\nautovacuum_vacuum_threshold = 1000\nbgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\ncheckpoint_segments = 128 # in logfile segments, min 1, 16MB each\ncheckpoint_warning = 290s # 0 is off\nclient_min_messages = debug1 # values in order of decreasing detail:\ndatestyle = 'iso, mdy'\ndefault_statistics_target = 250 # range 1-1000\ndefault_text_search_config = 'pg_catalog.english'\nlc_messages = 'C' # locale for system error message\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\nlisten_addresses = '*' # what IP address(es) to listen on;\nlog_destination = 'stderr' # Valid values are combinations of\nlog_error_verbosity = verbose # terse, default, or verbose messages\nlog_line_prefix = '%t ' # special values:\nlog_min_error_statement = debug1 # values in order of decreasing detail:\nlog_min_messages = debug1 # values in order of decreasing detail:\nlogging_collector = on # Enable capturing of stderr and csvlog\nmaintenance_work_mem = 256MB\nmax_connections = 100 # (change requires restart)\nmax_fsm_relations = 1000 # min 100, ~70 bytes each\nmax_locks_per_transaction = 128 # min 10\nport = 5432 # (change requires restart)\nshared_buffers = 4096MB\nshared_preload_libraries = '$libdir/plugins/plugin_debugger.so' # (change \nrequires restart)\ntrack_counts = on\nvacuum_cost_delay = 5 # 0-1000 milliseconds\nwal_buffers = 4MB\nwal_sync_method = open_sync\nwork_mem = 64MB\n\nCarlo\n\n\n\"Greg Smith\" <[email protected]> wrote in message \nnews:[email protected]...\n> Carlo Stonebanks wrote:\n>> The whole topic of messing with stats makes my head spin but I am \n>> concerned about some horridly performing queries that have had bad rows \n>> estimates and others which always choose seq scans when indexes are \n>> available. Reading up on how to improve planner estimates, I have seen \n>> references to default_statistics_target being changed from the default of \n>> 10 to 100.\n>>\n>> Our DB is large, with thousands of tables\n>\n> Stop right there for a second. Are you sure autovacuum is working well \n> here? With thousands of tables, it wouldn't surprise me to discover your \n> planner estimates are wrong because there hasn't been a recent enough \n> ANALYZE on the relevant tables. If you haven't already, take a look at \n> pg_stat_user_tables and make sure that tables that have the bad estimates \n> have actually been analyzed recently. A look at the live/dead row counts \n> there should be helpful as well.\n>\n> If all that's recent, but you're still getting bad estimates, only then \n> would I suggest trying an increase to default_statistics_target. 
In the \n> situation where autovacuum isn't keeping up with some tables because you \n> have thousands of them, increasing the stats target can actually make the \n> problem worse, because the tables that are getting analyzed will take \n> longer to process--more statistics work to be done per table.\n>\n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n",
"msg_date": "Mon, 22 Mar 2010 18:19:11 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: default_statistics_target"
},
{
"msg_contents": "On Mon, Mar 22, 2010 at 6:19 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Thanks for the insight. How much more of a server's resources will be\n> consumed by an ANALYZE with default_statistics_target = 100?\n\nI don't think it will be much of a problem, especially since\nautovacuum will do only the tables that need it and not all the same\ntime. But you can certainly try it. Before changing the global\nsetting, try just changing it for one session with SET:\n\n\\timing\nANALYZE <some table>;\nSET default_statistics_target = 100;\nANALYZE <same table>;\n\\q\n\n...Robert\n",
"msg_date": "Thu, 25 Mar 2010 11:44:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: default_statistics_target"
}
] |
[
{
"msg_contents": "As part of my PG East prep work, I just did a big overhaul of the \nvarious benchmarking information available on the wiki and have tagged \nall the relevant links into a separate category: \nhttp://wiki.postgresql.org/wiki/Category:Benchmarking\n\nSeveral of those were really difficult to find pages before; a couple of \nthem were linked to and needed but not created yet; and a few pages are \nnew ones I just wrote starter content for (TPC-H and SysBench are both \ncovered now). There's enough new and formerly difficult to find pages \nthere now I felt it was worth mentioning here.\n\nhttp://wiki.postgresql.org/wiki/Performance_Optimization was also \ngetting a bit large, so I removed the duplicated links from there. \nCreated a new page to hold all the database hardware links too.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 15 Mar 2010 04:20:21 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Updated benchmarking category on the wiki"
}
] |
[
{
"msg_contents": "\nHow to run vacuumdb and reindex for Postgres DB in a non-stop server? Will it\nbe made without shutting the server? If so, then what will be performance\ndegradation percentage?\n-- \nView this message in context: http://old.nabble.com/Postgres-DB-maintainenance---vacuum-and-reindex-tp27913694p27913694.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Mon, 15 Mar 2010 22:30:49 -0700 (PDT)",
"msg_from": "Meena_Ramkumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres DB maintainenance - vacuum and reindex"
},
{
"msg_contents": "On Mon, Mar 15, 2010 at 11:30 PM, Meena_Ramkumar\n<[email protected]> wrote:\n>\n> How to run vacuumdb and reindex for Postgres DB in a non-stop server? Will it\n> be made without shutting the server? If so, then what will be performance\n> degradation percentage?\n\nvacuum can be tuned by the various vacuum_* parameters in the\npostgresql.conf file to have little or no impact on other processes\nrunning. Depending on your IO subsystem, you can tune it up or down\nto fit your needs (speed versus impact on other processes). reindex\nhowever tends to be more intrusive to the system, and may cause some\nperformance degradation, which will be very dependent on your IO\nsubsystem (i.e. a single 7200RPM SATA drive system is more likely to\nnotice and be slowed down by reindexing than a 48 disk 15krpm SAS\nRAID-10 array.\n\nThe more important question is what problem are you trying to solve,\nand are there other, better approaches than the ones you're trying.\nWithout more info, no one can really say.\n",
"msg_date": "Tue, 16 Mar 2010 10:58:08 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres DB maintainenance - vacuum and reindex"
},
{
"msg_contents": "Autovacuum is your friend for minimal downtime. It is configurable to let you adjust how invasive it will be, and you can have different settings per table if you wish.\n\nAs for the reindex, why do you think you will be reindexing regularly?\n\nOn Mar 15, 2010, at 10:30 PM, Meena_Ramkumar wrote:\n\n> \n> How to run vacuumdb and reindex for Postgres DB in a non-stop server? Will it\n> be made without shutting the server? If so, then what will be performance\n> degradation percentage?\n> -- \n> View this message in context: http://old.nabble.com/Postgres-DB-maintainenance---vacuum-and-reindex-tp27913694p27913694.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 16 Mar 2010 10:06:41 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres DB maintainenance - vacuum and reindex"
},
{
"msg_contents": "Meena_Ramkumar escribi�:\n> How to run vacuumdb and reindex for Postgres DB in a non-stop server? Will it\n> be made without shutting the server? If so, then what will be performance\n> degradation percentage?\n> \nTo execute vacuum, you can�t stop the server, is another process of it.\nIf you are using a recent version of PostgreSQL, you can use autovacuum \non the server and this process is charged of this or to use VACUUM with \nthe right schedule. You should avoid to use VACUUM FULL, because is very \nslow and it requires exclusive locks of the tables that you are \nexecuting this, and it reduces the table size on the disc but It doesn�t \nreduce the index size, but iit can make indexes larger.\n\nWith autovacuum = on, you can avoid to use VACUUM frecuently\n\nThe performance degradation depends of the quantity of tables and \ndatabases that you have on your server.\n\nREINDEX is another task that you can execute periodicly on you server, \nbut if you don�t want to affect the production task, the best thing yo \ndo is to drop the index and reissue the CREATE INDEX CONCURRENTLY command.\n\nRegards\n\n\n-- \n-------------------------------------------------------- \n-- Ing. Marcos Lu�s Ort�z Valmaseda --\n-- Twitter: http://twitter.com/@marcosluis2186 --\n-- FreeBSD Fan/User --\n-- http://www.freebsd.org/es --\n-- Linux User # 418229 --\n-- Database Architect/Administrator --\n-- PostgreSQL RDBMS --\n-- http://www.postgresql.org --\n-- http://planetpostgresql.org --\n-- http://www.postgresql-es.org --\n--------------------------------------------------------\n-- Data WareHouse -- Business Intelligence Apprentice --\n-- http://www.tdwi.org --\n-------------------------------------------------------- \n-- Ruby on Rails Fan/Developer --\n-- http://rubyonrails.org --\n--------------------------------------------------------\n\nComunidad T�cnica Cubana de PostgreSQL\nhttp://postgresql.uci.cu\nhttp://personas.grm.uci.cu/+marcos \n\nCentro de Tecnolog�as de Gesti�n de Datos (DATEC) \nContacto: \n Correo: [email protected] \n Telf: +53 07-837-3737 \n +53 07-837-3714 \nUniversidad de las Ciencias Inform�ticas \nhttp://www.uci.cu \n\n\n\n",
"msg_date": "Tue, 16 Mar 2010 13:29:15 -0400",
"msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres DB maintainenance - vacuum and reindex"
}
] |
[
{
"msg_contents": "I agree with Tom, any reordering attempt is at best second guessing the\nfilesystem and underlying storage.\n\nHowever, having the ability to control the extent size would be a worthwhile\nimprovement for systems that walk and chew gum (write to lots of tables)\nconcurrently.\n\nI'm thinking of Oracle's AUTOEXTEND settings for tablespace datafiles .... I\nthink the ideal way to do it for PG would be to make the equivalent\nconfigurable in postgresql.conf system wide, and allow specific per-table\nsettings in the SQL metadata, similar to auto-vacuum.\n\nAn awesomely simple alternative is to just specify the extension as e.g. 5%\nof the existing table size .... it starts by adding one block at a time for\ntiny tables, and once your table is over 20GB, it ends up adding a whole 1GB\nfile and pre-allocating it. Very little wasteage.\n\nCheers\nDave\n\nOn Tue, Mar 16, 2010 at 4:49 PM, Alvaro Herrera\n<[email protected]>wrote:\n\n> Tom Lane escribió:\n> > Alvaro Herrera <[email protected]> writes:\n> > > Maybe it would make more sense to try to reorder the fsync calls\n> > > instead.\n> >\n> > Reorder to what, though? You still have the problem that we don't know\n> > much about the physical layout on-disk.\n>\n> Well, to block numbers as a first step.\n>\n> However, this reminds me that sometimes we take the block-at-a-time\n> extension policy too seriously. We had a customer that had a\n> performance problem because they were inserting lots of data to TOAST\n> tables, causing very frequent extensions. I kept wondering whether an\n> allocation policy that allocated several new blocks at a time could be\n> useful (but I didn't try it). This would also alleviate fragmentation,\n> thus helping the physical layout be more similar to logical block\n> numbers.\n>\n> --\n> Alvaro Herrera\n> http://www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n\nI agree with Tom, any reordering attempt is at best second guessing the filesystem and underlying storage.However, having the ability to control the extent size would be a worthwhile improvement for systems that walk and chew gum (write to lots of tables) concurrently.\nI'm thinking of Oracle's AUTOEXTEND settings for tablespace datafiles .... I think the ideal way to do it for PG would be to make the equivalent configurable in postgresql.conf system wide, and allow specific per-table settings in the SQL metadata, similar to auto-vacuum.\nAn awesomely simple alternative is to just specify the extension as e.g. 5% of the existing table size .... it starts by adding one block at a time for tiny tables, and once your table is over 20GB, it ends up adding a whole 1GB file and pre-allocating it. Very little wasteage.\nCheersDaveOn Tue, Mar 16, 2010 at 4:49 PM, Alvaro Herrera <[email protected]> wrote:\nTom Lane escribió:\n> Alvaro Herrera <[email protected]> writes:\n> > Maybe it would make more sense to try to reorder the fsync calls\n> > instead.\n>\n> Reorder to what, though? You still have the problem that we don't know\n> much about the physical layout on-disk.\n\nWell, to block numbers as a first step.\n\nHowever, this reminds me that sometimes we take the block-at-a-time\nextension policy too seriously. We had a customer that had a\nperformance problem because they were inserting lots of data to TOAST\ntables, causing very frequent extensions. I kept wondering whether an\nallocation policy that allocated several new blocks at a time could be\nuseful (but I didn't try it). 
",
"msg_date": "Tue, 16 Mar 2010 18:58:50 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Block at a time ..."
},
{
"msg_contents": "Dave Crooke escribi�:\n\n> An awesomely simple alternative is to just specify the extension as e.g. 5%\n> of the existing table size .... it starts by adding one block at a time for\n> tiny tables, and once your table is over 20GB, it ends up adding a whole 1GB\n> file and pre-allocating it. Very little wasteage.\n\nI was thinking in something like that, except that the factor I'd use\nwould be something like 50% or 100% of current size, capped at (say) 1 GB.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 16 Mar 2010 21:14:05 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "> I was thinking in something like that, except that the factor I'd use\n> would be something like 50% or 100% of current size, capped at (say) 1 \n> GB.\n\nUsing fallocate() ?\n\n",
"msg_date": "Wed, 17 Mar 2010 08:32:09 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "On Wed, Mar 17, 2010 at 7:32 AM, Pierre C <[email protected]> wrote:\n>> I was thinking in something like that, except that the factor I'd use\n>> would be something like 50% or 100% of current size, capped at (say) 1 GB.\n\nThis turns out to be a bad idea. One of the first thing Oracle DBAs\nare told to do is change this default setting to allocate some\nreasonably large fixed size rather than scaling upwards.\n\nThis might be mostly due to Oracle's extent-based space management but\nI'm not so sure. Recall that the filesystem is probably doing some\nrounding itself. If you allocate 120kB it's probably allocating 128kB\nitself anyways. Having two layers rounding up will result in odd\nbehaviour.\n\nIn any case I was planning on doing this a while back. Then I ran some\nexperiments and couldn't actually demonstrate any problem. ext2 seems\nto do a perfectly reasonable job of avoiding this problem. All the\nfiles were mostly large contiguous blocks after running some tests --\nIIRC running pgbench.\n\n\n> Using fallocate() ?\n\nI think we need posix_fallocate().\n\n-- \ngreg\n",
"msg_date": "Wed, 17 Mar 2010 09:52:13 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> I think we need posix_fallocate().\n\nThe problem with posix_fallocate (other than questionable portability)\nis that it doesn't appear to guarantee anything at all about what is in\nthe space it allocates. Worst case, we might find valid-looking\nPostgres data there (eg, because a block was recycled from some recently\ndropped table). If we have to write something anyway to zero the space,\nwhat's the point?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Mar 2010 10:27:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ... "
},
{
"msg_contents": "Greg - with Oracle, I always do fixed 2GB dbf's for poartability, and\npreallocate the whole file in advance. However, the situation is a bit\ndifferent in that Oracle will put blocks from multiple tables and indexes in\na DBF if you don't tell it differently.\n\nTom - I'm not sure what Oracle does, but it literally writes the whole\nextent before using it .... I think they are just doing the literal\nequivalent of *dd if=/dev/zero* ... it takes several seconds to prep a 2GB\nfile on decent storage.\n\nCheers\nDave\n\nOn Wed, Mar 17, 2010 at 9:27 AM, Tom Lane <[email protected]> wrote:\n\n> Greg Stark <[email protected]> writes:\n> > I think we need posix_fallocate().\n>\n> The problem with posix_fallocate (other than questionable portability)\n> is that it doesn't appear to guarantee anything at all about what is in\n> the space it allocates. Worst case, we might find valid-looking\n> Postgres data there (eg, because a block was recycled from some recently\n> dropped table). If we have to write something anyway to zero the space,\n> what's the point?\n>\n> regards, tom lane\n>\n\nGreg - with Oracle, I always do fixed 2GB dbf's for poartability, and preallocate the whole file in advance. However, the situation is a bit different in that Oracle will put blocks from multiple tables and indexes in a DBF if you don't tell it differently.\nTom - I'm not sure what Oracle does, but it literally writes the whole extent before using it .... I think they are just doing the literal equivalent of dd if=/dev/zero ... it takes several seconds to prep a 2GB file on decent storage.\nCheersDaveOn Wed, Mar 17, 2010 at 9:27 AM, Tom Lane <[email protected]> wrote:\nGreg Stark <[email protected]> writes:\n> I think we need posix_fallocate().\n\nThe problem with posix_fallocate (other than questionable portability)\nis that it doesn't appear to guarantee anything at all about what is in\nthe space it allocates. Worst case, we might find valid-looking\nPostgres data there (eg, because a block was recycled from some recently\ndropped table). If we have to write something anyway to zero the space,\nwhat's the point?\n\n regards, tom lane",
"msg_date": "Wed, 17 Mar 2010 10:16:00 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "On 3/17/10 2:52 AM, Greg Stark wrote:\n> On Wed, Mar 17, 2010 at 7:32 AM, Pierre C<[email protected]> wrote:\n>>> I was thinking in something like that, except that the factor I'd use\n>>> would be something like 50% or 100% of current size, capped at (say) 1 GB.\n>\n> This turns out to be a bad idea. One of the first thing Oracle DBAs\n> are told to do is change this default setting to allocate some\n> reasonably large fixed size rather than scaling upwards.\n>\n> This might be mostly due to Oracle's extent-based space management but\n> I'm not so sure. Recall that the filesystem is probably doing some\n> rounding itself. If you allocate 120kB it's probably allocating 128kB\n> itself anyways. Having two layers rounding up will result in odd\n> behaviour.\n>\n> In any case I was planning on doing this a while back. Then I ran some\n> experiments and couldn't actually demonstrate any problem. ext2 seems\n> to do a perfectly reasonable job of avoiding this problem. All the\n> files were mostly large contiguous blocks after running some tests --\n> IIRC running pgbench.\n\nThis is one of the more-or-less solved problems in Unix/Linux. Ext* file systems have a \"reserve\" usually of 10% of the disk space that nobody except root can use. It's not for root, it's because with 10% of the disk free, you can almost always do a decent job of allocating contiguous blocks and get good performance. Unless Postgres has some weird problem that Linux has never seen before (and that wouldn't be unprecedented...), there's probably no need to fool with file-allocation strategies.\n\nCraig\n",
"msg_date": "Wed, 17 Mar 2010 09:41:44 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "Greg is correct, as usual. Geometric growth of files is A Bad Thing in an Oracle DBA's world, since you can unexpectedly (automatically?) run out of file system space when the database determines it needs x% more extents than last time.\n\nThe concept of contiguous extents, however, has some merit, particularly when restoring databases. Prior to parallel restore, a table's files were created and extended in roughly contiguous allocations, presuming there was no other activity on your database disks. (You do dedicate disks, don't you?) When using 8-way parallel restore against a six-disk RAID 10 group I found that table and index scan performance dropped by about 10x. I/O performance was restored by either clustering the tables one at a time, or by dropping and restoring them one at a time. The only reason I can come up with for this behavior is file fragmentation and increased seek times.\n\nIf PostgreSQL had a mechanism to pre-allocate files prior to restoring the database that might mitigate the problem. \n\nThen if we could only get parallel index operations ...\n\nBob Lunney\n\n--- On Wed, 3/17/10, Greg Stark <[email protected]> wrote:\n\n> From: Greg Stark <[email protected]>\n> Subject: Re: [PERFORM] Block at a time ...\n> To: \"Pierre C\" <[email protected]>\n> Cc: \"Alvaro Herrera\" <[email protected]>, \"Dave Crooke\" <[email protected]>, [email protected]\n> Date: Wednesday, March 17, 2010, 5:52 AM\n> On Wed, Mar 17, 2010 at 7:32 AM,\n> Pierre C <[email protected]>\n> wrote:\n> >> I was thinking in something like that, except that\n> the factor I'd use\n> >> would be something like 50% or 100% of current\n> size, capped at (say) 1 GB.\n> \n> This turns out to be a bad idea. One of the first thing\n> Oracle DBAs\n> are told to do is change this default setting to allocate\n> some\n> reasonably large fixed size rather than scaling upwards.\n> \n> This might be mostly due to Oracle's extent-based space\n> management but\n> I'm not so sure. Recall that the filesystem is probably\n> doing some\n> rounding itself. If you allocate 120kB it's probably\n> allocating 128kB\n> itself anyways. Having two layers rounding up will result\n> in odd\n> behaviour.\n> \n> In any case I was planning on doing this a while back. Then\n> I ran some\n> experiments and couldn't actually demonstrate any problem.\n> ext2 seems\n> to do a perfectly reasonable job of avoiding this problem.\n> All the\n> files were mostly large contiguous blocks after running\n> some tests --\n> IIRC running pgbench.\n> \n> \n> > Using fallocate() ?\n> \n> I think we need posix_fallocate().\n> \n> -- \n> greg\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n",
"msg_date": "Wed, 17 Mar 2010 10:02:03 -0700 (PDT)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "\nOn Mar 17, 2010, at 9:41 AM, Craig James wrote:\n\n> On 3/17/10 2:52 AM, Greg Stark wrote:\n>> On Wed, Mar 17, 2010 at 7:32 AM, Pierre C<[email protected]> wrote:\n>>>> I was thinking in something like that, except that the factor I'd use\n>>>> would be something like 50% or 100% of current size, capped at (say) 1 GB.\n>> \n>> This turns out to be a bad idea. One of the first thing Oracle DBAs\n>> are told to do is change this default setting to allocate some\n>> reasonably large fixed size rather than scaling upwards.\n>> \n>> This might be mostly due to Oracle's extent-based space management but\n>> I'm not so sure. Recall that the filesystem is probably doing some\n>> rounding itself. If you allocate 120kB it's probably allocating 128kB\n>> itself anyways. Having two layers rounding up will result in odd\n>> behaviour.\n>> \n>> In any case I was planning on doing this a while back. Then I ran some\n>> experiments and couldn't actually demonstrate any problem. ext2 seems\n>> to do a perfectly reasonable job of avoiding this problem. All the\n>> files were mostly large contiguous blocks after running some tests --\n>> IIRC running pgbench.\n> \n> This is one of the more-or-less solved problems in Unix/Linux. Ext* file systems have a \"reserve\" usually of 10% of the disk space that nobody except root can use. It's not for root, it's because with 10% of the disk free, you can almost always do a decent job of allocating contiguous blocks and get good performance. Unless Postgres has some weird problem that Linux has never seen before (and that wouldn't be unprecedented...), there's probably no need to fool with file-allocation strategies.\n> \n> Craig\n> \n\nIts fairly easy to break. Just do a parallel import with say, 16 concurrent tables being written to at once. Result? Fragmented tables.\n\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 22 Mar 2010 11:47:43 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "\n>> This is one of the more-or-less solved problems in Unix/Linux. Ext* \n>> file systems have a \"reserve\" usually of 10% of the disk space that \n>> nobody except root can use. It's not for root, it's because with 10% \n>> of the disk free, you can almost always do a decent job of allocating \n>> contiguous blocks and get good performance. Unless Postgres has some \n>> weird problem that Linux has never seen before (and that wouldn't be \n>> unprecedented...), there's probably no need to fool with \n>> file-allocation strategies.\n>>\n>> Craig\n>\n> Its fairly easy to break. Just do a parallel import with say, 16 \n> concurrent tables being written to at once. Result? Fragmented tables.\n\nDelayed allocation (ext4, XFS) helps a lot for concurrent writing at a \nmedium-high rate (a few megabytes per second and up) when lots of data can \nsit in the cache and be flushed/allocated as big contiguous chunks. I'm \npretty sure ext4/XFS would pass your parallel import test.\n\nHowever if you have files like tables (and indexes) or logs that grow \nslowly over time (something like a few megabytes per hour or less), after \na few days/weeks/months, horrible fragmentation is an almost guaranteed \nresult on many filesystems (NTFS being perhaps the absolute worst).\n\n",
"msg_date": "Mon, 22 Mar 2010 21:55:27 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "This is why pre-allocation is a good idea if you have the space ....\n\nTom, what about a really simple command in a forthcoming release of PG that\nwould just preformat a 1GB file at a time? This is what I've always done\nscripted with Oracle (ALTER TABLESPACE foo ADD DATAFILE ....) rather than\nrelying on its autoextender when performance has been a concern.\n\nCheers\nDave\n\nOn Mon, Mar 22, 2010 at 3:55 PM, Pierre C <[email protected]> wrote:\n\n>\n> This is one of the more-or-less solved problems in Unix/Linux. Ext* file\n>>> systems have a \"reserve\" usually of 10% of the disk space that nobody except\n>>> root can use. It's not for root, it's because with 10% of the disk free,\n>>> you can almost always do a decent job of allocating contiguous blocks and\n>>> get good performance. Unless Postgres has some weird problem that Linux has\n>>> never seen before (and that wouldn't be unprecedented...), there's probably\n>>> no need to fool with file-allocation strategies.\n>>>\n>>> Craig\n>>>\n>>\n>> Its fairly easy to break. Just do a parallel import with say, 16\n>> concurrent tables being written to at once. Result? Fragmented tables.\n>>\n>\n> Delayed allocation (ext4, XFS) helps a lot for concurrent writing at a\n> medium-high rate (a few megabytes per second and up) when lots of data can\n> sit in the cache and be flushed/allocated as big contiguous chunks. I'm\n> pretty sure ext4/XFS would pass your parallel import test.\n>\n> However if you have files like tables (and indexes) or logs that grow\n> slowly over time (something like a few megabytes per hour or less), after a\n> few days/weeks/months, horrible fragmentation is an almost guaranteed result\n> on many filesystems (NTFS being perhaps the absolute worst).\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThis is why pre-allocation is a good idea if you have the space .... Tom, what about a really simple command in a forthcoming release of PG that would just preformat a 1GB file at a time? This is what I've always done scripted with Oracle (ALTER TABLESPACE foo ADD DATAFILE ....) rather than relying on its autoextender when performance has been a concern.\nCheersDaveOn Mon, Mar 22, 2010 at 3:55 PM, Pierre C <[email protected]> wrote:\n\n\n\nThis is one of the more-or-less solved problems in Unix/Linux. Ext* file systems have a \"reserve\" usually of 10% of the disk space that nobody except root can use. It's not for root, it's because with 10% of the disk free, you can almost always do a decent job of allocating contiguous blocks and get good performance. Unless Postgres has some weird problem that Linux has never seen before (and that wouldn't be unprecedented...), there's probably no need to fool with file-allocation strategies.\n\nCraig\n\n\nIts fairly easy to break. Just do a parallel import with say, 16 concurrent tables being written to at once. Result? Fragmented tables.\n\n\nDelayed allocation (ext4, XFS) helps a lot for concurrent writing at a medium-high rate (a few megabytes per second and up) when lots of data can sit in the cache and be flushed/allocated as big contiguous chunks. 
",
"msg_date": "Mon, 22 Mar 2010 16:06:00 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "On Mon, Mar 22, 2010 at 6:47 PM, Scott Carey <[email protected]> wrote:\n> Its fairly easy to break. Just do a parallel import with say, 16 concurrent tables being written to at once. Result? Fragmented tables.\n>\n\nFwiw I did do some investigation about this at one point and could not\ndemonstrate any significant fragmentation. But that was on Linux --\ndifferent filesystem implementations would have different success\nrates. And there could be other factors as well such as how full the\nfileystem is or how old it is.\n\n-- \ngreg\n",
"msg_date": "Mon, 22 Mar 2010 21:25:07 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "On 3/22/10 11:47 AM, Scott Carey wrote:\n>\n> On Mar 17, 2010, at 9:41 AM, Craig James wrote:\n>\n>> On 3/17/10 2:52 AM, Greg Stark wrote:\n>>> On Wed, Mar 17, 2010 at 7:32 AM, Pierre C<[email protected]> wrote:\n>>>>> I was thinking in something like that, except that the factor I'd use\n>>>>> would be something like 50% or 100% of current size, capped at (say) 1 GB.\n>>>\n>>> This turns out to be a bad idea. One of the first thing Oracle DBAs\n>>> are told to do is change this default setting to allocate some\n>>> reasonably large fixed size rather than scaling upwards.\n>>>\n>>> This might be mostly due to Oracle's extent-based space management but\n>>> I'm not so sure. Recall that the filesystem is probably doing some\n>>> rounding itself. If you allocate 120kB it's probably allocating 128kB\n>>> itself anyways. Having two layers rounding up will result in odd\n>>> behaviour.\n>>>\n>>> In any case I was planning on doing this a while back. Then I ran some\n>>> experiments and couldn't actually demonstrate any problem. ext2 seems\n>>> to do a perfectly reasonable job of avoiding this problem. All the\n>>> files were mostly large contiguous blocks after running some tests --\n>>> IIRC running pgbench.\n>>\n>> This is one of the more-or-less solved problems in Unix/Linux. Ext* file systems have a \"reserve\" usually of 10% of the disk space that nobody except root can use. It's not for root, it's because with 10% of the disk free, you can almost always do a decent job of allocating contiguous blocks and get good performance. Unless Postgres has some weird problem that Linux has never seen before (and that wouldn't be unprecedented...), there's probably no need to fool with file-allocation strategies.\n>>\n>> Craig\n>>\n>\n> Its fairly easy to break. Just do a parallel import with say, 16 concurrent tables being written to at once. Result? Fragmented tables.\n\nIs this from real-life experience? With fragmentation, there's a point of diminishing return. A couple head-seeks now and then hardly matter. My recollection is that even when there are lots of concurrent processes running that are all making files larger and larger, the Linux file system still can do a pretty good job of allocating mostly-contiguous space. It doesn't just dumbly allocate from some list, but rather tries to allocate in a way that results in pretty good \"contiguousness\" (if that's a word).\n\nOn the other hand, this is just from reading discussion groups like this one over the last few decades, I haven't tried it...\n\nCraig\n",
"msg_date": "Mon, 22 Mar 2010 16:46:13 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
},
{
"msg_contents": "\nOn Mar 22, 2010, at 4:46 PM, Craig James wrote:\n\n> On 3/22/10 11:47 AM, Scott Carey wrote:\n>> \n>> On Mar 17, 2010, at 9:41 AM, Craig James wrote:\n>> \n>>> On 3/17/10 2:52 AM, Greg Stark wrote:\n>>>> On Wed, Mar 17, 2010 at 7:32 AM, Pierre C<[email protected]> wrote:\n>>>>>> I was thinking in something like that, except that the factor I'd use\n>>>>>> would be something like 50% or 100% of current size, capped at (say) 1 GB.\n>>>> \n>>>> This turns out to be a bad idea. One of the first thing Oracle DBAs\n>>>> are told to do is change this default setting to allocate some\n>>>> reasonably large fixed size rather than scaling upwards.\n>>>> \n>>>> This might be mostly due to Oracle's extent-based space management but\n>>>> I'm not so sure. Recall that the filesystem is probably doing some\n>>>> rounding itself. If you allocate 120kB it's probably allocating 128kB\n>>>> itself anyways. Having two layers rounding up will result in odd\n>>>> behaviour.\n>>>> \n>>>> In any case I was planning on doing this a while back. Then I ran some\n>>>> experiments and couldn't actually demonstrate any problem. ext2 seems\n>>>> to do a perfectly reasonable job of avoiding this problem. All the\n>>>> files were mostly large contiguous blocks after running some tests --\n>>>> IIRC running pgbench.\n>>> \n>>> This is one of the more-or-less solved problems in Unix/Linux. Ext* file systems have a \"reserve\" usually of 10% of the disk space that nobody except root can use. It's not for root, it's because with 10% of the disk free, you can almost always do a decent job of allocating contiguous blocks and get good performance. Unless Postgres has some weird problem that Linux has never seen before (and that wouldn't be unprecedented...), there's probably no need to fool with file-allocation strategies.\n>>> \n>>> Craig\n>>> \n>> \n>> Its fairly easy to break. Just do a parallel import with say, 16 concurrent tables being written to at once. Result? Fragmented tables.\n> \n> Is this from real-life experience? With fragmentation, there's a point of diminishing return. A couple head-seeks now and then hardly matter. My recollection is that even when there are lots of concurrent processes running that are all making files larger and larger, the Linux file system still can do a pretty good job of allocating mostly-contiguous space. It doesn't just dumbly allocate from some list, but rather tries to allocate in a way that results in pretty good \"contiguousness\" (if that's a word).\n> \n> On the other hand, this is just from reading discussion groups like this one over the last few decades, I haven't tried it...\n> \n\nWell how fragmented is too fragmented depends on the use case and the hardware capability. In real world use, which for me means about 20 phases of large bulk inserts a day and not a lot of updates or index maintenance, the system gets somewhat fragmented but its not too bad. I did a dump/restore in 8.4 with parallel restore and it was much slower than usual. I did a single threaded restore and it was much faster. The dev environments are on ext3 and we see this pretty clearly -- but poor OS tuning can mask it (readahead parameter not set high enough). This is CentOS 5.4/5.3, perhaps later kernels are better at scheduling file writes to avoid this. We also use the deadline scheduler which helps a lot on concurrent reads, but might be messing up concurrent writes.\nOn production with xfs this was also bad at first --- in fact worse because xfs's default 'allocsize' setting is 64k. 
So files were regularly fragmented in small multiples of 64k. Changing the 'allocsize' parameter to 80MB made the restore process produce files with fragment sizes of 80MB. 80MB is big for most systems, but this array does over 1000MB/sec sequential read at peak, and only 200MB/sec with moderate fragmentation.\nIt won't fail to allocate disk space due to any 'reservations' of the delayed allocation, it just means that it won't choose to create a new file or extent within 80MB of another file that is open unless it has to. This can cause performance problems if you have lots of small files, which is why the default is 64k.\n\n\n\n> Craig\n\n\n",
"msg_date": "Fri, 26 Mar 2010 17:19:12 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Block at a time ..."
}
] |
[
{
"msg_contents": "Lets say I have a large table bigTable to which I would like to add\ntwo btree indexes. Is there a more efficient way to create indexes\nthan:\nCREATE INDEX idx_foo on bigTable (foo);\nCREATE INDEX idx_baz on bigTable (baz);\nOr\nCREATE INDEX CONCURRENTLY idx_foo on bigTable (foo);\nCREATE INDEX CONCURRENTLY idx_baz on bigTable (baz);\n\nAre there any particular performance optimizations that would be in\nplay in such a scenario?\n\nAt a minimum I assume that if both of the commands were started at\nabout the same time they would each scan the table in the same\ndirection and whichever creation was slower would benefit from most of\nthe table data it needed being prepopulated in shared buffers. Is this\nthe case?\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Tue, 16 Mar 2010 18:04:41 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Building multiple indexes concurrently"
},
{
"msg_contents": "On Mar 16, 2010, at 6:04 PM, Rob Wultsch wrote:\n\n> Lets say I have a large table bigTable to which I would like to add\n> two btree indexes. Is there a more efficient way to create indexes\n> than:\n> CREATE INDEX idx_foo on bigTable (foo);\n> CREATE INDEX idx_baz on bigTable (baz);\n> Or\n> CREATE INDEX CONCURRENTLY idx_foo on bigTable (foo);\n> CREATE INDEX CONCURRENTLY idx_baz on bigTable (baz);\n> \n> Are there any particular performance optimizations that would be in\n> play in such a scenario?\n> \n> At a minimum I assume that if both of the commands were started at\n> about the same time they would each scan the table in the same\n> direction and whichever creation was slower would benefit from most of\n> the table data it needed being prepopulated in shared buffers. Is this\n> the case?\n\nThat sounds reasonable to me. You might also look at upping your maintenance_work_mem for your session, as well.",
"msg_date": "Tue, 16 Mar 2010 22:13:27 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "Rob Wultsch wrote:\n> Are there any particular performance optimizations that would be in\n> play in such a scenario?\n> \n\nYou'd want to increase maintenance_work_mem significantly, just for the \nsessions that are running these. Something like this:\n\n|SET maintenance_work_mem = '1GB';|\n\nI don't know if that's a huge or tiny number relative to total RAM in \nyour server, you get the idea though.\n\nAlso, you should have a larger than default value for \ncheckpoint_segments in advance of this. That you can't set per session, \nbut you can adjust the value in the postgresql.conf and request a \nconfiguration reload--don't actually need to disrupt server operation by \nrestarting to do it. This will work for that:\n\npg_ctl reload\n\n\n> At a minimum I assume that if both of the commands were started at\n> about the same time they would each scan the table in the same\n> direction and whichever creation was slower would benefit from most of\n> the table data it needed being prepopulated in shared buffers. Is this\n> the case?\n> \n\nThis might be optimistic; whether it will be the case depends a lot on \nhow large your shared_buffers and OS buffer cache are relative to the \ntable involved. To pick an extreme example to demonstrate what I mean, \nif shared_buffers is the common default of <32MB, your table is 1TB, and \nyou have a giant disk array that reads fast, it's not very likely that \nthe second scan is going to find anything of interest left behind by the \nfirst one. You could try and make some rough estimates of how long it \nwill take to fill your RAM with table data at the expected I/O rate and \nguess how likely overlap is.\n\nThere's a trade-off here, which is that in return for making it possible \nthe data you need to rebuild the index is more likely to be in RAM when \nyou need it by building two at once, the resulting indexes are likely to \nend up interleaved on disk as they are written out. If you're doing a \nlot of index scans, the increased seek penalties for that may ultimately \nmake you regret having combined the two. Really impossible to predict \nwhich approach is going to be better long term without gathering so much \ndata that you might as well try and benchmark it on a test system \ninstead if you can instead. I am not a big fan of presuming one can \npredict performance instead of measuring it for complicated cases.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 17 Mar 2010 06:11:29 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> Rob Wultsch wrote:\n>> At a minimum I assume that if both of the commands were started at\n>> about the same time they would each scan the table in the same\n>> direction and whichever creation was slower would benefit from most of\n>> the table data it needed being prepopulated in shared buffers. Is this\n>> the case?\n\n> This might be optimistic;\n\nNo, it's not optimistic in the least, at least not since we implemented\nsynchronized seqscans (in 8.3 or thereabouts).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Mar 2010 10:30:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently "
},
{
"msg_contents": "On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane <[email protected]> wrote:\n> Greg Smith <[email protected]> writes:\n>> Rob Wultsch wrote:\n>>> At a minimum I assume that if both of the commands were started at\n>>> about the same time they would each scan the table in the same\n>>> direction and whichever creation was slower would benefit from most of\n>>> the table data it needed being prepopulated in shared buffers. Is this\n>>> the case?\n>\n>> This might be optimistic;\n>\n> No, it's not optimistic in the least, at least not since we implemented\n> synchronized seqscans (in 8.3 or thereabouts).\n>\n> regards, tom lane\n>\n\nWhere can I find details about this in the documentation?\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Wed, 17 Mar 2010 08:07:16 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "Rob Wultsch wrote:\n> On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane <[email protected]> wrote:\n> \n>> No, it's not optimistic in the least, at least not since we implemented\n>> synchronized seqscans (in 8.3 or thereabouts).\n>> \n> Where can I find details about this in the documentation?\n> \n\nIt's a behind the scenes optimization so it's not really documented on \nthe user side very well as far as I know; easy to forget it's even there \nas I did this morning. \nhttp://j-davis.com/postgresql/syncscan/syncscan.pdf is a presentation \ncovering it, and http://j-davis.com/postgresql/83v82_scans.html is also \nhelpful.\n\nWhile my pessimism on this part may have been overwrought, note the \nmessage interleaved on the list today with this discussion from Bob \nLunney discussing the other issue I brought up: \"When using 8-way \nparallel restore against a six-disk RAID 10 group I found that table and \nindex scan performance dropped by about 10x. I/O performance was \nrestored by either clustering the tables one at a time, or by dropping \nand restoring them one at a time. The only reason I can come up with \nfor this behavior is file fragmentation and increased seek times.\" Now, \nBob's situation may very well involve a heavy dose of table \nfragmentation from multiple active loading processes rather than index \nfragmentation, but this class of problem is common when trying to do too \nmany things at the same time. I'd hate to see you chase a short-term \noptimization (reduce total index built time) at the expense of long-term \noverhead (resulting indexes are not as efficient to scan).\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 17 Mar 2010 14:44:56 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "On Wednesday 17 March 2010 19:44:56 Greg Smith wrote:\n> Rob Wultsch wrote:\n> > On Wed, Mar 17, 2010 at 7:30 AM, Tom Lane <[email protected]> wrote:\n> >> No, it's not optimistic in the least, at least not since we implemented\n> >> synchronized seqscans (in 8.3 or thereabouts).\n> > \n> > Where can I find details about this in the documentation?\n> \n> It's a behind the scenes optimization so it's not really documented on\n> the user side very well as far as I know; easy to forget it's even there\n> as I did this morning.\n> http://j-davis.com/postgresql/syncscan/syncscan.pdf is a presentation\n> covering it, and http://j-davis.com/postgresql/83v82_scans.html is also\n> helpful.\n> \n> While my pessimism on this part may have been overwrought, note the\n> message interleaved on the list today with this discussion from Bob\n> Lunney discussing the other issue I brought up: \"When using 8-way\n> parallel restore against a six-disk RAID 10 group I found that table and\n> index scan performance dropped by about 10x. I/O performance was\n> restored by either clustering the tables one at a time, or by dropping\n> and restoring them one at a time. The only reason I can come up with\n> for this behavior is file fragmentation and increased seek times.\" Now,\n> Bob's situation may very well involve a heavy dose of table\n> fragmentation from multiple active loading processes rather than index\n> fragmentation, but this class of problem is common when trying to do too\n> many things at the same time. I'd hate to see you chase a short-term\n> optimization (reduce total index built time) at the expense of long-term\n> overhead (resulting indexes are not as efficient to scan).\nI find it way much easier to believe such issues exist on a tables in \nconstrast to indexes. The likelihood to get sequential accesses on an index is \nsmall enough on a big table to make it unlikely to matter much.\n\nWhats your theory to make it matter much?\n\nAndres\n",
"msg_date": "Wed, 17 Mar 2010 20:24:10 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "Andres Freund escribi�:\n\n> I find it way much easier to believe such issues exist on a tables in \n> constrast to indexes. The likelihood to get sequential accesses on an index is \n> small enough on a big table to make it unlikely to matter much.\n\nVacuum walks indexes sequentially, for one.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 17 Mar 2010 17:24:01 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Andres Freund escribi�:\n>\n> \n>> I find it way much easier to believe such issues exist on a tables in \n>> constrast to indexes. The likelihood to get sequential accesses on an index is \n>> small enough on a big table to make it unlikely to matter much.\n>> \n>\n> Vacuum walks indexes sequentially, for one.\n> \n\nThat and index-based range scans were the main two use-cases I was \nconcerned would be degraded by interleaving index builds, compared with \ndoing them in succession. I work often with time-oriented apps that \nhave heavy \"give me every record between <a> and <b>\" components to \nthem, and good sequential index performance can be an important \nrequirement for that kind of application.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 17 Mar 2010 16:49:01 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:\n> Alvaro Herrera wrote:\n> > Andres Freund escribió:\n> >\n> > \n> >> I find it way much easier to believe such issues exist on a tables in \n> >> constrast to indexes. The likelihood to get sequential accesses on an index is \n> >> small enough on a big table to make it unlikely to matter much.\n> >> \n> >\n> > Vacuum walks indexes sequentially, for one.\n> > \n> \n> That and index-based range scans were the main two use-cases I was \n> concerned would be degraded by interleaving index builds, compared with \n> doing them in succession. \n\nI guess that tweaking file systems to allocate in bigger chunks help\nhere ? I know that xfs can be tuned in that regard, but how about other\ncommon file systems like ext3 ?\n\n- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n",
"msg_date": "Wed, 17 Mar 2010 23:18:47 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "It seems to me that a separate partition / tablespace would be a much simpler approach.\nOn Mar 17, 2010, at 5:18 PM, Hannu Krosing wrote:\n\n> On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:\n>> Alvaro Herrera wrote:\n>>> Andres Freund escribió:\n>>> \n>>> \n>>>> I find it way much easier to believe such issues exist on a tables in \n>>>> constrast to indexes. The likelihood to get sequential accesses on an index is \n>>>> small enough on a big table to make it unlikely to matter much.\n>>>> \n>>> \n>>> Vacuum walks indexes sequentially, for one.\n>>> \n>> \n>> That and index-based range scans were the main two use-cases I was \n>> concerned would be degraded by interleaving index builds, compared with \n>> doing them in succession. \n> \n> I guess that tweaking file systems to allocate in bigger chunks help\n> here ? I know that xfs can be tuned in that regard, but how about other\n> common file systems like ext3 ?\n> \n> - \n> Hannu Krosing http://www.2ndQuadrant.com\n> PostgreSQL Scalability and Availability \n> Services, Consulting and Training\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 18 Mar 2010 16:12:08 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
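One way to sketch that suggestion, assuming separate disks or partitions are already mounted and writable by the postgres user; all names and paths here are placeholders:

    CREATE TABLESPACE idx_space_a LOCATION '/disks/a/pg_indexes';
    CREATE TABLESPACE idx_space_b LOCATION '/disks/b/pg_indexes';

    -- each concurrent build then writes to its own spindles
    CREATE INDEX idx_big_a ON big_table (col_a) TABLESPACE idx_space_a;
    CREATE INDEX idx_big_b ON big_table (col_b) TABLESPACE idx_space_b;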
{
"msg_contents": "On Thu, 2010-03-18 at 16:12 -0400, Justin Pitts wrote:\n> It seems to me that a separate partition / tablespace would be a much simpler approach.\n\nDo you mean a separate partition/ tablespace for _each_ index built\nconcurrently ?\n\n> On Mar 17, 2010, at 5:18 PM, Hannu Krosing wrote:\n> \n> > On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:\n> >> Alvaro Herrera wrote:\n> >>> Andres Freund escribió:\n> >>> \n> >>> \n> >>>> I find it way much easier to believe such issues exist on a tables in \n> >>>> constrast to indexes. The likelihood to get sequential accesses on an index is \n> >>>> small enough on a big table to make it unlikely to matter much.\n> >>>> \n> >>> \n> >>> Vacuum walks indexes sequentially, for one.\n> >>> \n> >> \n> >> That and index-based range scans were the main two use-cases I was \n> >> concerned would be degraded by interleaving index builds, compared with \n> >> doing them in succession. \n> > \n> > I guess that tweaking file systems to allocate in bigger chunks help\n> > here ? I know that xfs can be tuned in that regard, but how about other\n> > common file systems like ext3 ?\n> > \n> > - \n> > Hannu Krosing http://www.2ndQuadrant.com\n> > PostgreSQL Scalability and Availability \n> > Services, Consulting and Training\n> > \n> > \n> > \n> > -- \n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n",
"msg_date": "Thu, 18 Mar 2010 23:20:35 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "Yes.\nOn Mar 18, 2010, at 5:20 PM, Hannu Krosing wrote:\n\n> On Thu, 2010-03-18 at 16:12 -0400, Justin Pitts wrote:\n>> It seems to me that a separate partition / tablespace would be a much simpler approach.\n> \n> Do you mean a separate partition/ tablespace for _each_ index built\n> concurrently ?\n\n",
"msg_date": "Thu, 18 Mar 2010 17:45:53 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
},
{
"msg_contents": "On Wednesday 17 March 2010 22:18:47 Hannu Krosing wrote:\n> On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:\n> > Alvaro Herrera wrote:\n> > > Andres Freund escribió:\n> > >> I find it way much easier to believe such issues exist on a tables in\n> > >> constrast to indexes. The likelihood to get sequential accesses on an\n> > >> index is small enough on a big table to make it unlikely to matter\n> > >> much.\n> > > \n> > > Vacuum walks indexes sequentially, for one.\n> > \n> > That and index-based range scans were the main two use-cases I was\n> > concerned would be degraded by interleaving index builds, compared with\n> > doing them in succession.\n> \n> I guess that tweaking file systems to allocate in bigger chunks help\n> here ? I know that xfs can be tuned in that regard, but how about other\n> common file systems like ext3 ?\next4 should do that now by allocating the space for the files only after some \ntime or uppon things like fsync (xfs does the same).\next3 has, as far as I know, neither the ability to change allocation size nor \ncan do delayed allocation.\n\nAndres\n",
"msg_date": "Fri, 19 Mar 2010 18:38:08 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes concurrently"
}
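A sketch of the kind of filesystem-level tuning being referred to; the mount options and sizes below are assumptions to be verified against the xfs and ext4 documentation for the kernel in use:

    # /etc/fstab entry for an XFS data volume with a larger preferred
    # buffered-I/O allocation size
    /dev/sdb1  /srv/pgdata  xfs  noatime,allocsize=64m  0 2

    # ext4 uses delayed allocation by default; ext3 offers no comparable knob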
] |
[
{
"msg_contents": "I am running into a problem with a particular query. The execution plan \ncost shows that the Seq Scan is a better bet (cost=54020.49..54020.55) \nover the forced index 'enable_seqscan = false' \n(cost=1589703.87..1589703.93). But when I run the query both ways I get \na vastly different result (below). It appears not to want to bracket the \nsalesitems off of the 'id' foreign_key unless I force it.\n\nIs there a way to rewrite or hint the planner to get me the better plan \nwithout resorting to 'enable_seqscan' manipulation (or am I missing \nsomething)?\n\npostream=> select version();\n version\n-------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC \ni386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)\n\n\npostream=> SET enable_seqscan = false;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count, \nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != ''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) + \ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + \ncast(sysstrings.data as time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1589703.87..1589703.93 rows=13 width=35) (actual \ntime=33.414..33.442 rows=12 loops=1)\n -> Nested Loop (cost=0.01..1588978.22 rows=96753 width=35) (actual \ntime=0.284..22.115 rows=894 loops=1)\n -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4) \n(actual time=0.207..4.671 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on sysstrings \n(cost=0.00..5.78 rows=1 width=175) (actual time=0.073..0.078 rows=1 loops=1)\n Index Cond: (id = 'net/Console/Employee/Day End \nTime'::text)\n -> Index Scan using sales_tranzdate_index on sales s \n(cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 \nrows=225 loops=1)\n Index Cond: ((s.tranzdate >= ('2010-02-15'::date + \n(\"outer\".data)::time without time zone)) AND (s.tranzdate < \n('2010-02-16'::date + (\"outer\".data)::time without time zone)))\n Filter: ((NOT void) AND (NOT suspended))\n -> Index Scan using salesitems_pkey on salesitems si \n(cost=0.00..70.05 rows=30 width=39) (actual time=0.026..0.052 rows=4 \nloops=225)\n Index Cond: (si.id = \"outer\".id)\n Filter: ((group1_id <> ''::text) AND (group1_id IS NOT \nNULL) AND (NOT void))\n Total runtime: 33.734 ms\n(12 rows)\n\npostream=> SET enable_seqscan = true;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count, \nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != ''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) + \ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + \ncast(sysstrings.data as 
time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=54020.49..54020.55 rows=13 width=35) (actual \ntime=5564.929..5564.957 rows=12 loops=1)\n -> Hash Join (cost=2539.63..53294.84 rows=96753 width=35) (actual \ntime=5502.324..5556.262 rows=894 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Seq Scan on salesitems si (cost=0.00..30576.60 \nrows=885215 width=39) (actual time=0.089..3099.453 rows=901249 loops=1)\n Filter: ((group1_id <> ''::text) AND (group1_id IS NOT \nNULL) AND (NOT void))\n -> Hash (cost=2394.31..2394.31 rows=22530 width=4) (actual \ntime=3.329..3.329 rows=0 loops=1)\n -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4) \n(actual time=0.217..2.749 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on \nsysstrings (cost=0.00..5.78 rows=1 width=175) (actual time=0.077..0.085 \nrows=1 loops=1)\n Index Cond: (id = 'net/Console/Employee/Day \nEnd Time'::text)\n -> Index Scan using sales_tranzdate_index on \nsales s (cost=0.01..1825.27 rows=22530 width=12) (actual \ntime=0.074..1.945 rows=225 loops=1)\n Index Cond: ((s.tranzdate >= \n('2010-02-15'::date + (\"outer\".data)::time without time zone)) AND \n(s.tranzdate < ('2010-02-16'::date + (\"outer\".data)::time without time \nzone)))\n Filter: ((NOT void) AND (NOT suspended))\n Total runtime: 5565.262 ms\n(13 rows)\n\n\n-- \nChristian Brink\n\n\n",
"msg_date": "Wed, 17 Mar 2010 17:25:35 -0400",
"msg_from": "Christian Brink <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing index scan on query produces 16x faster"
},
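If the enable_seqscan workaround does have to be used while a better fix is found, it can at least be confined to the one query; this is a sketch using SET LOCAL, which reverts automatically at commit:

    BEGIN;
    SET LOCAL enable_seqscan = off;
    SELECT si.group1_id AS name, sum(si.qty) AS count, sum(si.amt) AS amt
      FROM salesitems si, sales s, sysstrings
     WHERE ...          -- same join and date predicates as above
     GROUP BY name;
    COMMIT;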
{
"msg_contents": "On Wed, Mar 17, 2010 at 5:25 PM, Christian Brink <[email protected]>wrote:\n\n>\n> -> Index Scan using sales_tranzdate_index on sales s\n> (cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 rows=225\n> loops=1)\n>\n\nHave you tried increasing the statistics on that table (and then analyzing)?\nThe estimates for that index scan are off by a factor of 100, which may\nindicate why the planner is trying so hard to avoid a nestloop there.\n\n-- \n- David T. Wilson\[email protected]\n\nOn Wed, Mar 17, 2010 at 5:25 PM, Christian Brink <[email protected]> wrote:\n\n -> Index Scan using sales_tranzdate_index on sales s (cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 rows=225 loops=1)Have you tried increasing the statistics on that table (and then analyzing)? The estimates for that index scan are off by a factor of 100, which may indicate why the planner is trying so hard to avoid a nestloop there.\n-- - David T. [email protected]",
"msg_date": "Wed, 17 Mar 2010 17:56:55 -0400",
"msg_from": "David Wilson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster"
},
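A sketch of what that suggestion looks like in practice, applied to the column behind sales_tranzdate_index; the target value of 200 is only an example:

    ALTER TABLE sales ALTER COLUMN tranzdate SET STATISTICS 200;
    ANALYZE sales;
    -- then re-run EXPLAIN ANALYZE and compare the row estimates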
{
"msg_contents": "Christian Brink <[email protected]> writes:\n> Is there a way to rewrite or hint the planner to get me the better plan \n> without resorting to 'enable_seqscan' manipulation (or am I missing \n> something)?\n\nI think your problem is here:\n\n> PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC \n> i386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)\n\nRecent versions are significantly smarter about grouping operations\nthan that dinosaur.\n\n(Even if you must stay on 8.0.x, you should at least be running\n8.0.something-recent; what you have is full of security and data-loss\nrisks. 8.0.24 was released this week.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Mar 2010 18:04:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster "
},
{
"msg_contents": "I'm running 8.4.2 and have noticed a similar heavy preference for\nsequential scans and hash joins over index scans and nested loops. Our\ndatabase is can basically fit in cache 100% so this may not be\napplicable to your situation, but the following params seemed to help\nus:\n\nseq_page_cost = 1.0\nrandom_page_cost = 1.01\ncpu_tuple_cost = 0.0001\ncpu_index_tuple_cost = 0.00005\ncpu_operator_cost = 0.000025\neffective_cache_size = 1000MB\nshared_buffers = 1000MB\n\n\nMight I suggest the Postgres developers reconsider these defaults for\n9.0 release, or perhaps provide a few sets of tuning params for\ndifferent workloads in the default install/docs? The cpu_*_cost in\nparticular seem to be way off afaict. I may be dead wrong though, fwiw\n=)\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Christian\nBrink\nSent: Wednesday, March 17, 2010 2:26 PM\nTo: [email protected]\nSubject: [PERFORM] Forcing index scan on query produces 16x faster\n\nI am running into a problem with a particular query. The execution plan \ncost shows that the Seq Scan is a better bet (cost=54020.49..54020.55) \nover the forced index 'enable_seqscan = false' \n(cost=1589703.87..1589703.93). But when I run the query both ways I get \na vastly different result (below). It appears not to want to bracket the\n\nsalesitems off of the 'id' foreign_key unless I force it.\n\nIs there a way to rewrite or hint the planner to get me the better plan \nwithout resorting to 'enable_seqscan' manipulation (or am I missing \nsomething)?\n\npostream=> select version();\n version\n------------------------------------------------------------------------\n-------------------------------------------------\n PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC \ni386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)\n\n\npostream=> SET enable_seqscan = false;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count, \nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != ''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) + \ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + \ncast(sysstrings.data as time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------------------\n HashAggregate (cost=1589703.87..1589703.93 rows=13 width=35) (actual \ntime=33.414..33.442 rows=12 loops=1)\n -> Nested Loop (cost=0.01..1588978.22 rows=96753 width=35) (actual\n\ntime=0.284..22.115 rows=894 loops=1)\n -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4) \n(actual time=0.207..4.671 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on sysstrings \n(cost=0.00..5.78 rows=1 width=175) (actual time=0.073..0.078 rows=1\nloops=1)\n Index Cond: (id = 'net/Console/Employee/Day End \nTime'::text)\n -> Index Scan using sales_tranzdate_index on sales s \n(cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 \nrows=225 loops=1)\n Index Cond: ((s.tranzdate >= ('2010-02-15'::date +\n\n(\"outer\".data)::time without time zone)) AND 
(s.tranzdate < \n('2010-02-16'::date + (\"outer\".data)::time without time zone)))\n Filter: ((NOT void) AND (NOT suspended))\n -> Index Scan using salesitems_pkey on salesitems si \n(cost=0.00..70.05 rows=30 width=39) (actual time=0.026..0.052 rows=4 \nloops=225)\n Index Cond: (si.id = \"outer\".id)\n Filter: ((group1_id <> ''::text) AND (group1_id IS NOT \nNULL) AND (NOT void))\n Total runtime: 33.734 ms\n(12 rows)\n\npostream=> SET enable_seqscan = true;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count, \nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != ''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) + \ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + \ncast(sysstrings.data as time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------------------------\n HashAggregate (cost=54020.49..54020.55 rows=13 width=35) (actual \ntime=5564.929..5564.957 rows=12 loops=1)\n -> Hash Join (cost=2539.63..53294.84 rows=96753 width=35) (actual \ntime=5502.324..5556.262 rows=894 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Seq Scan on salesitems si (cost=0.00..30576.60 \nrows=885215 width=39) (actual time=0.089..3099.453 rows=901249 loops=1)\n Filter: ((group1_id <> ''::text) AND (group1_id IS NOT \nNULL) AND (NOT void))\n -> Hash (cost=2394.31..2394.31 rows=22530 width=4) (actual \ntime=3.329..3.329 rows=0 loops=1)\n -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4)\n\n(actual time=0.217..2.749 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on \nsysstrings (cost=0.00..5.78 rows=1 width=175) (actual time=0.077..0.085\n\nrows=1 loops=1)\n Index Cond: (id = 'net/Console/Employee/Day \nEnd Time'::text)\n -> Index Scan using sales_tranzdate_index on \nsales s (cost=0.01..1825.27 rows=22530 width=12) (actual \ntime=0.074..1.945 rows=225 loops=1)\n Index Cond: ((s.tranzdate >= \n('2010-02-15'::date + (\"outer\".data)::time without time zone)) AND \n(s.tranzdate < ('2010-02-16'::date + (\"outer\".data)::time without time \nzone)))\n Filter: ((NOT void) AND (NOT suspended))\n Total runtime: 5565.262 ms\n(13 rows)\n\n\n-- \nChristian Brink\n\n\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Mar 2010 18:01:16 -0700",
"msg_from": "\"Eger, Patrick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster"
},
{
"msg_contents": "I've also observed the same behaviour on a very large table (200GB data,\n170GB for 2 indexes) ....\n\nI have a table which has 6 small columns, let's call them (a, b, c, d, e, f)\nand about 1 billion rows. There is an index on (a, b, c, d) - not my idea,\nHibernate requires primary keys for every table.\n\nIf I do the following query:\n\n*select max(c) from tbl where a=[constant literal] and b=[other constant\nliteral];*\n\n.... then with maxed out analysis histograms, and no changes to any of the\npage_cost type stuff, it still deparately wants toi do a full table scan,\nwhich is ... kinda slow.\n\nOf course, a billion row table is also rather suboptimal (our app collects a\nlot more data than it used to) and so I'm bypassing Hibernate, and sharding\nit all by time, so that the tables and indexes will be a manageable size,\nand will also be vacuum-free as my aging out process is now DROP TABLE :-)\n\nCheers\nDave\n\nOn Wed, Mar 17, 2010 at 8:01 PM, Eger, Patrick <[email protected]> wrote:\n\n> I'm running 8.4.2 and have noticed a similar heavy preference for\n> sequential scans and hash joins over index scans and nested loops. Our\n> database is can basically fit in cache 100% so this may not be\n> applicable to your situation, but the following params seemed to help\n> us:\n>\n> seq_page_cost = 1.0\n> random_page_cost = 1.01\n> cpu_tuple_cost = 0.0001\n> cpu_index_tuple_cost = 0.00005\n> cpu_operator_cost = 0.000025\n> effective_cache_size = 1000MB\n> shared_buffers = 1000MB\n>\n>\n> Might I suggest the Postgres developers reconsider these defaults for\n> 9.0 release, or perhaps provide a few sets of tuning params for\n> different workloads in the default install/docs? The cpu_*_cost in\n> particular seem to be way off afaict. I may be dead wrong though, fwiw\n> =)\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Christian\n> Brink\n> Sent: Wednesday, March 17, 2010 2:26 PM\n> To: [email protected]\n> Subject: [PERFORM] Forcing index scan on query produces 16x faster\n>\n> I am running into a problem with a particular query. The execution plan\n> cost shows that the Seq Scan is a better bet (cost=54020.49..54020.55)\n> over the forced index 'enable_seqscan = false'\n> (cost=1589703.87..1589703.93). But when I run the query both ways I get\n> a vastly different result (below). 
It appears not to want to bracket the\n>\n> salesitems off of the 'id' foreign_key unless I force it.\n>\n> Is there a way to rewrite or hint the planner to get me the better plan\n> without resorting to 'enable_seqscan' manipulation (or am I missing\n> something)?\n>\n> postream=> select version();\n> version\n> ------------------------------------------------------------------------\n> -------------------------------------------------\n> PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC\n> i386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)\n>\n>\n> postream=> SET enable_seqscan = false;\n> SET\n> postream=> EXPLAIN ANALYZE\n> postream-> SELECT si.group1_id as name, sum(si.qty) as count,\n> sum(si.amt) as amt\n> postream-> FROM salesitems si, sales s, sysstrings\n> postream-> WHERE si.id = s.id\n> postream-> AND si.group1_id != ''\n> postream-> AND si.group1_id IS NOT NULL\n> postream-> AND NOT si.void\n> postream-> AND NOT s.void\n> postream-> AND NOT s.suspended\n> postream-> AND s.tranzdate >= (cast('2010-02-15' as date) +\n> cast(sysstrings.data as time))\n> postream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) +\n> cast(sysstrings.data as time))\n> postream-> AND sysstrings.id='net/Console/Employee/Day End Time'\n> postream-> GROUP BY name;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ----------------------------------------------------\n> HashAggregate (cost=1589703.87..1589703.93 rows=13 width=35) (actual\n> time=33.414..33.442 rows=12 loops=1)\n> -> Nested Loop (cost=0.01..1588978.22 rows=96753 width=35) (actual\n>\n> time=0.284..22.115 rows=894 loops=1)\n> -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4)\n> (actual time=0.207..4.671 rows=225 loops=1)\n> -> Index Scan using sysstrings_pkey on sysstrings\n> (cost=0.00..5.78 rows=1 width=175) (actual time=0.073..0.078 rows=1\n> loops=1)\n> Index Cond: (id = 'net/Console/Employee/Day End\n> Time'::text)\n> -> Index Scan using sales_tranzdate_index on sales s\n> (cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464\n> rows=225 loops=1)\n> Index Cond: ((s.tranzdate >= ('2010-02-15'::date +\n>\n> (\"outer\".data)::time without time zone)) AND (s.tranzdate <\n> ('2010-02-16'::date + (\"outer\".data)::time without time zone)))\n> Filter: ((NOT void) AND (NOT suspended))\n> -> Index Scan using salesitems_pkey on salesitems si\n> (cost=0.00..70.05 rows=30 width=39) (actual time=0.026..0.052 rows=4\n> loops=225)\n> Index Cond: (si.id = \"outer\".id)\n> Filter: ((group1_id <> ''::text) AND (group1_id IS NOT\n> NULL) AND (NOT void))\n> Total runtime: 33.734 ms\n> (12 rows)\n>\n> postream=> SET enable_seqscan = true;\n> SET\n> postream=> EXPLAIN ANALYZE\n> postream-> SELECT si.group1_id as name, sum(si.qty) as count,\n> sum(si.amt) as amt\n> postream-> FROM salesitems si, sales s, sysstrings\n> postream-> WHERE si.id = s.id\n> postream-> AND si.group1_id != ''\n> postream-> AND si.group1_id IS NOT NULL\n> postream-> AND NOT si.void\n> postream-> AND NOT s.void\n> postream-> AND NOT s.suspended\n> postream-> AND s.tranzdate >= (cast('2010-02-15' as date) +\n> cast(sysstrings.data as time))\n> postream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) +\n> cast(sysstrings.data as time))\n> postream-> AND sysstrings.id='net/Console/Employee/Day End Time'\n> postream-> GROUP BY name;\n>\n> QUERY PLAN\n> 
------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ----------------------------------------------------------\n> HashAggregate (cost=54020.49..54020.55 rows=13 width=35) (actual\n> time=5564.929..5564.957 rows=12 loops=1)\n> -> Hash Join (cost=2539.63..53294.84 rows=96753 width=35) (actual\n> time=5502.324..5556.262 rows=894 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".id)\n> -> Seq Scan on salesitems si (cost=0.00..30576.60\n> rows=885215 width=39) (actual time=0.089..3099.453 rows=901249 loops=1)\n> Filter: ((group1_id <> ''::text) AND (group1_id IS NOT\n> NULL) AND (NOT void))\n> -> Hash (cost=2394.31..2394.31 rows=22530 width=4) (actual\n> time=3.329..3.329 rows=0 loops=1)\n> -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4)\n>\n> (actual time=0.217..2.749 rows=225 loops=1)\n> -> Index Scan using sysstrings_pkey on\n> sysstrings (cost=0.00..5.78 rows=1 width=175) (actual time=0.077..0.085\n>\n> rows=1 loops=1)\n> Index Cond: (id = 'net/Console/Employee/Day\n> End Time'::text)\n> -> Index Scan using sales_tranzdate_index on\n> sales s (cost=0.01..1825.27 rows=22530 width=12) (actual\n> time=0.074..1.945 rows=225 loops=1)\n> Index Cond: ((s.tranzdate >=\n> ('2010-02-15'::date + (\"outer\".data)::time without time zone)) AND\n> (s.tranzdate < ('2010-02-16'::date + (\"outer\".data)::time without time\n> zone)))\n> Filter: ((NOT void) AND (NOT suspended))\n> Total runtime: 5565.262 ms\n> (13 rows)\n>\n>\n> --\n> Christian Brink\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI've also observed the same behaviour on a very large table (200GB data, 170GB for 2 indexes) .... I have a table which has 6 small columns, let's call them (a, b, c, d, e, f) and about 1 billion rows. There is an index on (a, b, c, d) - not my idea, Hibernate requires primary keys for every table.\nIf I do the following query:select max(c) from tbl where a=[constant literal] and b=[other constant literal];.... then with maxed out analysis histograms, and no changes to any of the page_cost type stuff, it still deparately wants toi do a full table scan, which is ... kinda slow. \nOf course, a billion row table is also rather suboptimal (our app collects a lot more data than it used to) and so I'm bypassing Hibernate, and sharding it all by time, so that the tables and indexes will be a manageable size, and will also be vacuum-free as my aging out process is now DROP TABLE :-)\nCheersDaveOn Wed, Mar 17, 2010 at 8:01 PM, Eger, Patrick <[email protected]> wrote:\nI'm running 8.4.2 and have noticed a similar heavy preference for\nsequential scans and hash joins over index scans and nested loops. Our\ndatabase is can basically fit in cache 100% so this may not be\napplicable to your situation, but the following params seemed to help\nus:\n\nseq_page_cost = 1.0\nrandom_page_cost = 1.01\ncpu_tuple_cost = 0.0001\ncpu_index_tuple_cost = 0.00005\ncpu_operator_cost = 0.000025\neffective_cache_size = 1000MB\nshared_buffers = 1000MB\n\n\nMight I suggest the Postgres developers reconsider these defaults for\n9.0 release, or perhaps provide a few sets of tuning params for\ndifferent workloads in the default install/docs? 
The cpu_*_cost in\nparticular seem to be way off afaict. I may be dead wrong though, fwiw\n=)\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Christian\nBrink\nSent: Wednesday, March 17, 2010 2:26 PM\nTo: [email protected]\nSubject: [PERFORM] Forcing index scan on query produces 16x faster\n\nI am running into a problem with a particular query. The execution plan\ncost shows that the Seq Scan is a better bet (cost=54020.49..54020.55)\nover the forced index 'enable_seqscan = false'\n(cost=1589703.87..1589703.93). But when I run the query both ways I get\na vastly different result (below). It appears not to want to bracket the\n\nsalesitems off of the 'id' foreign_key unless I force it.\n\nIs there a way to rewrite or hint the planner to get me the better plan\nwithout resorting to 'enable_seqscan' manipulation (or am I missing\nsomething)?\n\npostream=> select version();\n version\n------------------------------------------------------------------------\n-------------------------------------------------\n PostgreSQL 8.0.3 on i386-redhat-linux-gnu, compiled by GCC\ni386-redhat-linux-gcc (GCC) 4.0.0 20050505 (Red Hat 4.0.0-4)\n\n\npostream=> SET enable_seqscan = false;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count,\nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != ''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) +\ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) +\ncast(sysstrings.data as time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n\nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------------------\n HashAggregate (cost=1589703.87..1589703.93 rows=13 width=35) (actual\ntime=33.414..33.442 rows=12 loops=1)\n -> Nested Loop (cost=0.01..1588978.22 rows=96753 width=35) (actual\n\ntime=0.284..22.115 rows=894 loops=1)\n -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4)\n(actual time=0.207..4.671 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on sysstrings\n(cost=0.00..5.78 rows=1 width=175) (actual time=0.073..0.078 rows=1\nloops=1)\n Index Cond: (id = 'net/Console/Employee/Day End\nTime'::text)\n -> Index Scan using sales_tranzdate_index on sales s\n(cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464\nrows=225 loops=1)\n Index Cond: ((s.tranzdate >= ('2010-02-15'::date +\n\n(\"outer\".data)::time without time zone)) AND (s.tranzdate <\n('2010-02-16'::date + (\"outer\".data)::time without time zone)))\n Filter: ((NOT void) AND (NOT suspended))\n -> Index Scan using salesitems_pkey on salesitems si\n(cost=0.00..70.05 rows=30 width=39) (actual time=0.026..0.052 rows=4\nloops=225)\n Index Cond: (si.id = \"outer\".id)\n Filter: ((group1_id <> ''::text) AND (group1_id IS NOT\nNULL) AND (NOT void))\n Total runtime: 33.734 ms\n(12 rows)\n\npostream=> SET enable_seqscan = true;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count,\nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != 
''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) +\ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) +\ncast(sysstrings.data as time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n\nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------------------------\n HashAggregate (cost=54020.49..54020.55 rows=13 width=35) (actual\ntime=5564.929..5564.957 rows=12 loops=1)\n -> Hash Join (cost=2539.63..53294.84 rows=96753 width=35) (actual\ntime=5502.324..5556.262 rows=894 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Seq Scan on salesitems si (cost=0.00..30576.60\nrows=885215 width=39) (actual time=0.089..3099.453 rows=901249 loops=1)\n Filter: ((group1_id <> ''::text) AND (group1_id IS NOT\nNULL) AND (NOT void))\n -> Hash (cost=2394.31..2394.31 rows=22530 width=4) (actual\ntime=3.329..3.329 rows=0 loops=1)\n -> Nested Loop (cost=0.01..2394.31 rows=22530 width=4)\n\n(actual time=0.217..2.749 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on\nsysstrings (cost=0.00..5.78 rows=1 width=175) (actual time=0.077..0.085\n\nrows=1 loops=1)\n Index Cond: (id = 'net/Console/Employee/Day\nEnd Time'::text)\n -> Index Scan using sales_tranzdate_index on\nsales s (cost=0.01..1825.27 rows=22530 width=12) (actual\ntime=0.074..1.945 rows=225 loops=1)\n Index Cond: ((s.tranzdate >=\n('2010-02-15'::date + (\"outer\".data)::time without time zone)) AND\n(s.tranzdate < ('2010-02-16'::date + (\"outer\".data)::time without time\nzone)))\n Filter: ((NOT void) AND (NOT suspended))\n Total runtime: 5565.262 ms\n(13 rows)\n\n\n--\nChristian Brink\n\n\n\n--\nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 18 Mar 2010 19:08:49 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster"
},
{
"msg_contents": "On Wed, Mar 17, 2010 at 9:01 PM, Eger, Patrick <[email protected]> wrote:\n> I'm running 8.4.2 and have noticed a similar heavy preference for\n> sequential scans and hash joins over index scans and nested loops. Our\n> database is can basically fit in cache 100% so this may not be\n> applicable to your situation, but the following params seemed to help\n> us:\n>\n> seq_page_cost = 1.0\n> random_page_cost = 1.01\n> cpu_tuple_cost = 0.0001\n> cpu_index_tuple_cost = 0.00005\n> cpu_operator_cost = 0.000025\n> effective_cache_size = 1000MB\n> shared_buffers = 1000MB\n>\n>\n> Might I suggest the Postgres developers reconsider these defaults for\n> 9.0 release, or perhaps provide a few sets of tuning params for\n> different workloads in the default install/docs? The cpu_*_cost in\n> particular seem to be way off afaict. I may be dead wrong though, fwiw\n> =)\n\nThe default assume that the database is not cached in RAM. If it is,\nyou want to lower seq_page_cost and random_page_cost to something much\nsmaller, and typically make them equal. I often recommend 0.005, but\nI know others have had success with higher values.\n\nUltimately it would be nice to have a better model of how data gets\ncached in shared_buffers and the OS buffer cache, but that is not so\neasy.\n\n...Robert\n",
"msg_date": "Wed, 24 Mar 2010 20:46:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster"
},
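A per-session sketch of that advice for a fully cached database, before committing anything to postgresql.conf; 0.005 is the value mentioned above, not a universal recommendation:

    SET seq_page_cost = 0.005;
    SET random_page_cost = 0.005;
    EXPLAIN ANALYZE SELECT ...;   -- re-check the plans that were choosing seq scans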
{
"msg_contents": "Ok, the wording is a bit unclear in the documentation as to whether it is the cost for an entire *page* of tuples, or actual tuples. So something like the following might give better results for a fully-cached DB?\n\nseq_page_cost = 1.0\nrandom_page_cost = 1.1 #even memory has random access costs, lack of readahead, TLB misses, etc\ncpu_tuple_cost = 1.0\ncpu_index_tuple_cost = 0.5\ncpu_operator_cost = 0.25\neffective_cache_size = 1000MB\nshared_buffers = 1000MB\n\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]] \nSent: Wednesday, March 24, 2010 5:47 PM\nTo: Eger, Patrick\nCc: Christian Brink; [email protected]\nSubject: Re: [PERFORM] Forcing index scan on query produces 16x faster\n\nOn Wed, Mar 17, 2010 at 9:01 PM, Eger, Patrick <[email protected]> wrote:\n> I'm running 8.4.2 and have noticed a similar heavy preference for\n> sequential scans and hash joins over index scans and nested loops. Our\n> database is can basically fit in cache 100% so this may not be\n> applicable to your situation, but the following params seemed to help\n> us:\n>\n> seq_page_cost = 1.0\n> random_page_cost = 1.01\n> cpu_tuple_cost = 0.0001\n> cpu_index_tuple_cost = 0.00005\n> cpu_operator_cost = 0.000025\n> effective_cache_size = 1000MB\n> shared_buffers = 1000MB\n>\n>\n> Might I suggest the Postgres developers reconsider these defaults for\n> 9.0 release, or perhaps provide a few sets of tuning params for\n> different workloads in the default install/docs? The cpu_*_cost in\n> particular seem to be way off afaict. I may be dead wrong though, fwiw\n> =)\n\nThe default assume that the database is not cached in RAM. If it is,\nyou want to lower seq_page_cost and random_page_cost to something much\nsmaller, and typically make them equal. I often recommend 0.005, but\nI know others have had success with higher values.\n\nUltimately it would be nice to have a better model of how data gets\ncached in shared_buffers and the OS buffer cache, but that is not so\neasy.\n\n...Robert\n",
"msg_date": "Wed, 24 Mar 2010 17:59:33 -0700",
"msg_from": "\"Eger, Patrick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster"
},
{
"msg_contents": "On Wed, Mar 24, 2010 at 8:59 PM, Eger, Patrick <[email protected]> wrote:\n> Ok, the wording is a bit unclear in the documentation as to whether it is the cost for an entire *page* of tuples, or actual tuples. So something like the following might give better results for a fully-cached DB?\n>\n> seq_page_cost = 1.0\n> random_page_cost = 1.1 #even memory has random access costs, lack of readahead, TLB misses, etc\n> cpu_tuple_cost = 1.0\n> cpu_index_tuple_cost = 0.5\n> cpu_operator_cost = 0.25\n> effective_cache_size = 1000MB\n> shared_buffers = 1000MB\n\nYeah, you can do it that way, by jacking up the cpu_tuple costs. I\nprefer to lower the {random/seq}_page_cost values because it keeps the\ncost values in the range I'm used to seeing, but it works out to the\nsame thing.\n\nI am not sure that there is any benefit from making random_page_cost >\nseq_page_cost on a fully cached database. What does readahead mean in\nthe context of cached data? The data isn't likely physically\ncontiguous in RAM, and I'm not sure it would matter much if it were.\nBasically, what random_page_cost > seq_page_cost tends to do is\ndiscourage the use of index scans in borderline cases, so you want to\nbenchmark it and figure out which way is faster and then tune\naccordingly.\n\n...Robert\n",
"msg_date": "Wed, 24 Mar 2010 22:16:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster"
},
{
"msg_contents": "2010/3/25 Robert Haas <[email protected]>:\n> On Wed, Mar 17, 2010 at 9:01 PM, Eger, Patrick <[email protected]> wrote:\n>> I'm running 8.4.2 and have noticed a similar heavy preference for\n>> sequential scans and hash joins over index scans and nested loops. Our\n>> database is can basically fit in cache 100% so this may not be\n>> applicable to your situation, but the following params seemed to help\n>> us:\n>>\n>> seq_page_cost = 1.0\n>> random_page_cost = 1.01\n>> cpu_tuple_cost = 0.0001\n>> cpu_index_tuple_cost = 0.00005\n>> cpu_operator_cost = 0.000025\n>> effective_cache_size = 1000MB\n>> shared_buffers = 1000MB\n>>\n>>\n>> Might I suggest the Postgres developers reconsider these defaults for\n>> 9.0 release, or perhaps provide a few sets of tuning params for\n>> different workloads in the default install/docs? The cpu_*_cost in\n>> particular seem to be way off afaict. I may be dead wrong though, fwiw\n>> =)\n>\n> The default assume that the database is not cached in RAM. If it is,\n> you want to lower seq_page_cost and random_page_cost to something much\n> smaller, and typically make them equal. I often recommend 0.005, but\n> I know others have had success with higher values.\n>\n> Ultimately it would be nice to have a better model of how data gets\n> cached in shared_buffers and the OS buffer cache, but that is not so\n> easy.\n\nI have some work on this point with pgfincore project. Getting the\ninformation from the OS is actualy a bit slow but possible. I try to\nfind time to finish my patch in order to get the info in the\npg_statio_* views :)\n\n>\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain\n",
"msg_date": "Sun, 28 Mar 2010 23:04:24 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index scan on query produces 16x faster"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm running quite a large social community website (250k users, 16gb \ndatabase). We are currently preparing a complete relaunch and thinking \nabout switching from mysql 5.1.37 innodb to postgresql 8.4.2. The \ndatabase server is a dual dualcore operton 2216 with 12gb ram running on \ndebian amd64.\n\nFor a first impression I ran a simple query on our users table (snapshot \nwith only ~ 45.000 records). The table has an index on birthday_age \n[integer]. The test executes 10 times the same query and simply discards \nthe results. I ran the tests using a php and a ruby script, the results \nare almost the same.\n\nUnluckily mysql seems to be around 3x as fast as postgresql for this \nsimple query. There's no swapping, disc reading involved...everything is \nin ram.\n\nquery\nselect * from users where birthday_age between 12 and 13 or birthday_age \nbetween 20 and 22 limit 1000\n\nmysql\n{\"select_type\"=>\"SIMPLE\", \"key_len\"=>\"1\", \"id\"=>\"1\", \"table\"=>\"users\", \n\"type\"=>\"range\", \"possible_keys\"=>\"birthday_age\", \"rows\"=>\"7572\", \n\"Extra\"=>\"Using where\", \"ref\"=>nil, \"key\"=>\"birthday_age\"}\n15.104055404663\n14.209032058716\n18.857002258301\n15.714883804321\n14.73593711853\n15.048027038574\n14.589071273804\n14.847040176392\n15.192985534668\n15.115976333618\n\npostgresql\n{\"QUERY PLAN\"=>\"Limit (cost=125.97..899.11 rows=1000 width=448) (actual \ntime=0.927..4.990 rows=1000 loops=1)\"}\n{\"QUERY PLAN\"=>\" -> Bitmap Heap Scan on users (cost=125.97..3118.00 \nrows=3870 width=448) (actual time=0.925..3.420 rows=1000 loops=1)\"}\n{\"QUERY PLAN\"=>\" Recheck Cond: (((birthday_age >= 12) AND (birthday_age \n<= 13)) OR ((birthday_age >= 20) AND (birthday_age <= 22)))\"}\n{\"QUERY PLAN\"=>\" -> BitmapOr (cost=125.97..125.97 rows=3952 width=0) \n(actual time=0.634..0.634 rows=0 loops=1)\"}\n{\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..41.67 \nrows=1341 width=0) (actual time=0.260..0.260 rows=1327 loops=1)\"}\n{\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 12) AND (birthday_age <= \n13))\"}\n{\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..82.37 \nrows=2611 width=0) (actual time=0.370..0.370 rows=2628 loops=1)\"}\n{\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 20) AND (birthday_age <= \n22))\"}\n{\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n44.173002243042\n41.156768798828\n39.988040924072\n40.470123291016\n40.035963058472\n40.077924728394\n40.94386100769\n40.183067321777\n39.83211517334\n40.256977081299\n\nI also wonder why the reported runtime of 5.847 ms is so much different \nto the runtime reported of my scripts (both php and ruby are almost the \nsame). What's the best tool to time queries in postgresql? Can this be \ndone from pgadmin?\n\nThanks,\nCorin\n\n",
"msg_date": "Thu, 18 Mar 2010 15:31:18 +0100",
"msg_from": "Corin <[email protected]>",
"msg_from_op": true,
"msg_subject": "mysql to postgresql, performance questions"
},
{
"msg_contents": "I guess we need some more details about the test. Is the\nconnection/disconnection part of each test iteration? And how are the\ndatabases connected (using a socked / localhost / different host)?\n\nAnyway measuring such simple queries will tell you almost nothing about\nthe general app performance - use the queries that are used in the\napplication.\n\n> I also wonder why the reported runtime of 5.847 ms is so much different\n> to the runtime reported of my scripts (both php and ruby are almost the\n> same). What's the best tool to time queries in postgresql? Can this be\n> done from pgadmin?\n\nI doubt there's a 'best tool' to time queries, but I'd vote for logging\nfrom the application itself, as it measures the performance from the end\nuser view-point (and that's what you're interested in). Just put some\nsimple logging into the database access layer.\n\nregards\nTomas\n\n",
"msg_date": "Thu, 18 Mar 2010 15:50:53 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
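A server-side complement to application-level logging, assuming access to postgresql.conf; whether a threshold of 0 is tolerable depends on the log volume it produces:

    log_min_duration_statement = 0     # log every statement with its duration (ms)
    # or, to log only the slow ones:
    # log_min_duration_statement = 100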
{
"msg_contents": "If you expect this DB to be memory resident, you should update\nthe cpu/disk cost parameters in postgresql.conf. There was a\npost earlier today with some more reasonable starting values.\nCertainly your test DB will be memory resident.\n\nKen\n\nOn Thu, Mar 18, 2010 at 03:31:18PM +0100, Corin wrote:\n> Hi all,\n>\n> I'm running quite a large social community website (250k users, 16gb \n> database). We are currently preparing a complete relaunch and thinking \n> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database \n> server is a dual dualcore operton 2216 with 12gb ram running on debian \n> amd64.\n>\n> For a first impression I ran a simple query on our users table (snapshot \n> with only ~ 45.000 records). The table has an index on birthday_age \n> [integer]. The test executes 10 times the same query and simply discards \n> the results. I ran the tests using a php and a ruby script, the results are \n> almost the same.\n>\n> Unluckily mysql seems to be around 3x as fast as postgresql for this simple \n> query. There's no swapping, disc reading involved...everything is in ram.\n>\n> query\n> select * from users where birthday_age between 12 and 13 or birthday_age \n> between 20 and 22 limit 1000\n>\n> mysql\n> {\"select_type\"=>\"SIMPLE\", \"key_len\"=>\"1\", \"id\"=>\"1\", \"table\"=>\"users\", \n> \"type\"=>\"range\", \"possible_keys\"=>\"birthday_age\", \"rows\"=>\"7572\", \n> \"Extra\"=>\"Using where\", \"ref\"=>nil, \"key\"=>\"birthday_age\"}\n> 15.104055404663\n> 14.209032058716\n> 18.857002258301\n> 15.714883804321\n> 14.73593711853\n> 15.048027038574\n> 14.589071273804\n> 14.847040176392\n> 15.192985534668\n> 15.115976333618\n>\n> postgresql\n> {\"QUERY PLAN\"=>\"Limit (cost=125.97..899.11 rows=1000 width=448) (actual \n> time=0.927..4.990 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Heap Scan on users (cost=125.97..3118.00 \n> rows=3870 width=448) (actual time=0.925..3.420 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Recheck Cond: (((birthday_age >= 12) AND (birthday_age <= \n> 13)) OR ((birthday_age >= 20) AND (birthday_age <= 22)))\"}\n> {\"QUERY PLAN\"=>\" -> BitmapOr (cost=125.97..125.97 rows=3952 width=0) \n> (actual time=0.634..0.634 rows=0 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..41.67 \n> rows=1341 width=0) (actual time=0.260..0.260 rows=1327 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 12) AND (birthday_age <= \n> 13))\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..82.37 \n> rows=2611 width=0) (actual time=0.370..0.370 rows=2628 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 20) AND (birthday_age <= \n> 22))\"}\n> {\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n> 44.173002243042\n> 41.156768798828\n> 39.988040924072\n> 40.470123291016\n> 40.035963058472\n> 40.077924728394\n> 40.94386100769\n> 40.183067321777\n> 39.83211517334\n> 40.256977081299\n>\n> I also wonder why the reported runtime of 5.847 ms is so much different to \n> the runtime reported of my scripts (both php and ruby are almost the same). \n> What's the best tool to time queries in postgresql? Can this be done from \n> pgadmin?\n>\n> Thanks,\n> Corin\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 18 Mar 2010 10:00:17 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On 18 March 2010 14:31, Corin <[email protected]> wrote:\n\n> Hi all,\n>\n> I'm running quite a large social community website (250k users, 16gb\n> database). We are currently preparing a complete relaunch and thinking about\n> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server\n> is a dual dualcore operton 2216 with 12gb ram running on debian amd64.\n>\n> For a first impression I ran a simple query on our users table (snapshot\n> with only ~ 45.000 records). The table has an index on birthday_age\n> [integer]. The test executes 10 times the same query and simply discards the\n> results. I ran the tests using a php and a ruby script, the results are\n> almost the same.\n>\n> Unluckily mysql seems to be around 3x as fast as postgresql for this simple\n> query. There's no swapping, disc reading involved...everything is in ram.\n>\n> query\n> select * from users where birthday_age between 12 and 13 or birthday_age\n> between 20 and 22 limit 1000\n>\n> mysql\n> {\"select_type\"=>\"SIMPLE\", \"key_len\"=>\"1\", \"id\"=>\"1\", \"table\"=>\"users\",\n> \"type\"=>\"range\", \"possible_keys\"=>\"birthday_age\", \"rows\"=>\"7572\",\n> \"Extra\"=>\"Using where\", \"ref\"=>nil, \"key\"=>\"birthday_age\"}\n> 15.104055404663\n> 14.209032058716\n> 18.857002258301\n> 15.714883804321\n> 14.73593711853\n> 15.048027038574\n> 14.589071273804\n> 14.847040176392\n> 15.192985534668\n> 15.115976333618\n>\n> postgresql\n> {\"QUERY PLAN\"=>\"Limit (cost=125.97..899.11 rows=1000 width=448) (actual\n> time=0.927..4.990 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Heap Scan on users (cost=125.97..3118.00\n> rows=3870 width=448) (actual time=0.925..3.420 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Recheck Cond: (((birthday_age >= 12) AND (birthday_age <=\n> 13)) OR ((birthday_age >= 20) AND (birthday_age <= 22)))\"}\n> {\"QUERY PLAN\"=>\" -> BitmapOr (cost=125.97..125.97 rows=3952 width=0)\n> (actual time=0.634..0.634 rows=0 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..41.67\n> rows=1341 width=0) (actual time=0.260..0.260 rows=1327 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 12) AND (birthday_age <=\n> 13))\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..82.37\n> rows=2611 width=0) (actual time=0.370..0.370 rows=2628 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 20) AND (birthday_age <=\n> 22))\"}\n> {\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n> 44.173002243042\n> 41.156768798828\n> 39.988040924072\n> 40.470123291016\n> 40.035963058472\n> 40.077924728394\n> 40.94386100769\n> 40.183067321777\n> 39.83211517334\n> 40.256977081299\n>\n> I also wonder why the reported runtime of 5.847 ms is so much different to\n> the runtime reported of my scripts (both php and ruby are almost the same).\n> What's the best tool to time queries in postgresql? Can this be done from\n> pgadmin?\n>\n>\npgAdmin will return the query time in the status bar of a query window.\nSimilarly, you can use psql and activate query times by using \"\\timing\".\n\nRegards\n\nThom\n\nOn 18 March 2010 14:31, Corin <[email protected]> wrote:\n\nHi all,\n\nI'm running quite a large social community website (250k users, 16gb database). We are currently preparing a complete relaunch and thinking about switching from mysql 5.1.37 innodb to postgresql 8.4.2. 
The database server is a dual dualcore operton 2216 with 12gb ram running on debian amd64.\n\nFor a first impression I ran a simple query on our users table (snapshot with only ~ 45.000 records). The table has an index on birthday_age [integer]. The test executes 10 times the same query and simply discards the results. I ran the tests using a php and a ruby script, the results are almost the same.\n\nUnluckily mysql seems to be around 3x as fast as postgresql for this simple query. There's no swapping, disc reading involved...everything is in ram.\n\nquery\nselect * from users where birthday_age between 12 and 13 or birthday_age between 20 and 22 limit 1000\n\nmysql\n{\"select_type\"=>\"SIMPLE\", \"key_len\"=>\"1\", \"id\"=>\"1\", \"table\"=>\"users\", \"type\"=>\"range\", \"possible_keys\"=>\"birthday_age\", \"rows\"=>\"7572\", \"Extra\"=>\"Using where\", \"ref\"=>nil, \"key\"=>\"birthday_age\"}\n\n\n15.104055404663\n14.209032058716\n18.857002258301\n15.714883804321\n14.73593711853\n15.048027038574\n14.589071273804\n14.847040176392\n15.192985534668\n15.115976333618\n\npostgresql\n{\"QUERY PLAN\"=>\"Limit (cost=125.97..899.11 rows=1000 width=448) (actual time=0.927..4.990 rows=1000 loops=1)\"}\n{\"QUERY PLAN\"=>\" -> Bitmap Heap Scan on users (cost=125.97..3118.00 rows=3870 width=448) (actual time=0.925..3.420 rows=1000 loops=1)\"}\n{\"QUERY PLAN\"=>\" Recheck Cond: (((birthday_age >= 12) AND (birthday_age <= 13)) OR ((birthday_age >= 20) AND (birthday_age <= 22)))\"}\n{\"QUERY PLAN\"=>\" -> BitmapOr (cost=125.97..125.97 rows=3952 width=0) (actual time=0.634..0.634 rows=0 loops=1)\"}\n{\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..41.67 rows=1341 width=0) (actual time=0.260..0.260 rows=1327 loops=1)\"}\n{\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 12) AND (birthday_age <= 13))\"}\n{\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..82.37 rows=2611 width=0) (actual time=0.370..0.370 rows=2628 loops=1)\"}\n{\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 20) AND (birthday_age <= 22))\"}\n{\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n44.173002243042\n41.156768798828\n39.988040924072\n40.470123291016\n40.035963058472\n40.077924728394\n40.94386100769\n40.183067321777\n39.83211517334\n40.256977081299\n\nI also wonder why the reported runtime of 5.847 ms is so much different to the runtime reported of my scripts (both php and ruby are almost the same). What's the best tool to time queries in postgresql? Can this be done from pgadmin?\npgAdmin will return the query time in the status bar of a query window. Similarly, you can use psql and activate query times by using \"\\timing\".RegardsThom",
"msg_date": "Thu, 18 Mar 2010 15:05:36 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "time that psql or pgAdmin shows is purely the postgresql time.\nQuestion here was about the actual application's time. Sometimes the data\ntransmission, fetch and processing on the app's side can take longer than\nthe 'postgresql' time.\n\ntime that psql or pgAdmin shows is purely the postgresql time. Question here was about the actual application's time. Sometimes the data transmission, fetch and processing on the app's side can take longer than the 'postgresql' time.",
"msg_date": "Thu, 18 Mar 2010 15:08:44 +0000",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Corin,\n\n* Corin ([email protected]) wrote:\n> I'm running quite a large social community website (250k users, 16gb \n> database). We are currently preparing a complete relaunch and thinking \n> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The \n> database server is a dual dualcore operton 2216 with 12gb ram running on \n> debian amd64.\n\nCan you provide at least your postgresql.conf? That could be useful,\nthough this does seem like a really simple query.\n\n> For a first impression I ran a simple query on our users table (snapshot \n> with only ~ 45.000 records). The table has an index on birthday_age \n> [integer]. The test executes 10 times the same query and simply discards \n> the results. I ran the tests using a php and a ruby script, the results \n> are almost the same.\n\nI wouldn't expect it to matter a whole lot, but have you considered\nusing prepared queries?\n\n> Unluckily mysql seems to be around 3x as fast as postgresql for this \n> simple query. There's no swapping, disc reading involved...everything is \n> in ram.\n>\n> query\n> select * from users where birthday_age between 12 and 13 or birthday_age \n> between 20 and 22 limit 1000\n\nDo you use every column from users, and do you really want 1000 records\nback?\n\n> {\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n\nThis runtime is the amount of time it took for the backend to run the\nquery.\n\n> 44.173002243042\n\nThese times are including all the time required to get the data back to\nthe client. If you don't use cursors, all data from the query is\nreturned all at once. Can you post the script you're using along with\nthe table schema and maybe some sample or example data? Also, are you\ndoing this all inside a single transaction, or are you creating a new\ntransaction for every query? I trust you're not reconnecting to the\ndatabase for every query..\n\n> I also wonder why the reported runtime of 5.847 ms is so much different \n> to the runtime reported of my scripts (both php and ruby are almost the \n> same). What's the best tool to time queries in postgresql? Can this be \n> done from pgadmin?\n\nAs was mentioned elsewhere, certainly the best tool to test with is your\nactual application, if that's possible.. Or at least the language your\napplication is in.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 18 Mar 2010 11:09:00 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
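A minimal sketch of the point made in the last few replies, not taken from the thread itself: time the query from the client and also ask the backend for its own runtime, so that driver and transfer overhead can be separated from the server-side cost. It assumes psycopg2 and reuses the poster's users/birthday_age example; the DSN is invented.

```python
# Hypothetical benchmark sketch (psycopg2 assumed); the DSN is made up and the
# table/column names are taken from the thread's example query.
import time
import psycopg2

conn = psycopg2.connect("dbname=community user=web")   # assumed DSN
cur = conn.cursor()

sql = ("SELECT * FROM users WHERE birthday_age BETWEEN 12 AND 13 "
       "OR birthday_age BETWEEN 20 AND 22 LIMIT 1000")

# Client-side wall clock: planning + execution + network transfer +
# conversion of all 1000 rows into Python objects.
t0 = time.time()
cur.execute(sql)
rows = cur.fetchall()
print("client side: %.3f ms for %d rows" % ((time.time() - t0) * 1000, len(rows)))

# Server-side runtime only, as reported by the backend itself.
cur.execute("EXPLAIN ANALYZE " + sql)
print(cur.fetchall()[-1][0])    # last plan line, e.g. "Total runtime: 5.847 ms"

conn.close()
```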
{
"msg_contents": "On Thu, Mar 18, 2010 at 16:09, Stephen Frost <[email protected]> wrote:\n> Corin,\n>\n> * Corin ([email protected]) wrote:\n>> {\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n>\n> This runtime is the amount of time it took for the backend to run the\n> query.\n>\n>> 44.173002243042\n>\n> These times are including all the time required to get the data back to\n> the client. If you don't use cursors, all data from the query is\n> returned all at once. Can you post the script you're using along with\n> the table schema and maybe some sample or example data? Also, are you\n> doing this all inside a single transaction, or are you creating a new\n> transaction for every query? I trust you're not reconnecting to the\n> database for every query..\n\nJust as a note here, since the OP is using Debian. If you are\nconnecting over TCP, debian will by default to SSL on your connection\nwhich obviously adds a *lot* of overhead. If you're not actively using\nit (in which case you will control this from pg_hba.conf), just edit\npostgresql.conf and disable SSL, then restart the server.\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Thu, 18 Mar 2010 16:23:09 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
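A rough way to measure the overhead Magnus describes, again only a sketch: run the same query over an SSL TCP connection, a plain TCP connection, and the local unix socket, and compare the per-query client-side times. It assumes psycopg2/libpq, a server that accepts both SSL and non-SSL TCP connections, and invented DSNs.

```python
# Sketch only: compare connection transports (psycopg2/libpq assumed; DSNs
# and database name are assumptions, not from the thread).
import time
import psycopg2

QUERY = ("SELECT * FROM users WHERE birthday_age BETWEEN 12 AND 13 "
         "OR birthday_age BETWEEN 20 AND 22 LIMIT 1000")

def avg_ms(dsn, runs=10):
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    t0 = time.time()
    for _ in range(runs):
        cur.execute(QUERY)
        cur.fetchall()
    conn.close()
    return (time.time() - t0) * 1000 / runs

print("TCP + SSL  : %.1f ms" % avg_ms("host=127.0.0.1 dbname=community sslmode=require"))
print("TCP, no SSL: %.1f ms" % avg_ms("host=127.0.0.1 dbname=community sslmode=disable"))
print("unix socket: %.1f ms" % avg_ms("dbname=community"))  # no host: local socket
```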
{
"msg_contents": "On Thu, Mar 18, 2010 at 8:31 AM, Corin <[email protected]> wrote:\n> Hi all,\n>\n> I'm running quite a large social community website (250k users, 16gb\n> database). We are currently preparing a complete relaunch and thinking about\n> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server\n> is a dual dualcore operton 2216 with 12gb ram running on debian amd64.\n>\n> For a first impression I ran a simple query on our users table (snapshot\n> with only ~ 45.000 records). The table has an index on birthday_age\n> [integer]. The test executes 10 times the same query and simply discards the\n> results. I ran the tests using a php and a ruby script, the results are\n> almost the same.\n>\n> Unluckily mysql seems to be around 3x as fast as postgresql for this simple\n> query. There's no swapping, disc reading involved...everything is in ram.\n>\n> query\n> select * from users where birthday_age between 12 and 13 or birthday_age\n> between 20 and 22 limit 1000\n>\n> mysql\n> {\"select_type\"=>\"SIMPLE\", \"key_len\"=>\"1\", \"id\"=>\"1\", \"table\"=>\"users\",\n> \"type\"=>\"range\", \"possible_keys\"=>\"birthday_age\", \"rows\"=>\"7572\",\n> \"Extra\"=>\"Using where\", \"ref\"=>nil, \"key\"=>\"birthday_age\"}\n> 15.104055404663\n> 14.209032058716\n> 18.857002258301\n> 15.714883804321\n> 14.73593711853\n> 15.048027038574\n> 14.589071273804\n> 14.847040176392\n> 15.192985534668\n> 15.115976333618\n>\n> postgresql\n> {\"QUERY PLAN\"=>\"Limit (cost=125.97..899.11 rows=1000 width=448) (actual\n> time=0.927..4.990 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Heap Scan on users (cost=125.97..3118.00\n> rows=3870 width=448) (actual time=0.925..3.420 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Recheck Cond: (((birthday_age >= 12) AND (birthday_age <=\n> 13)) OR ((birthday_age >= 20) AND (birthday_age <= 22)))\"}\n> {\"QUERY PLAN\"=>\" -> BitmapOr (cost=125.97..125.97 rows=3952 width=0) (actual\n> time=0.634..0.634 rows=0 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..41.67\n> rows=1341 width=0) (actual time=0.260..0.260 rows=1327 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 12) AND (birthday_age <=\n> 13))\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..82.37\n> rows=2611 width=0) (actual time=0.370..0.370 rows=2628 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 20) AND (birthday_age <=\n> 22))\"}\n> {\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n> 44.173002243042\n> 41.156768798828\n> 39.988040924072\n> 40.470123291016\n> 40.035963058472\n> 40.077924728394\n> 40.94386100769\n> 40.183067321777\n> 39.83211517334\n> 40.256977081299\n>\n> I also wonder why the reported runtime of 5.847 ms is so much different to\n> the runtime reported of my scripts (both php and ruby are almost the same).\n> What's the best tool to time queries in postgresql? Can this be done from\n> pgadmin?\n\nIt's different because it only takes pgsql 5 milliseconds to run the\nquery, and 40 seconds to transfer the data across to your applicaiton,\nwhich THEN promptly throws it away. If you run it as\n\nMySQL's client lib doesn't transfer over the whole thing. This is\nmore about how each db interface is implemented in those languages.\n",
"msg_date": "Thu, 18 Mar 2010 09:50:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On 18-3-2010 16:50 Scott Marlowe wrote:\n> It's different because it only takes pgsql 5 milliseconds to run the\n> query, and 40 seconds to transfer the data across to your applicaiton,\n> which THEN promptly throws it away. If you run it as\n>\n> MySQL's client lib doesn't transfer over the whole thing. This is\n> more about how each db interface is implemented in those languages.\n\nIts the default behavior of both PostgreSQL and MySQL to transfer the \nwhole resultset over to the client. Or is that different for Ruby's \nMySQL-driver? At least in PHP the behavior is similar for both.\nAnd I certainly do hope its 40ms rather than 40s, otherwise it would be \na really bad performing network in either case (15s for mysql) or very \nlarge records (which I doubt).\n\nI'm wondering if a new connection is made between each query. PostgreSQL \nis (afaik still is but I haven't compared that recently) a bit slower on \nthat department than MySQL.\n\nBest regards,\n\nArjen\n",
"msg_date": "Thu, 18 Mar 2010 18:08:34 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
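Related to the question of whether the whole result set is transferred at once: with libpq-based drivers a named (server-side) cursor keeps the result on the server and streams it in batches over a single reused connection. The sketch below is illustrative only and assumes psycopg2 with an invented DSN.

```python
# Illustration only (psycopg2 assumed, DSN invented): a named cursor streams
# rows in batches instead of materialising the full result in the client.
import psycopg2

conn = psycopg2.connect("dbname=community")
cur = conn.cursor(name="age_scan")     # named => server-side (portal) cursor
cur.itersize = 200                     # rows fetched per network round trip
cur.execute("SELECT * FROM users WHERE birthday_age BETWEEN 12 AND 13 "
            "OR birthday_age BETWEEN 20 AND 22 LIMIT 1000")

for row in cur:                        # batches are pulled transparently
    pass                               # ... process row ...

conn.close()
```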
{
"msg_contents": "Corin wrote:\n> Hi all,\n> \n> I'm running quite a large social community website (250k users, 16gb \n> database). We are currently preparing a complete relaunch and thinking \n> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The \n\n\"relaunch\" looks like you are nearing the end (the \"launch\") of the \nproject - if so, you should know that switching databases near the \nproject deadline is almost always a suicidal act. Even if the big \ndifferences are easily fixable, the small differences will kill you.\n\n> database server is a dual dualcore operton 2216 with 12gb ram running on \n> debian amd64.\n> \n> For a first impression I ran a simple query on our users table (snapshot \n> with only ~ 45.000 records). The table has an index on birthday_age \n> [integer]. The test executes 10 times the same query and simply discards \n> the results. I ran the tests using a php and a ruby script, the results \n> are almost the same.\n\nYour table will probably fit in RAM but the whole database obviously \nwon't. Not that it matters here.\n\nDid you configure anything at all in postgresql.conf? The defaults \nassume a very small database.\n\n> Unluckily mysql seems to be around 3x as fast as postgresql for this \n> simple query. There's no swapping, disc reading involved...everything is \n> in ram.\n\nIt depends...\n\n> 15.115976333618\n\nSo this is 15 ms?\n\n> postgresql\n> {\"QUERY PLAN\"=>\"Limit (cost=125.97..899.11 rows=1000 width=448) (actual \n> time=0.927..4.990 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Heap Scan on users (cost=125.97..3118.00 \n> rows=3870 width=448) (actual time=0.925..3.420 rows=1000 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Recheck Cond: (((birthday_age >= 12) AND (birthday_age \n> <= 13)) OR ((birthday_age >= 20) AND (birthday_age <= 22)))\"}\n> {\"QUERY PLAN\"=>\" -> BitmapOr (cost=125.97..125.97 rows=3952 width=0) \n> (actual time=0.634..0.634 rows=0 loops=1)\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..41.67 \n> rows=1341 width=0) (actual time=0.260..0.260 rows=1327 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 12) AND (birthday_age <= \n> 13))\"}\n> {\"QUERY PLAN\"=>\" -> Bitmap Index Scan on birthday_age (cost=0.00..82.37 \n> rows=2611 width=0) (actual time=0.370..0.370 rows=2628 loops=1)\"}\n> {\"QUERY PLAN\"=>\" Index Cond: ((birthday_age >= 20) AND (birthday_age <= \n> 22))\"}\n> {\"QUERY PLAN\"=>\"Total runtime: 5.847 ms\"}\n> 44.173002243042\n\n> I also wonder why the reported runtime of 5.847 ms is so much different \n> to the runtime reported of my scripts (both php and ruby are almost the \n\nIt looks like you are spending ~~38 ms in delivering the data to your \napplication. Whatever you are using, stop using it :)\n\n> same). What's the best tool to time queries in postgresql? Can this be \n> done from pgadmin?\n\nThe only rational way is to measure at the database itself and not \ninclude other factors like the network, scripting language libraries, \netc. To do this, login at your db server with a shell and use psql. \nStart it as \"psql databasename username\" and issue a statement like \n\"EXPLAIN ANALYZE SELECT ...your_query...\". 
Unless magic happens, this \nwill open a local unix socket connection to the database for the query, \nwhich has the least overhead.\n\nYou can of course also do this for MySQL though I don't know if it has \nan equivalent of \"EXPLAIN ANALYZE\".\n\nBut even after you have found where the problem is, and even if you see \nthat Pg is faster than MySQL, you will still need realistic loads to \ntest the real-life performance difference.\n\n",
"msg_date": "Fri, 19 Mar 2010 01:32:11 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Corin <[email protected]> writes:\n> I'm running quite a large social community website (250k users, 16gb\n> database). We are currently preparing a complete relaunch and thinking about\n> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server\n> is a dual dualcore operton 2216 with 12gb ram running on debian amd64.\n>\n> For a first impression I ran a simple query on our users table (snapshot\n\nFor more serious impression and realistic figures, you could use tsung\natop the http side of your application and compare how it performs given\na certain load of concurrent users.\n\nIn your situation I'd expect to win a lot going to PostgreSQL on\nconcurrency scaling. Tsung is made to test that.\n\n http://tsung.erlang-projects.org/\n\nRegards,\n-- \ndim\n",
"msg_date": "Fri, 19 Mar 2010 10:04:00 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Fri, Mar 19, 2010 at 3:04 AM, Dimitri Fontaine\n<[email protected]> wrote:\n> Corin <[email protected]> writes:\n>> I'm running quite a large social community website (250k users, 16gb\n>> database). We are currently preparing a complete relaunch and thinking about\n>> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server\n>> is a dual dualcore operton 2216 with 12gb ram running on debian amd64.\n>>\n>> For a first impression I ran a simple query on our users table (snapshot\n>\n> For more serious impression and realistic figures, you could use tsung\n> atop the http side of your application and compare how it performs given\n> a certain load of concurrent users.\n>\n> In your situation I'd expect to win a lot going to PostgreSQL on\n> concurrency scaling. Tsung is made to test that.\n\nExactly. The OP's original benchmark is a single query run by a\nsingle thread. A realistic benchmark would use increasing numbers of\nclients in parallel to see how each db scales under load. A single\nquery by a single thread is pretty uninteresting and unrealistic\n",
"msg_date": "Fri, 19 Mar 2010 07:51:42 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "\n> I also wonder why the reported runtime of 5.847 ms is so much different \n> to the runtime reported of my scripts (both php and ruby are almost the \n> same). What's the best tool to time queries in postgresql? Can this be \n> done from pgadmin?\n\nI've seen differences like that. Benchmarking isn't easy. The client \nlibraries, the particular language bindings you use, the connection... all \nthat can add overhead that is actually mych larger that what you're trying \nto measure.\n\n- On \"localhost\", some MySQL distros will default to a UNIX Socket, some \nPostgres distros will default to a TCP socket, or even SSL, and vice versa.\n\nNeedless to say, on a small query like \"SELECT * FROM users WHERE \nuser_id=$1\", this makes a lot of difference, since the query time (just a \nfew tens of microseconds) is actually shorter than the TCP overhead. \nDepending on how you connect you can get a 2-3x variation in throughput \nwith client and server on the same machine, just between TCP and UNIX \nsocket.\n\nOn queries that retrieve lots of data, overheads are also quite different \n(especially with SSL...)\n\n- I've seen postgres saturate a 1 GB/s ethernet link between server and \nclient during benchmark.\n\n- Performance depends a LOT on your language bindings. For instance :\n\nphp : PDO is quite a lot slower than pg_query() especially if you use \nprepared statements which are used only once,\npython : psycopg, pygresql, mysql-python behave quite differently (psycopg \nbeing by far the fastest of the bunch), especially when retrieving lots of \nresults, and converting those results back to python types...\n\nSo, what are you benchmarking exactly ?...\n",
"msg_date": "Fri, 19 Mar 2010 18:34:53 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
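The point about one-shot prepared statements can be made concrete with a prepare-once, execute-many pattern, where the planning cost is paid a single time. This is only a sketch under assumptions (psycopg2, invented DSN and statement name); psycopg2 has no native prepared-statement API, so it goes through SQL-level PREPARE/EXECUTE.

```python
# Sketch of "prepare once, execute many" (psycopg2 assumed; DSN and the
# statement name age_range are invented for illustration).
import psycopg2

conn = psycopg2.connect("dbname=community")
cur = conn.cursor()

cur.execute("PREPARE age_range(int, int, int, int) AS "
            "SELECT * FROM users WHERE birthday_age BETWEEN $1 AND $2 "
            "OR birthday_age BETWEEN $3 AND $4 LIMIT 1000")

for _ in range(10):                      # planning cost is paid only once
    cur.execute("EXECUTE age_range(12, 13, 20, 22)")
    cur.fetchall()

cur.execute("DEALLOCATE age_range")
conn.close()
```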
{
"msg_contents": "On Thu, Mar 18, 2010 at 10:31 AM, Corin <[email protected]> wrote:\n> I'm running quite a large social community website (250k users, 16gb\n> database). We are currently preparing a complete relaunch and thinking about\n> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server\n> is a dual dualcore operton 2216 with 12gb ram running on debian amd64.\n>\n> For a first impression I ran a simple query on our users table (snapshot\n> with only ~ 45.000 records). The table has an index on birthday_age\n> [integer]. The test executes 10 times the same query and simply discards the\n> results. I ran the tests using a php and a ruby script, the results are\n> almost the same.\n>\n> Unluckily mysql seems to be around 3x as fast as postgresql for this simple\n> query. There's no swapping, disc reading involved...everything is in ram.\n>\n> query\n> select * from users where birthday_age between 12 and 13 or birthday_age\n> between 20 and 22 limit 1000\n\ncouple of points:\n\\timing switch in psql is the best way to get timing results that are\nroughly similar to what your application will get, minus the overhead\nof your application.\n\nyour issue is likely coming from one of three places:\n1) connection/ssl/client library issue: maybe you are using ssl in\npostgres but not mysql, or some other factor which is outside the\ndatabase\n2) not apples to apples: postgres schema is missing an index, or\nsomething similar.\n3) mysql generated a better plan: mysql has a simpler query\nplanner/statistics model that can occasionally generate a better plan\nor (if you are using myisam) mysql can do tricks which are impractical\nor impossible in the mvcc transactional system postgres uses.\n\nso, you have to figure out which of those three things you are looking\nat, and then fix it if the query is performance critical.\n\nmerlin\n",
"msg_date": "Fri, 19 Mar 2010 14:20:48 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On 03/18/2010 09:31 AM, Corin wrote:\n> Hi all,\n>\n> I'm running quite a large social community website (250k users, 16gb\n> database). We are currently preparing a complete relaunch and thinking\n> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The\n> database server is a dual dualcore operton 2216 with 12gb ram running on\n> debian amd64.\n>\n> For a first impression I ran a simple query on our users table (snapshot\n> with only ~ 45.000 records). The table has an index on birthday_age\n> [integer]. The test executes 10 times the same query and simply discards\n> the results. I ran the tests using a php and a ruby script, the results\n> are almost the same.\n>\n\nDon't underestimate mysql. It was written to be fast. But you have to understand the underling points: It was written to be fast at the cost of other things... like concurrent access, and data integrity. If you want to just read from a database, PG probably cant beat mysql. But heres the thing, your site does not just read. Nor does it fire off the same sql 10 times. So not a good test.\n\nMysql does not have strong concurrent read/write support. Wait.. let me not bash mysql, let me praise PG:\n\nPG has strong concurrent read/write support, it has very strong data integrity, and strict sql syntax. (By strict sql syntax I mean you cant write invalid/imprecise sql and have it just return results to you, which later you realize the question you asked was complete jibberish so what the heck did mysql return? I know this from experience when I converted a website from mysql to pg, and pg did not like my syntax. After looking into it more I realized the sql I was writing was asking an insane question... I could not imagine how mysql knew what I was asking for, or even what it would be returning to me.)\n\n\nMysql was built on one underlying principle: there is nothing more important than speed.\n\nI do not believe in that principle, that's why I don't choose mysql. If you bench your website at 150 pages a second on mysql, and 100 pages a second on PG, the only question is, do I really need more than 100 pages a second?\n\nEven if PG is not as fast or faster, but in the same ballpark, the end user will never notice the difference, but you still gain all the other benefits of PG.\n\nIt could also be that the database is not your bottleneck. At some point, on mysql or PG, your website wont be fast enough. It might be the database, but might not. You'll have to go through the same thing regardless of which db you are on (fixing sql, organizing the db, adding webservers, adding db servers, caching stuff, etc...).\n\nI guess, for me, once I started using PG and learned enough about it (all db have their own quirks and dark corners) I was in love. It wasnt important which db was fastest at xyz, it was which tool do I know, and trust, that can solve problem xyz.\n\n(I added the \"and trust\" as an after thought, because I do have one very important 100% uptime required mysql database that is running. Its my MythTV box at home, and I have to ask permission from my GF before I take the box down to upgrade anything. And heaven forbid if it crashes or anything. So I do have experience with care and feeding of mysql. And no, I'm not kidding.)\n\nAnd I choose PG.\n\n\n-Andy\n",
"msg_date": "Sat, 20 Mar 2010 22:47:30 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Sat, Mar 20, 2010 at 11:47 PM, Andy Colson <[email protected]> wrote:\n> Don't underestimate mysql. It was written to be fast. But you have to\n> understand the underling points: It was written to be fast at the cost of\n> other things... like concurrent access, and data integrity. If you want to\n> just read from a database, PG probably cant beat mysql. But heres the\n> thing, your site does not just read. Nor does it fire off the same sql 10\n> times. So not a good test.\n\nfor non trivial selects (myisam has no transaction overhead so can\nusually edge out pg in row by row ops), and without taking multi user\nissues into account, it's often going to come down to who generates a\nbetter plan. postgres has more plan options and a better statistics\nmodel and can usually beat mysql on many types of selects.\n\nupdates w/myisam are where mysql really shines in single user apps.\nthe reason is obvious: no mvcc means the heap can often be updated in\nplace.\n\nmerlin\n",
"msg_date": "Sun, 21 Mar 2010 18:43:06 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Note however that Oracle offeres full transactionality and does in place row\nupdates. There is more than one way to do it.\n\nCheers\nDave\n\nOn Mar 21, 2010 5:43 PM, \"Merlin Moncure\" <[email protected]> wrote:\n\nOn Sat, Mar 20, 2010 at 11:47 PM, Andy Colson <[email protected]> wrote:\n> Don't underestimate my...\nfor non trivial selects (myisam has no transaction overhead so can\nusually edge out pg in row by row ops), and without taking multi user\nissues into account, it's often going to come down to who generates a\nbetter plan. postgres has more plan options and a better statistics\nmodel and can usually beat mysql on many types of selects.\n\nupdates w/myisam are where mysql really shines in single user apps.\nthe reason is obvious: no mvcc means the heap can often be updated in\nplace.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to y...\n\nNote however that Oracle offeres full transactionality and does in place row updates. There is more than one way to do it.\nCheers\nDave\nOn Mar 21, 2010 5:43 PM, \"Merlin Moncure\" <[email protected]> wrote:On Sat, Mar 20, 2010 at 11:47 PM, Andy Colson <[email protected]> wrote:\n> Don't underestimate my...for non trivial selects (myisam has no transaction overhead so can\nusually edge out pg in row by row ops), and without taking multi user\nissues into account, it's often going to come down to who generates a\nbetter plan. postgres has more plan options and a better statistics\nmodel and can usually beat mysql on many types of selects.\n\nupdates w/myisam are where mysql really shines in single user apps.\nthe reason is obvious: no mvcc means the heap can often be updated in\nplace.\n\nmerlin\n-- Sent via pgsql-performance mailing list ([email protected])To make changes to y...",
"msg_date": "Sun, 21 Mar 2010 20:14:19 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Sun, Mar 21, 2010 at 9:14 PM, Dave Crooke <[email protected]> wrote:\n> Note however that Oracle offeres full transactionality and does in place row\n> updates. There is more than one way to do it.\n\nThere's no free lunch. If you do mvcc you have to maintain multiple\nversions of the same row.\n\nmerlin\n",
"msg_date": "Mon, 22 Mar 2010 07:14:51 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Mon, 22 Mar 2010 12:14:51 +0100, Merlin Moncure <[email protected]> \nwrote:\n\n> On Sun, Mar 21, 2010 at 9:14 PM, Dave Crooke <[email protected]> wrote:\n>> Note however that Oracle offeres full transactionality and does in \n>> place row\n>> updates. There is more than one way to do it.\n>\n> There's no free lunch.\n\nMVCC : VACUUM\nOracle : Rollback Segments\nMyISAM : no concurrency/transactions\n\nIt's all about which compromise suits you ;)\n",
"msg_date": "Mon, 22 Mar 2010 12:15:51 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Absolutely ...\n\n- for fixed size rows with a lot of small updates, Oracle wins. BTW, as of\nOracle 9 they're called \"UNDO tablesapces\"\n- for lots of transactions and feely mixing transactions of all sizes, MVCC\ntables (Postgres) wins\n- if you just want a structured filesystem and don't have integrity\nrequirements or a lot of updates, MyISAM wins\n\nFor our app, Oracle would be the best, but it isn't strictly necessary so\nPostgres wins on price ;-)\n\nCheers\nDave\n\nOn Mon, Mar 22, 2010 at 6:15 AM, Pierre C <[email protected]> wrote:\n\n> On Mon, 22 Mar 2010 12:14:51 +0100, Merlin Moncure <[email protected]>\n> wrote:\n>\n> On Sun, Mar 21, 2010 at 9:14 PM, Dave Crooke <[email protected]> wrote:\n>>\n>>> Note however that Oracle offeres full transactionality and does in place\n>>> row\n>>> updates. There is more than one way to do it.\n>>>\n>>\n>> There's no free lunch.\n>>\n>\n> MVCC : VACUUM\n> Oracle : Rollback Segments\n> MyISAM : no concurrency/transactions\n>\n> It's all about which compromise suits you ;)\n>\n\nAbsolutely ... - for fixed size rows with a lot of small updates, Oracle wins. BTW, as of Oracle 9 they're called \"UNDO tablesapces\"- for lots of transactions and feely mixing transactions of all sizes, MVCC tables (Postgres) wins\n- if you just want a structured filesystem and don't have integrity requirements or a lot of updates, MyISAM winsFor our app, Oracle would be the best, but it isn't strictly necessary so Postgres wins on price ;-)\nCheersDaveOn Mon, Mar 22, 2010 at 6:15 AM, Pierre C <[email protected]> wrote:\nOn Mon, 22 Mar 2010 12:14:51 +0100, Merlin Moncure <[email protected]> wrote:\n\n\nOn Sun, Mar 21, 2010 at 9:14 PM, Dave Crooke <[email protected]> wrote:\n\nNote however that Oracle offeres full transactionality and does in place row\nupdates. There is more than one way to do it.\n\n\nThere's no free lunch.\n\n\nMVCC : VACUUM\nOracle : Rollback Segments\nMyISAM : no concurrency/transactions\n\nIt's all about which compromise suits you ;)",
"msg_date": "Mon, 22 Mar 2010 10:32:01 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:\n> \n> I guess, for me, once I started using PG and learned enough about it (all \n> db have their own quirks and dark corners) I was in love. It wasnt \n> important which db was fastest at xyz, it was which tool do I know, and \n> trust, that can solve problem xyz.\n> \n> (I added the \"and trust\" as an after thought, because I do have one very \n> important 100% uptime required mysql database that is running. Its my \n> MythTV box at home, and I have to ask permission from my GF before I take \n> the box down to upgrade anything. And heaven forbid if it crashes or \n> anything. So I do have experience with care and feeding of mysql. And no, \n> I'm not kidding.)\n> \n> And I choose PG.\n> \n\nAndy, you are so me! I have the exact same one-and-only-one mission\ncritical mysql DB, but the gatekeeper is my wife. And experience with\nthat instance has made me love and trust PostgreSQL even more.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n",
"msg_date": "Tue, 23 Mar 2010 13:52:32 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:\n>> (I added the \"and trust\" as an after thought, because I do have one very \n>> important 100% uptime required mysql database that is running. Its my \n>> MythTV box at home, and I have to ask permission from my GF before I take \n>> the box down to upgrade anything. And heaven forbid if it crashes or \n>> anything. So I do have experience with care and feeding of mysql. And no, \n>> I'm not kidding.)\n\n> Andy, you are so me! I have the exact same one-and-only-one mission\n> critical mysql DB, but the gatekeeper is my wife. And experience with\n> that instance has made me love and trust PostgreSQL even more.\n\nSo has anyone looked at porting MythTV to PG?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Mar 2010 15:22:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions "
},
{
"msg_contents": "On Tue, Mar 23, 2010 at 1:22 PM, Tom Lane <[email protected]> wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n>> On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:\n>>> (I added the \"and trust\" as an after thought, because I do have one very\n>>> important 100% uptime required mysql database that is running. Its my\n>>> MythTV box at home, and I have to ask permission from my GF before I take\n>>> the box down to upgrade anything. And heaven forbid if it crashes or\n>>> anything. So I do have experience with care and feeding of mysql. And no,\n>>> I'm not kidding.)\n>\n>> Andy, you are so me! I have the exact same one-and-only-one mission\n>> critical mysql DB, but the gatekeeper is my wife. And experience with\n>> that instance has made me love and trust PostgreSQL even more.\n>\n> So has anyone looked at porting MythTV to PG?\n\nOr SQLite. I'm guessing that most loads on it are single threaded.\n",
"msg_date": "Tue, 23 Mar 2010 14:10:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Tom Lane wrote:\n> So has anyone looked at porting MythTV to PG?\n> \n\nPeriodically someone hacks together something that works, last big \neffort I'm aware of was in 2006, and then it bit rots away. I'm sure \nwe'd get some user uptake on the result--MySQL corruption is one of the \ntop ten cause of a MythTV system crashing. The developers are so \nresistant to database-neutral design that you'd need quite the thick \nskin to try and get something into their mainline though, which means \nsomeone who tried adding PostgreSQL support would likely have to run a \nparallel branch for some time, expecting regular breakage. The only \nthing on their radar as far as I know is SQLite.\n\nThere was a good overview circa 2004 at \nhttp://david.hardeman.nu/files/patches/mythtv/mythletter.txt , haven't \ndone a deep dive into the code recently enough to comment on exactly \nwhat has changed since then. That gives a flavor for the fundamentals \nof the design issues though.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 23 Mar 2010 17:38:01 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Tue, Mar 23, 2010 at 03:22:01PM -0400, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> \n> > Andy, you are so me! I have the exact same one-and-only-one mission\n> > critical mysql DB, but the gatekeeper is my wife. And experience with\n> > that instance has made me love and trust PostgreSQL even more.\n> \n> So has anyone looked at porting MythTV to PG?\n> \nMy understanding from perusing mailing list archives is that there have\nbeen multiple attempts to provide a database neutral layer and support\ndifferent backend databases (mostly w/ PG as the driver) but the lead\ndeveloper has been something between disintrested and actively hostile\nto the idea. I think this page http://www.mythtv.org/wiki/PostgreSQL_Support \nsay it all:\n deleted \"PostgreSQL Support\" (Outdated, messy and unsupported)\n\nAnd the Wayback machine version:\n\nhttp://web.archive.org/web/20080521003224/http://mythtv.org/wiki/index.php/PostgreSQL_Support\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n",
"msg_date": "Tue, 23 Mar 2010 16:39:42 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "What about InnoDB?\n\nOn Tue, Mar 23, 2010 at 4:38 PM, Greg Smith <[email protected]> wrote:\n\n> Tom Lane wrote:\n>\n>> So has anyone looked at porting MythTV to PG?\n>>\n>>\n>\n> Periodically someone hacks together something that works, last big effort\n> I'm aware of was in 2006, and then it bit rots away. I'm sure we'd get some\n> user uptake on the result--MySQL corruption is one of the top ten cause of a\n> MythTV system crashing. The developers are so resistant to database-neutral\n> design that you'd need quite the thick skin to try and get something into\n> their mainline though, which means someone who tried adding PostgreSQL\n> support would likely have to run a parallel branch for some time, expecting\n> regular breakage. The only thing on their radar as far as I know is SQLite.\n>\n> There was a good overview circa 2004 at\n> http://david.hardeman.nu/files/patches/mythtv/mythletter.txt , haven't\n> done a deep dive into the code recently enough to comment on exactly what\n> has changed since then. That gives a flavor for the fundamentals of the\n> design issues though.\n>\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWhat about InnoDB?On Tue, Mar 23, 2010 at 4:38 PM, Greg Smith <[email protected]> wrote:\nTom Lane wrote:\n\nSo has anyone looked at porting MythTV to PG?\n \n\n\nPeriodically someone hacks together something that works, last big effort I'm aware of was in 2006, and then it bit rots away. I'm sure we'd get some user uptake on the result--MySQL corruption is one of the top ten cause of a MythTV system crashing. The developers are so resistant to database-neutral design that you'd need quite the thick skin to try and get something into their mainline though, which means someone who tried adding PostgreSQL support would likely have to run a parallel branch for some time, expecting regular breakage. The only thing on their radar as far as I know is SQLite.\n\nThere was a good overview circa 2004 at http://david.hardeman.nu/files/patches/mythtv/mythletter.txt , haven't done a deep dive into the code recently enough to comment on exactly what has changed since then. That gives a flavor for the fundamentals of the design issues though.\n\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 23 Mar 2010 18:07:07 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Tue, Mar 23, 2010 at 5:07 PM, Dave Crooke <[email protected]> wrote:\n> What about InnoDB?\n\nDepends on what parts of mysql they otherwise use. There are plenty\nof features that won't work if you're using non-myisam tables, like\nfull text search. I tend to think any full blown (or nearly so) db is\noverkill for mythtv, and the use of something like sqllite or berkely\ndb tables is a better fit.\n",
"msg_date": "Tue, 23 Mar 2010 17:30:48 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Tue, Mar 23, 2010 at 5:07 PM, Dave Crooke <[email protected]> wrote:\n>> What about InnoDB?\n\n> Depends on what parts of mysql they otherwise use. There are plenty\n> of features that won't work if you're using non-myisam tables, like\n> full text search. I tend to think any full blown (or nearly so) db is\n> overkill for mythtv, and the use of something like sqllite or berkely\n> db tables is a better fit.\n\nThat's apparently also the position of their lead developer; although\nconsidering he's not actually done anything about it for six or more\nyears, it seems like quite a lame excuse for blocking ports to other\nDBs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 23 Mar 2010 19:35:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions "
},
{
"msg_contents": "MyISAM is SQLLite with some threading ;-)\n\nOn Tue, Mar 23, 2010 at 6:30 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Tue, Mar 23, 2010 at 5:07 PM, Dave Crooke <[email protected]> wrote:\n> > What about InnoDB?\n>\n> Depends on what parts of mysql they otherwise use. There are plenty\n> of features that won't work if you're using non-myisam tables, like\n> full text search. I tend to think any full blown (or nearly so) db is\n> overkill for mythtv, and the use of something like sqllite or berkely\n> db tables is a better fit.\n>\n\nMyISAM is SQLLite with some threading ;-)On Tue, Mar 23, 2010 at 6:30 PM, Scott Marlowe <[email protected]> wrote:\nOn Tue, Mar 23, 2010 at 5:07 PM, Dave Crooke <[email protected]> wrote:\n\n> What about InnoDB?\n\nDepends on what parts of mysql they otherwise use. There are plenty\nof features that won't work if you're using non-myisam tables, like\nfull text search. I tend to think any full blown (or nearly so) db is\noverkill for mythtv, and the use of something like sqllite or berkely\ndb tables is a better fit.",
"msg_date": "Tue, 23 Mar 2010 18:37:08 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Tue, Mar 23, 2010 at 5:35 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> On Tue, Mar 23, 2010 at 5:07 PM, Dave Crooke <[email protected]> wrote:\n>>> What about InnoDB?\n>\n>> Depends on what parts of mysql they otherwise use. There are plenty\n>> of features that won't work if you're using non-myisam tables, like\n>> full text search. I tend to think any full blown (or nearly so) db is\n>> overkill for mythtv, and the use of something like sqllite or berkely\n>> db tables is a better fit.\n>\n> That's apparently also the position of their lead developer; although\n> considering he's not actually done anything about it for six or more\n> years, it seems like quite a lame excuse for blocking ports to other\n> DBs.\n\nMethinks he's big on his \"comfort-zone\".\n",
"msg_date": "Tue, 23 Mar 2010 18:01:56 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Greg Smith wrote:\n> Tom Lane wrote:\n>> So has anyone looked at porting MythTV to PG?\n>> \n>\n> Periodically someone hacks together something that works, last big \n> effort I'm aware of was in 2006, and then it bit rots away. I'm sure \n> we'd get some user uptake on the result--MySQL corruption is one of \n> the top ten cause of a MythTV system crashing.\nIt would be the same with PG, unless the pg cluster configuration with \nmythtv would come with a properly configured WAL - I had corrupted \ntables (and a personal wiki entry (the other mysql database in my \nhouse) *only* when I sometimes took the risk of not shutting down the \nmachine properly when e.g. the remote was missing).\n\nregards,\nYeb Havinga\n",
"msg_date": "Wed, 24 Mar 2010 09:55:55 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Yeb Havinga wrote:\n> Greg Smith wrote:\n>> Tom Lane wrote:\n>>> So has anyone looked at porting MythTV to PG?\n>>> \n>>\n>> Periodically someone hacks together something that works, last big \n>> effort I'm aware of was in 2006, and then it bit rots away. I'm sure \n>> we'd get some user uptake on the result--MySQL corruption is one of \n>> the top ten cause of a MythTV system crashing.\n> It would be the same with PG, unless the pg cluster configuration with \n> mythtv would come with a properly configured WAL - I had corrupted \n> tables (and a personal wiki entry \nforgot to add \"how to fix the corrupted tables\", sorry\n> (the other mysql database in my house) *only* when I sometimes took \n> the risk of not shutting down the machine properly when e.g. the \n> remote was missing).\n>\n> regards,\n> Yeb Havinga\n\n",
"msg_date": "Wed, 24 Mar 2010 09:57:15 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Wed, 2010-03-24 at 09:55 +0100, Yeb Havinga wrote:\n> Greg Smith wrote:\n> > Tom Lane wrote:\n> >> So has anyone looked at porting MythTV to PG?\n> >> \n> >\n> > Periodically someone hacks together something that works, last big \n> > effort I'm aware of was in 2006, and then it bit rots away. I'm sure \n> > we'd get some user uptake on the result--MySQL corruption is one of \n> > the top ten cause of a MythTV system crashing.\n> It would be the same with PG, unless the pg cluster configuration with \n> mythtv would come with a properly configured WAL - I had corrupted \n> tables (and a personal wiki entry (the other mysql database in my \n> house) *only* when I sometimes took the risk of not shutting down the \n> machine properly when e.g. the remote was missing).\n\nPulling the plug should not corrupt a postgreSQL database, unless it was\nusing disks which lie about write caching.\n\nNow need for WAL replica for that\n\n> regards,\n> Yeb Havinga\n> \n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n",
"msg_date": "Wed, 24 Mar 2010 10:59:58 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Yeb Havinga wrote:\n> Greg Smith wrote:\n>> MySQL corruption is one of the top ten cause of a MythTV system \n>> crashing.\n> It would be the same with PG, unless the pg cluster configuration with \n> mythtv would come with a properly configured WAL - I had corrupted \n> tables (and a personal wiki entry (the other mysql database in my \n> house) *only* when I sometimes took the risk of not shutting down the \n> machine properly when e.g. the remote was missing).\n\nYou can shutdown a PostgreSQL database improperly and it will come back \nup again just fine unless a number of things have happened at just the \nwrong time:\n\n1) You've written something to disk\n2) The write is sitting in in a write cache, usually on the hard drive, \nbut the OS believes the data has been written\n3) There is a hard crash before that data is actually written to disk\n\nNow, this certainly still happens with PostgreSQL; was just discussing \nthat yesterday with a client who runs an app on desktop hardware in \ncountries with intermittant power, and database corruption is a problem \nfor them. However, that's a fairly heavy write volume situation, which \nis not the case with most MythTV servers. The actual window where the \nWAL will not do what it's supposed to here is pretty narrow; it's easy \nto trigger if you pull the plug when writing constantly, but that's not \na typical MythTV database load.\n\nAlso, moving forward, we'll see the default filesystem on more Linux \nsystems shift to ext4, and it's starting to lose even this \nvulnerability--newer kernels will flush the data out to disk in this \nsituation using the appropriate drive command.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 24 Mar 2010 08:09:50 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "[email protected] (Tom Lane) writes:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n>> On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:\n>>> (I added the \"and trust\" as an after thought, because I do have one very \n>>> important 100% uptime required mysql database that is running. Its my \n>>> MythTV box at home, and I have to ask permission from my GF before I take \n>>> the box down to upgrade anything. And heaven forbid if it crashes or \n>>> anything. So I do have experience with care and feeding of mysql. And no, \n>>> I'm not kidding.)\n>\n>> Andy, you are so me! I have the exact same one-and-only-one mission\n>> critical mysql DB, but the gatekeeper is my wife. And experience with\n>> that instance has made me love and trust PostgreSQL even more.\n>\n> So has anyone looked at porting MythTV to PG?\n\nIt has come up several times on the MythTV list.\n\nhttp://david.hardeman.nu/files/patches/mythtv/mythletter.txt\nhttp://www.mythtv.org/pipermail/mythtv-dev/2004-August/025385.html\nhttp://www.mythtv.org/pipermail/mythtv-users/2006-July/141191.html\n\nProbably worth asking David H�rdeman and Danny Brow who have proposed\nsuch to the MythTV community what happened. (It's possible that they\nwill get cc'ed on this.)\n\nIf there's a meaningful way to help, that would be cool. If not, then\nwe might as well not run slipshot across the same landmines that blew\nthe idea up before.\n-- \n\"Transported to a surreal landscape, a young girl kills the first\nwoman she meets and then teams up with three complete strangers to\nkill again.\" -- Unknown, Marin County newspaper's TV listing for _The\nWizard of Oz_\n",
"msg_date": "Wed, 24 Mar 2010 11:35:11 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "[email protected] (\"Ross J. Reedstrom\") writes:\n> http://www.mythtv.org/wiki/PostgreSQL_Support \n\nThat's a pretty hostile presentation...\n\nThe page has had two states:\n\n a) In 2008, someone wrote up...\n\n After some bad experiences with MySQL (data loss by commercial power\n failure, very bad performance deleting old records and more) I would\n prefer to have a MythTV Application option to use PostgreSQL. I\n never saw such bad database behaviour at any other RDBMS than MySQL.\n\n I'm ready to contribute at any activity going that direction (I'm\n developer for commercial database applications).\n\n b) Deleted by GBee in 2009, indicating \"(Outdated, messy and\n unsupported)\"\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://linuxfinances.info/info/spreadsheets.html\n\"A language that doesn't affect the way you think about programming,\nis not worth knowing.\" -- Alan J. Perlis\n",
"msg_date": "Wed, 24 Mar 2010 11:41:05 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Hannu Krosing wrote:\n> Pulling the plug should not corrupt a postgreSQL database, unless it was\n> using disks which lie about write caching.\n> \nDidn't we recently put the old wife's 'the disks lied' tale to bed in \nfavour of actually admiting that some well known filesystems and \nsaftware raid systems have had trouble with their write barriers?\n\n",
"msg_date": "Thu, 25 Mar 2010 20:04:48 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Thu, Mar 25, 2010 at 2:04 PM, James Mansion\n<[email protected]> wrote:\n> Hannu Krosing wrote:\n>>\n>> Pulling the plug should not corrupt a postgreSQL database, unless it was\n>> using disks which lie about write caching.\n>>\n>\n> Didn't we recently put the old wife's 'the disks lied' tale to bed in favour\n> of actually admiting that some well known filesystems and saftware raid\n> systems have had trouble with their write barriers?\n\nI believe so. It was determined to be a combination of several\nculprits, and only a few hard drives from back in the day apparently\never had this problem.\n\nOf course now it seems that modern SSDs may lie about cache if they\ndon't have a big enough capacitor to guarantee they can write out\ntheir internal cache etc.\n\nThe sad fact remains that many desktop / workstation systems lie, and\nquite a few servers as well, for whatever reason.\n",
"msg_date": "Thu, 25 Mar 2010 14:24:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "> Hannu Krosing wrote:\n>> Pulling the plug should not corrupt a postgreSQL database, unless it was\n>> using disks which lie about write caching.\n>>\n> Didn't we recently put the old wife's 'the disks lied' tale to bed in \n> favour of actually admiting that some well known filesystems and \n> saftware raid systems have had trouble with their write barriers?\n\nI put a cheap UPS on the home server (which uses Software RAID) precisely \nbecause I don't really trust that stuff, and there is also the RAID5 write \nhole... and maybe the RAID1 write hole too... and installing a UPS takes \nless time that actually figuring out if the system is power-loss-safe.\n",
"msg_date": "Thu, 25 Mar 2010 21:29:44 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Thu, Mar 25, 2010 at 2:29 PM, Pierre C <[email protected]> wrote:\n>> Hannu Krosing wrote:\n>>>\n>>> Pulling the plug should not corrupt a postgreSQL database, unless it was\n>>> using disks which lie about write caching.\n>>>\n>> Didn't we recently put the old wife's 'the disks lied' tale to bed in\n>> favour of actually admiting that some well known filesystems and saftware\n>> raid systems have had trouble with their write barriers?\n>\n> I put a cheap UPS on the home server (which uses Software RAID) precisely\n> because I don't really trust that stuff, and there is also the RAID5 write\n> hole... and maybe the RAID1 write hole too... and installing a UPS takes\n> less time that actually figuring out if the system is power-loss-safe.\n\nVery true, a UPS might not cover every possible failure mode, but it\nsure takes care of an aweful lot of the common ones.\n",
"msg_date": "Thu, 25 Mar 2010 14:38:07 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Thu, Mar 25, 2010 at 2:29 PM, Pierre C <[email protected]> wrote:\n> \n>>> Hannu Krosing wrote:\n>>> \n>>>> Pulling the plug should not corrupt a postgreSQL database, unless it was\n>>>> using disks which lie about write caching.\n>>>>\n>>>> \n>>> Didn't we recently put the old wife's 'the disks lied' tale to bed in\n>>> favour of actually admiting that some well known filesystems and saftware\n>>> raid systems have had trouble with their write barriers?\n>>> \n>> I put a cheap UPS on the home server (which uses Software RAID) precisely\n>> because I don't really trust that stuff, and there is also the RAID5 write\n>> hole... and maybe the RAID1 write hole too... and installing a UPS takes\n>> less time that actually figuring out if the system is power-loss-safe.\n>> \n>\n> Very true, a UPS might not cover every possible failure mode, but it\n> sure takes care of an aweful lot of the common ones.\n> \nYeah, but the original post was about mythtv boxes, which usually do not \nhave upses. My suggestion about proper setup of the wal was based on \nsome experience of my own. What I did was probably the fastest path to \ncorrupt database files: diskless mythtv box that booted from the \nfileserver at the attic (with ups btw), but I was too lazy (after x days \nof lirc / xorg / ivtv / rtc / xmltv etc work) to move the default \nconfigured mysql database from the mythtv box (with root filesystem and \nalso mysql on the nfs mount) to a mysql running on the fileserver \nitself. On top of that I had nfs mounted async for speed. Really after x \ndays of configuration to get things running (my wife thinks it's hobby \ntime but it really isn't) all that is on your mind is: it works good \nenough? fine, will iron out non essential things when they pop up and if \nthe db becomes corrupt, I had database backups. In the end I had a few \ntimes a corrupt table that was always easily repaired with the \nmysqlcheck tool.\n\nBased on this experience I do not think that reliability alone will \nconvince mythtv developers/users to switch to postgresql, and besides \nthat as a developer and user myself, it's always in a way funny to see \nhow creative people can finding ways to not properly use (your) software ;-)\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Thu, 25 Mar 2010 22:22:08 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "James Mansion wrote:\n> Hannu Krosing wrote:\n> > Pulling the plug should not corrupt a postgreSQL database, unless it was\n> > using disks which lie about write caching.\n> > \n> Didn't we recently put the old wife's 'the disks lied' tale to bed in \n> favour of actually admiting that some well known filesystems and \n> saftware raid systems have had trouble with their write barriers?\n\nI thought the issue was that many file systems do not issue the drive\nATAPI flush command, and I suppose drives are allowed not to flush on\nwrite if they honor the command.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n",
"msg_date": "Wed, 31 Mar 2010 10:23:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On 31 March 2010 15:23, Bruce Momjian <[email protected]> wrote:\n> James Mansion wrote:\n>> Hannu Krosing wrote:\n>> > Pulling the plug should not corrupt a postgreSQL database, unless it was\n>> > using disks which lie about write caching.\n>> >\n>> Didn't we recently put the old wife's 'the disks lied' tale to bed in\n>> favour of actually admiting that some well known filesystems and\n>> saftware raid systems have had trouble with their write barriers?\n>\n> I thought the issue was that many file systems do not issue the drive\n> ATAPI flush command, and I suppose drives are allowed not to flush on\n> write if they honor the command.\n>\n> --\n\nI thought I'd attempt to renew discussion of adding PostgreSQL support\nto MythTV, but here's the response:\n\n> It is not being actively developed to my knowledge and we have\n> no intention of _ever_ committing such patches. Any work you do\n> *will* be wasted.\n>\n> It is far more likely that we'll move to embedded mysql to ease\n> the administration overhead for users.\n\nIt's a surprisingly hostile response.\n\nThom\n",
"msg_date": "Mon, 21 Jun 2010 19:02:11 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
},
{
"msg_contents": "On Mon, Jun 21, 2010 at 12:02 PM, Thom Brown <[email protected]> wrote:\n> I thought I'd attempt to renew discussion of adding PostgreSQL support\n> to MythTV, but here's the response:\n>\n>> It is not being actively developed to my knowledge and we have\n>> no intention of _ever_ committing such patches. Any work you do\n>> *will* be wasted.\n>>\n>> It is far more likely that we'll move to embedded mysql to ease\n>> the administration overhead for users.\n>\n> It's a surprisingly hostile response.\n\nNot for MythTV it's not. Their code if chock full of mysqlisms and\ntheir dev folks are mostly not interested in any \"advanced\" features\nof postgresql, like the tendency to NOT corrupt its data store every\nfew months.\n",
"msg_date": "Mon, 21 Jun 2010 12:08:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: mysql to postgresql, performance questions"
}
] |
[
{
"msg_contents": "Hi all!\n\nWhile evaluting the pgsql query planer I found some weird behavior of \nthe query planer. I think it's plan is way too complex and could much \nfaster?\n\nCREATE TABLE friends (\n id integer NOT NULL,\n user_id integer NOT NULL,\n ref_id integer NOT NULL,\n);\n\nALTER TABLE ONLY friends ADD CONSTRAINT friends_pkey PRIMARY KEY (id);\nCREATE INDEX user_ref ON friends USING btree (user_id, ref_id);\n\nI fill this table with around 2.800.000 random rows (values between 1 \nand 500.000 for user_id, ref_id).\n\nThe intention of the query is to find rows with no \"partner\" row. The \noffset and limit are just to ignore the time needed to send the result \nto the client.\n\nSELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS \nf2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id) OFFSET 1000000 \nLIMIT 1\n\nMysql uses this query plan:\n1 PRIMARY f1 index NULL user_ref 8 NULL \n 2818860 Using where; Using index\n2 DEPENDENT SUBQUERY f2 ref user_ref user_ref 8 \n f1.ref_id,f1.user_id 1 Using index\nTime: 9.8s\n\nPostgre uses this query plan:\n\"Limit (cost=66681.50..66681.50 rows=1 width=139) (actual \ntime=7413.489..7413.489 rows=1 loops=1)\"\n\" -> Merge Anti Join (cost=40520.17..66681.50 rows=367793 width=139) \n(actual time=3705.078..7344.256 rows=1000001 loops=1)\"\n\" Merge Cond: ((f1.user_id = f2.ref_id) AND (f1.ref_id = \nf2.user_id))\"\n\" -> Index Scan using user_ref on friends f1 \n(cost=0.00..26097.86 rows=2818347 width=139) (actual \ntime=0.093..1222.592 rows=1917360 loops=1)\"\n\" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8) \n(actual time=3704.977..5043.347 rows=1990148 loops=1)\"\n\" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8) \n(actual time=3704.970..4710.703 rows=1990148 loops=1)\"\n\" Sort Key: f2.ref_id, f2.user_id\"\n\" Sort Method: external merge Disk: 49576kB\"\n\" -> Seq Scan on friends f2 (cost=0.00..18143.18 \nrows=2818347 width=8) (actual time=0.015..508.797 rows=2818347 loops=1)\"\n\"Total runtime: 7422.516 ms\"\n\nIt's already faster, which is great, but I wonder why the query plan is \nthat complex.\n\nI read in the pqsql docs that using a multicolumn key is almost never \nneeded and only a waste of cpu/space. So I dropped the multicolumn key \nand added to separate keys instead:\n\nCREATE INDEX ref1 ON friends USING btree (ref_id);\nCREATE INDEX user1 ON friends USING btree (user_id);\n\nNew query plan:\n\"Limit (cost=70345.04..70345.04 rows=1 width=139) (actual \ntime=43541.709..43541.709 rows=1 loops=1)\"\n\" -> Merge Anti Join (cost=40520.27..70345.04 rows=367793 width=139) \n(actual time=3356.694..43467.818 rows=1000001 loops=1)\"\n\" Merge Cond: (f1.user_id = f2.ref_id)\"\n\" Join Filter: (f1.ref_id = f2.user_id)\"\n\" -> Index Scan using user1 on friends f1 (cost=0.00..26059.79 \nrows=2818347 width=139) (actual time=0.031..1246.668 rows=1917365 loops=1)\"\n\" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8) \n(actual time=3356.615..14941.405 rows=130503729 loops=1)\"\n\" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8) \n(actual time=3356.611..4127.435 rows=1990160 loops=1)\"\n\" Sort Key: f2.ref_id\"\n\" Sort Method: external merge Disk: 49560kB\"\n\" -> Seq Scan on friends f2 (cost=0.00..18143.18 \nrows=2818347 width=8) (actual time=0.012..496.174 rows=2818347 loops=1)\"\n\"Total runtime: 43550.187 ms\"\n\nAs one can see it's much much slower and only uses one key, not both. 
I \nthought performance should be almost equal.\n\nI also wonder why it makes a difference when adding a \"LIMIT\" clause to \nthe subselect in an EXISTS subselect. Shouldn't pgsql always stop after \nfinding the a row? In mysql is makes no difference in speed, pgsql even \nget's slower when adding a LIMIT to the EXISTS subselect (I hoped it \nwould get faster?!).\n\nSELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS \nf2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id LIMIT 1) OFFSET \n1000000 LIMIT 1\n\n\"Limit (cost=6389166.19..6389172.58 rows=1 width=139) (actual \ntime=54540.356..54540.356 rows=1 loops=1)\"\n\" -> Seq Scan on friends f1 (cost=0.00..9003446.87 rows=1409174 \nwidth=139) (actual time=0.511..54460.006 rows=1000001 loops=1)\"\n\" Filter: (NOT (SubPlan 1))\"\n\" SubPlan 1\"\n\" -> Limit (cost=2.18..3.19 rows=1 width=0) (actual \ntime=0.029..0.029 rows=0 loops=1832284)\"\n\" -> Bitmap Heap Scan on friends f2 (cost=2.18..3.19 \nrows=1 width=0) (actual time=0.028..0.028 rows=0 loops=1832284)\"\n\" Recheck Cond: (($0 = ref_id) AND ($1 = user_id))\"\n\" -> BitmapAnd (cost=2.18..2.18 rows=1 width=0) \n(actual time=0.027..0.027 rows=0 loops=1832284)\"\n\" -> Bitmap Index Scan on ref1 \n(cost=0.00..1.09 rows=75 width=0) (actual time=0.011..0.011 rows=85 \nloops=1832284)\"\n\" Index Cond: ($0 = ref_id)\"\n\" -> Bitmap Index Scan on user1 \n(cost=0.00..1.09 rows=87 width=0) (actual time=0.011..0.011 rows=87 \nloops=1737236)\"\n\" Index Cond: ($1 = user_id)\"\n\"Total runtime: 54540.431 ms\"\n\nAs in my previous tests, this is only a testing environment: so all data \nis in memory, no disk activity involved at all, no swap etc.\n\nThanks,\nCorin\n\nBTW: I'll respond to the answers of my previous post later.\n\n",
"msg_date": "Fri, 19 Mar 2010 13:26:35 +0100",
"msg_from": "Corin <[email protected]>",
"msg_from_op": true,
"msg_subject": "too complex query plan for not exists query and multicolumn indexes"
},
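The post above gives the table definition but not the load script; a minimal sketch of how test data of that shape could be generated (the uniform random() distribution and the trailing VACUUM ANALYZE are assumptions, not taken from the original post):

    -- roughly 2.8 million rows, user_id and ref_id drawn from 1..500000
    INSERT INTO friends (id, user_id, ref_id)
    SELECT g,
           (random() * 499999)::int + 1,
           (random() * 499999)::int + 1
    FROM generate_series(1, 2800000) AS g;

    VACUUM ANALYZE friends;

Fresh statistics matter for the plans discussed in the replies, since the row estimates drive the choice between a merge anti-join and a per-row subplan.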
{
"msg_contents": "Corin <[email protected]> wrote:\n \n> It's already faster, which is great, but I wonder why the query\n> plan is that complex.\n \nBecause that's the plan, out of all the ways the planner knows to\nget the requested result set, which was estimated to cost the least.\nIf it isn't actually the fastest, that might suggest that you\nshould adjust your costing model. Could you tell us more about the\nmachine? Especially useful would be the amount of RAM, what else is\nrunning on the machine, and what the disk system looks like. The\ndefault configuration is almost never optimal for serious production\n-- it's designed to behave reasonably if someone installs on their\ndesktop PC to try it out.\n \n> I read in the pqsql docs that using a multicolumn key is almost\n> never needed and only a waste of cpu/space.\n \nWhere in the docs did you see that?\n \n> As in my previous tests, this is only a testing environment: so\n> all data is in memory, no disk activity involved at all, no swap\n> etc.\n \nAh, that suggests possible configuration changes. You can try these\nout in the session to see the impact, and modify postgresql.conf if\nthey work out.\n \nseq_page_cost = 0.01\nrandom_page_cost = 0.01\neffective_cache_size = <about 3/4 of your machine's RAM>\n \nAlso, make sure that you run VACUUM ANALYZE against the table after\ninitially populating it and before your benchmarks; otherwise you\nmight inadvertently include transient or one-time maintenance costs\nto some benchmarks, or distort behavior by not yet having the\nstatistics present for sane optimizer choices.\n \n-Kevin\n",
"msg_date": "Fri, 19 Mar 2010 08:58:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: too complex query plan for not exists query and\n\tmulticolumn indexes"
},
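A sketch of how the settings suggested above could be tried for one session before editing postgresql.conf; the effective_cache_size value is a placeholder that has to reflect the actual machine:

    -- session-local experiment with the suggested cost model
    SET seq_page_cost = 0.01;
    SET random_page_cost = 0.01;
    SET effective_cache_size = '6GB';  -- placeholder: roughly 3/4 of installed RAM

    VACUUM ANALYZE friends;

    EXPLAIN ANALYZE
    SELECT * FROM friends AS f1
    WHERE NOT EXISTS (SELECT 1 FROM friends AS f2
                      WHERE f1.user_id = f2.ref_id AND f1.ref_id = f2.user_id)
    OFFSET 1000000 LIMIT 1;

If the cheaper plan now wins, the same values can be promoted to postgresql.conf.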
{
"msg_contents": "Corin,\n\n* Corin ([email protected]) wrote:\n> I fill this table with around 2.800.000 random rows (values between 1 \n> and 500.000 for user_id, ref_id).\n\nUsing random data really isn't a good test.\n\n> The intention of the query is to find rows with no \"partner\" row. The \n> offset and limit are just to ignore the time needed to send the result \n> to the client.\n\nSetting offset and/or limit changes the behaviour of the query. What's\nimportant are the queries your application will actually be doing.\nThese kind of arbitrary tests really aren't a good idea.\n\n> SELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS \n> f2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id) OFFSET 1000000 \n> LIMIT 1\n>\n> Mysql uses this query plan:\n> 1 PRIMARY f1 index NULL user_ref 8 NULL \n> 2818860 Using where; Using index\n> 2 DEPENDENT SUBQUERY f2 ref user_ref user_ref 8 \n> f1.ref_id,f1.user_id 1 Using index\n> Time: 9.8s\n\nYeah, that query plan basically doesn't tell you diddly about what's\ngoing on, if you ask me. The impression I get from this is that it's\nusing the index to do an in-order traversal of the table (why I'm not\nsure..) and then using the index in the subquery to look up each record\nin the table one-by-one. This isn't terribly efficient, and PG manages\nto beat it by being smarter- even with the handicap that it has to go to\nan external on-disk sort (see later on, and how to fix that).\n\n> Postgre uses this query plan:\n> \"Limit (cost=66681.50..66681.50 rows=1 width=139) (actual \n> time=7413.489..7413.489 rows=1 loops=1)\"\n> \" -> Merge Anti Join (cost=40520.17..66681.50 rows=367793 width=139) \n> (actual time=3705.078..7344.256 rows=1000001 loops=1)\"\n> \" Merge Cond: ((f1.user_id = f2.ref_id) AND (f1.ref_id = \n> f2.user_id))\"\n> \" -> Index Scan using user_ref on friends f1 \n> (cost=0.00..26097.86 rows=2818347 width=139) (actual \n> time=0.093..1222.592 rows=1917360 loops=1)\"\n> \" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8) \n> (actual time=3704.977..5043.347 rows=1990148 loops=1)\"\n> \" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8) \n> (actual time=3704.970..4710.703 rows=1990148 loops=1)\"\n> \" Sort Key: f2.ref_id, f2.user_id\"\n> \" Sort Method: external merge Disk: 49576kB\"\n> \" -> Seq Scan on friends f2 (cost=0.00..18143.18 \n> rows=2818347 width=8) (actual time=0.015..508.797 rows=2818347 loops=1)\"\n> \"Total runtime: 7422.516 ms\"\n>\n> It's already faster, which is great, but I wonder why the query plan is \n> that complex.\n\nUh, I'm not sure what you're looking at, but that simply isn't a very\ncomplex query plan. I think what's different is that PG is telling you\nalot more about what is going on than MySQL does. What's happening here\nis this:\n\nSort the table by ref_id and user_id\n\nThen, do an in-order index traversal using the user_ref index, comparing\neach row from the sorted output to each row of the index traversal. It\ndoes that using a merge anti-join (essentially, return records where\nthey don't match, which is what you want). Then it limits the results,\nsince you asked it to.\n\nIf you had an index on ref_id,user_id (as well as the one on\nuser_id,ref_id), it'd probably be able to do in-order index traversals\non both and be really fast... But then updates would be more expensive,\nof course, since it'd have more indexes to maintain.\n\n> I read in the pqsql docs that using a multicolumn key is almost never \n> needed and only a waste of cpu/space. 
So I dropped the multicolumn key \n> and added to separate keys instead:\n\nUh, no, that's not always the case. It's certainly not something you\ncan just generalize that simply. It's true that PG can use bitmap index\nscans now, which allow it to do individual lookups using multiple\nindexes even when they're not a composite index, but that capability\ndoesn't allow in-order index traversals across multiple columns.\n\n> CREATE INDEX ref1 ON friends USING btree (ref_id);\n> CREATE INDEX user1 ON friends USING btree (user_id);\n>\n> New query plan:\n> \"Limit (cost=70345.04..70345.04 rows=1 width=139) (actual \n> time=43541.709..43541.709 rows=1 loops=1)\"\n> \" -> Merge Anti Join (cost=40520.27..70345.04 rows=367793 width=139) \n> (actual time=3356.694..43467.818 rows=1000001 loops=1)\"\n> \" Merge Cond: (f1.user_id = f2.ref_id)\"\n> \" Join Filter: (f1.ref_id = f2.user_id)\"\n> \" -> Index Scan using user1 on friends f1 (cost=0.00..26059.79 \n> rows=2818347 width=139) (actual time=0.031..1246.668 rows=1917365 \n> loops=1)\"\n> \" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8) \n> (actual time=3356.615..14941.405 rows=130503729 loops=1)\"\n> \" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8) \n> (actual time=3356.611..4127.435 rows=1990160 loops=1)\"\n> \" Sort Key: f2.ref_id\"\n> \" Sort Method: external merge Disk: 49560kB\"\n> \" -> Seq Scan on friends f2 (cost=0.00..18143.18 \n> rows=2818347 width=8) (actual time=0.012..496.174 rows=2818347 loops=1)\"\n> \"Total runtime: 43550.187 ms\"\n>\n> As one can see it's much much slower and only uses one key, not both. I \n> thought performance should be almost equal.\n\nIt only uses 1 key because it can't use 2 independent indexes to do an\nin-order index traversal of the combination. It's an interesting\nquestion as to why it didn't use the ref_id index instead of sorting\nthough, for that half of the merge anti-join, are you sure that you ran\n'analyze;' on the table after creating this set of indexes?\n\n> I also wonder why it makes a difference when adding a \"LIMIT\" clause to \n> the subselect in an EXISTS subselect. Shouldn't pgsql always stop after \n> finding the a row? In mysql is makes no difference in speed, pgsql even \n> get's slower when adding a LIMIT to the EXISTS subselect (I hoped it \n> would get faster?!).\n\nPG is smarter than you're giving it credit for. It's not just going\nthrough each row of table A and then doing a single-row lookup in table\nB for that row. You *force* it to do that when you put a 'limit 1'\ninside the sub-select, hence the performance goes into the toilet. 
You\ncan see that from the query plan.\n\n> SELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS \n> f2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id LIMIT 1) OFFSET \n> 1000000 LIMIT 1\n>\n> \"Limit (cost=6389166.19..6389172.58 rows=1 width=139) (actual \n> time=54540.356..54540.356 rows=1 loops=1)\"\n> \" -> Seq Scan on friends f1 (cost=0.00..9003446.87 rows=1409174 \n> width=139) (actual time=0.511..54460.006 rows=1000001 loops=1)\"\n> \" Filter: (NOT (SubPlan 1))\"\n> \" SubPlan 1\"\n> \" -> Limit (cost=2.18..3.19 rows=1 width=0) (actual \n> time=0.029..0.029 rows=0 loops=1832284)\"\n> \" -> Bitmap Heap Scan on friends f2 (cost=2.18..3.19 \n> rows=1 width=0) (actual time=0.028..0.028 rows=0 loops=1832284)\"\n> \" Recheck Cond: (($0 = ref_id) AND ($1 = user_id))\"\n> \" -> BitmapAnd (cost=2.18..2.18 rows=1 width=0) \n> (actual time=0.027..0.027 rows=0 loops=1832284)\"\n> \" -> Bitmap Index Scan on ref1 \n> (cost=0.00..1.09 rows=75 width=0) (actual time=0.011..0.011 rows=85 \n> loops=1832284)\"\n> \" Index Cond: ($0 = ref_id)\"\n> \" -> Bitmap Index Scan on user1 \n> (cost=0.00..1.09 rows=87 width=0) (actual time=0.011..0.011 rows=87 \n> loops=1737236)\"\n> \" Index Cond: ($1 = user_id)\"\n> \"Total runtime: 54540.431 ms\"\n\nThis says \"Ok, we're going to step through each row of friends, and then\nrun a subplan on each one of those records\". That's a *horrible* way to\nimplement this query, since it's basically asking for you to test all\nthe records in the table. If there was some selectivity to this (eg: a\nWHERE clause for a specific user_id or something), it'd be alot more\nsane to use this approach. You'll note above that it does actually end\nup using both of your indexes when it's called this way, combining them\nusing a BitmapAnd. For one-off lookups with specific values (as you're\nforcing to happen here), you can have independent indexes that both get\nused. That's what the hint you're talking about above was trying to\npoint out.\n\nNote also that, if I'm right, this is essentially the same query plan\nthat MySQL used above, but with alot more specifics about what's\nactually happening (it's not really any more complicated).\n\n> As in my previous tests, this is only a testing environment: so all data \n> is in memory, no disk activity involved at all, no swap etc.\n\nYeahhhh, system calls still aren't free. I would recommend, if you care\nabout this query, bumping up your work_mem setting for it. Right now,\nPG is using an external sort (meaning- on-disk), but the data set\nappears to only be like 50M (49560kB). If you increased work_mem to,\nsay, 128MB (for this query, maybe or maybe not for the entire system),\nit'd be able to do an in-memory sort (or maybe a hash or something else,\nif it makes sense), which would be faster.\n\nI'd probably rewrite this as a left-join too, to be honest, but based on\nwhat I'm saying, that'd probably get the same query plan as you had\nfirst anyway (the merge anti-join), so it's probably not necessary. I'd\nlove to hear how PG performs with work_mem bumped up to something\ndecent...\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Fri, 19 Mar 2010 14:27:50 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: too complex query plan for not exists query and\n\tmulticolumn indexes"
},
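Two of the concrete suggestions above, sketched as a session; the 128MB figure is the illustrative value from the message rather than a tuned recommendation, and the index name is arbitrary:

    -- let the ~50MB sort happen in memory instead of spilling to disk
    SET work_mem = '128MB';

    -- mirror-image index so both inputs of the merge anti-join can be
    -- read in order directly from an index
    CREATE INDEX friends_ref_user ON friends USING btree (ref_id, user_id);
    ANALYZE friends;

    EXPLAIN ANALYZE
    SELECT * FROM friends AS f1
    WHERE NOT EXISTS (SELECT 1 FROM friends AS f2
                      WHERE f1.user_id = f2.ref_id AND f1.ref_id = f2.user_id)
    OFFSET 1000000 LIMIT 1;

As noted above, the extra index is not free: it has to be maintained on every insert and update.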
{
"msg_contents": "K.I.S.S. here ..... the best way to do one of these in most DB's is\ntypically an outer join and test for null:\n\nselect f1.* from friends f1\n left outer join friends f2 on (f1.user_id=f2.ref_id and\nf1.ref_id=f2.user_id)\n where f2.id is null;\n\nOn Fri, Mar 19, 2010 at 7:26 AM, Corin <[email protected]> wrote:\n\n> Hi all!\n>\n> While evaluting the pgsql query planer I found some weird behavior of the\n> query planer. I think it's plan is way too complex and could much faster?\n>\n> CREATE TABLE friends (\n> id integer NOT NULL,\n> user_id integer NOT NULL,\n> ref_id integer NOT NULL,\n> );\n>\n> ALTER TABLE ONLY friends ADD CONSTRAINT friends_pkey PRIMARY KEY (id);\n> CREATE INDEX user_ref ON friends USING btree (user_id, ref_id);\n>\n> I fill this table with around 2.800.000 random rows (values between 1 and\n> 500.000 for user_id, ref_id).\n>\n> The intention of the query is to find rows with no \"partner\" row. The\n> offset and limit are just to ignore the time needed to send the result to\n> the client.\n>\n> SELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS f2\n> WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id) OFFSET 1000000 LIMIT 1\n>\n> <snip>\n\nK.I.S.S. here ..... the best way to do one of these in most DB's is typically an outer join and test for null:select f1.* from friends f1 left outer join friends f2 on (f1.user_id=f2.ref_id and f1.ref_id=f2.user_id)\n where f2.id is null;On Fri, Mar 19, 2010 at 7:26 AM, Corin <[email protected]> wrote:\nHi all!\n\nWhile evaluting the pgsql query planer I found some weird behavior of the query planer. I think it's plan is way too complex and could much faster?\n\nCREATE TABLE friends (\n id integer NOT NULL,\n user_id integer NOT NULL,\n ref_id integer NOT NULL,\n);\n\nALTER TABLE ONLY friends ADD CONSTRAINT friends_pkey PRIMARY KEY (id);\nCREATE INDEX user_ref ON friends USING btree (user_id, ref_id);\n\nI fill this table with around 2.800.000 random rows (values between 1 and 500.000 for user_id, ref_id).\n\nThe intention of the query is to find rows with no \"partner\" row. The offset and limit are just to ignore the time needed to send the result to the client.\n\nSELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS f2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id) OFFSET 1000000 LIMIT 1\n<snip>",
"msg_date": "Fri, 19 Mar 2010 14:12:49 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: too complex query plan for not exists query and\n\tmulticolumn indexes"
},
{
"msg_contents": "On Fri, 19 Mar 2010, Stephen Frost wrote:\n> ...it has to go to an external on-disk sort (see later on, and how to \n> fix that).\n\nThis was covered on this list a few months ago, in \nhttp://archives.postgresql.org/pgsql-performance/2009-08/msg00184.php and \nhttp://archives.postgresql.org/pgsql-performance/2009-08/msg00189.php\n\nThere seemed to be some consensus that allowing a materialise in front of \nan index scan might have been a good change. Was there any movement on \nthis front?\n\n>> \"Limit (cost=66681.50..66681.50 rows=1 width=139) (actual\n>> time=7413.489..7413.489 rows=1 loops=1)\"\n>> \" -> Merge Anti Join (cost=40520.17..66681.50 rows=367793 width=139)\n>> (actual time=3705.078..7344.256 rows=1000001 loops=1)\"\n>> \" Merge Cond: ((f1.user_id = f2.ref_id) AND (f1.ref_id =\n>> f2.user_id))\"\n>> \" -> Index Scan using user_ref on friends f1\n>> (cost=0.00..26097.86 rows=2818347 width=139) (actual\n>> time=0.093..1222.592 rows=1917360 loops=1)\"\n>> \" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8)\n>> (actual time=3704.977..5043.347 rows=1990148 loops=1)\"\n>> \" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8)\n>> (actual time=3704.970..4710.703 rows=1990148 loops=1)\"\n>> \" Sort Key: f2.ref_id, f2.user_id\"\n>> \" Sort Method: external merge Disk: 49576kB\"\n>> \" -> Seq Scan on friends f2 (cost=0.00..18143.18\n>> rows=2818347 width=8) (actual time=0.015..508.797 rows=2818347 loops=1)\"\n>> \"Total runtime: 7422.516 ms\"\n\n> If you had an index on ref_id,user_id (as well as the one on\n> user_id,ref_id), it'd probably be able to do in-order index traversals\n> on both and be really fast... But then updates would be more expensive,\n> of course, since it'd have more indexes to maintain.\n\nThat isn't necessarily so, until the issue referred to in the above linked \nmessages is resolved. It depends.\n\nMatthew\n\n-- \n I've run DOOM more in the last few days than I have the last few\n months. I just love debugging ;-) -- Linus Torvalds\n",
"msg_date": "Mon, 22 Mar 2010 11:48:44 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: too complex query plan for not exists query and\n\tmulticolumn indexes"
},
{
"msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Fri, 19 Mar 2010, Stephen Frost wrote:\n>> ...it has to go to an external on-disk sort (see later on, and how to \n>> fix that).\n\n> This was covered on this list a few months ago, in \n> http://archives.postgresql.org/pgsql-performance/2009-08/msg00184.php and \n> http://archives.postgresql.org/pgsql-performance/2009-08/msg00189.php\n\n> There seemed to be some consensus that allowing a materialise in front of \n> an index scan might have been a good change. Was there any movement on \n> this front?\n\nYes, 9.0 will consider plans like\n\n Merge Join (cost=0.00..14328.70 rows=1000000 width=488)\n Merge Cond: (a.four = b.hundred)\n -> Index Scan using fouri on tenk1 a (cost=0.00..1635.62 rows=10000 width=244)\n -> Materialize (cost=0.00..1727.16 rows=10000 width=244)\n -> Index Scan using tenk1_hundred on tenk1 b (cost=0.00..1702.16 rows\n=10000 width=244)\n\nSome experimentation shows that it won't insert the materialize unless\nquite a bit of re-fetching is predicted (ie neither side of the join is\nunique). We might need to tweak the cost parameters once we get some\nfield experience with it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Mar 2010 08:40:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: too complex query plan for not exists query and multicolumn\n\tindexes"
}
] |
[
{
"msg_contents": "Hi,\n\nPostgreSQL 8.4.2 / default_statistics_target = 300\n\nI have a strange problem for a bad choose of indexes.\n\nclient=# \\d ct13t\n Table \"public.ct13t\"\n Column | Type | Modifiers\n------------+--------------+-----------\n ct12emp04 | integer | not null\n ct03emp01 | integer | not null\n ct03tradut | integer | not null\n ct07emp01 | integer | not null\n ct07c_cust | integer | not null\n ct13dtlanc | date | not null\n ct12numlot | integer | not null\n ct12numlan | integer | not null\n ct13emptr1 | integer |\n ct13tradu1 | integer |\n ct13empcc1 | integer |\n ct13ccust1 | integer |\n ct13duoc | character(1) |\nIndexes:\n \"ct13t_pkey\" PRIMARY KEY, btree (ct12emp04, ct03emp01, ct03tradut,\nct07emp01, ct07c_cust, ct13dtlanc, ct12numlot, ct12numlan) CLUSTER\n \"ict13t1\" btree (ct12emp04, ct12numlot, ct12numlan)\n \"ict13t2\" btree (ct07emp01, ct07c_cust)\n \"ict13t3\" btree (ct13empcc1, ct13ccust1)\n \"ict13t4\" btree (ct03emp01, ct03tradut)\n \"ict13t5\" btree (ct13emptr1, ct13tradu1)\n \"uct13t\" btree (ct12emp04, ct13dtlanc)\n\n\nclient=# explain analyze SELECT ct12emp04, ct03emp01, ct03tradut,\nct07emp01, ct07c_cust, ct13dtlanc, ct12numlot, ct12numlan FROM CT13T\nWHERE ct12emp04 = '2' AND ct03emp01 = '2' AND ct03tradut = '60008' AND\nct07emp01 = '2' AND ct07c_cust = '0' AND ct13dtlanc =\n'2005-01-28'::date AND ct12numlot = '82050128' AND ct12numlan = '123';\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ict13t2 on ct13t (cost=0.00..5.69 rows=1 width=32)\n(actual time=288.687..288.687 rows=0 loops=1)\n Index Cond: ((ct07emp01 = 2) AND (ct07c_cust = 0))\n Filter: ((ct12emp04 = 2) AND (ct03emp01 = 2) AND (ct03tradut =\n60008) AND (ct13dtlanc = '2005-01-28'::date) AND (ct12numlot =\n82050128) AND (ct12numlan = 123))\n Total runtime: 288.735 ms\n(4 rows)\n\nclient=# create table ad_ct13t as select * from ct13t;\nSELECT\nclient=# alter table ad_ct13t add primary key (ct12emp04, ct03emp01,\nct03tradut, ct07emp01, ct07c_cust, ct13dtlanc, ct12numlot,\nct12numlan);\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index\n\"ad_ct13t_pkey\" for table \"ad_ct13t\"\nALTER TABLE\nclient=# explain analyze SELECT ct12emp04, ct03emp01, ct03tradut,\nct07emp01, ct07c_cust, ct13dtlanc, ct12numlot, ct12numlan FROM\nAD_CT13T WHERE ct12emp04 = '2' AND ct03emp01 = '2' AND ct03tradut =\n'60008' AND ct07emp01 = '2' AND ct07c_cust = '0' AND ct13dtlanc =\n'2005-01-28'::date AND ct12numlot = '82050128' AND ct12numlan = '123';\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-\n Index Scan using ad_ct13t_pkey on ad_ct13t (cost=0.00..5.66 rows=1\nwidth=32) (actual time=0.090..0.090 rows=0 loops=1)\n Index Cond: ((ct12emp04 = 2) AND (ct03emp01 = 2) AND (ct03tradut =\n60008) AND (ct07emp01 = 2) AND (ct07c_cust = 0) AND (ct13dtlanc =\n'2005-01-28'::date) AND (ct12numlot = 82050128) AND (ct12numlan =\n123))\n Total runtime: 0.146 ms\n(3 rows)\n\nMy question: if the cost is exactly the same, why PG choose the index\nict13t2 on ct13t and apply a filter instead use the primary key ?\nIn one query, it's ok. But this routine execute millions times this query.\n\nThanks for any help,\n\nAlexandre\n",
"msg_date": "Fri, 19 Mar 2010 10:45:50 -0300",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG using index+filter instead only use index"
},
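One way to compare the two access paths head to head, without touching any settings, is to hide the competing index inside a transaction; a sketch of that technique (DROP INDEX is transactional, so ROLLBACK restores it, but the open transaction holds an exclusive lock on ct13t while it runs, so this is only suitable for a test window):

    BEGIN;
    DROP INDEX ict13t2;   -- temporarily hide the (ct07emp01, ct07c_cust) index
    EXPLAIN ANALYZE
    SELECT ct12emp04, ct03emp01, ct03tradut, ct07emp01, ct07c_cust,
           ct13dtlanc, ct12numlot, ct12numlan
    FROM ct13t
    WHERE ct12emp04 = '2' AND ct03emp01 = '2' AND ct03tradut = '60008'
      AND ct07emp01 = '2' AND ct07c_cust = '0'
      AND ct13dtlanc = '2005-01-28'::date
      AND ct12numlot = '82050128' AND ct12numlan = '123';
    ROLLBACK;             -- the index comes back untouched

This shows what the planner does, and what cost it assigns, when only the primary-key path is available.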
{
"msg_contents": "Alexandre de Arruda Paes <[email protected]> writes:\n> My question: if the cost is exactly the same, why PG choose the index\n> ict13t2 on ct13t and apply a filter instead use the primary key ?\n\nWhy shouldn't it, if the estimated costs are the same? You didn't\nactually demonstrate they're the same though.\n\nThe cost estimates look a bit unusual to me; are you using nondefault\ncost parameters, and if so what are they?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Mar 2010 15:49:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG using index+filter instead only use index "
},
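A quick way to answer the question about nondefault settings is to ask the server itself; a minimal sketch:

    -- every parameter that is not at its built-in default
    SELECT name, setting, source
    FROM pg_settings
    WHERE source <> 'default'
    ORDER BY name;

The source column shows whether a value comes from postgresql.conf, a SET command, the server command line, and so on.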
{
"msg_contents": "Hi Tom,\n\n2010/3/19 Tom Lane <[email protected]>:\n> Alexandre de Arruda Paes <[email protected]> writes:\n>> My question: if the cost is exactly the same, why PG choose the index\n>> ict13t2 on ct13t and apply a filter instead use the primary key ?\n>\n> Why shouldn't it, if the estimated costs are the same? You didn't\n> actually demonstrate they're the same though.\n>\n> The cost estimates look a bit unusual to me; are you using nondefault\n> cost parameters, and if so what are they?\n>\n> regards, tom lane\n>\n\nThe non default value in cost parameters is different only in\nrandom_page_cost that are set to 2.5 and default_statistics_target set\nto 300.\nI set this parameters to defaults (4 and 100) and re-analyze the\ntables but results are the same.\n\nSome more info on another table with the same behavior (ANALYZE ok in\nall tables):\n\nclient=# \\d ct14t\n Table \"public.ct14t\"\n Column | Type | Modifiers\n------------+---------------+-----------\n ct14emp04 | integer | not null\n ct03emp01 | integer | not null\n ct03tradut | integer | not null\n ct07emp01 | integer | not null\n ct07c_cust | integer | not null\n ct14ano | integer | not null\n ct14mes | integer | not null\n ct14debito | numeric(14,2) |\n ct14credit | numeric(14,2) |\n ct14orcado | numeric(14,2) |\nIndexes:\n \"ct14t_pkey\" PRIMARY KEY, btree (ct14emp04, ct03emp01, ct03tradut,\nct07emp01, ct07c_cust, ct14ano, ct14mes) CLUSTER\n \"ad_ict14t\" btree (ct14emp04, ct03emp01, ct03tradut, ct07emp01,\nct07c_cust, ct14ano, ct14mes) WHERE ct14emp04 = 2 AND ct03emp01 = 2\nAND ct07emp01 = 2\n \"ict14t1\" btree (ct07emp01, ct07c_cust)\n \"ict14t2\" btree (ct03emp01, ct03tradut)\n\nclient=# select ct07c_cust,count(*) from ct14t group by ct07c_cust\norder by count(*) DESC;\n ct07c_cust | count\n------------+-------\n 0 | 55536\n 99 | 14901\n 107 | 3094\n 800 | 1938\n(...)\n\n\nIf I use any different value from '0' in the ct07c_cust field, the\nplanner choose the 'right' index:\n\nclient=# explain analyze SELECT ct14mes, ct14ano, ct07c_cust,\nct07emp01, ct03tradut, ct03emp01, ct14emp04, ct14debito, ct14credit\nFROM ad_CT14T WHERE ct14emp04 = '2' AND ct03emp01 = '2' AND ct03tradut\n= '14930' AND ct07emp01 = '2' AND ct07c_cust = '99' AND ct14ano =\n'2003' AND ct14mes = '4';\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ad_ict14t_1 on ad_ct14t (cost=0.00..5.28 rows=1\nwidth=42) (actual time=5.504..5.504 rows=0 loops=1)\n Index Cond: ((ct14emp04 = 2) AND (ct03emp01 = 2) AND (ct03tradut =\n14930) AND (ct07emp01 = 2) AND (ct07c_cust = 99) AND (ct14ano = 2003)\nAND (ct14mes = 4))\n Total runtime: 5.548 ms\n(3 rows)\n\n\n\nWith '0' in the ct07c_cust field, they choose a more slow way:\n\n\nclient=# explain analyze SELECT ct14mes, ct14ano, ct07c_cust,\nct07emp01, ct03tradut, ct03emp01, ct14emp04, ct14debito, ct14credit\nFROM CT14T WHERE ct14emp04 = '2' AND ct03emp01 = '2' AND ct03tradut =\n'57393' AND ct07emp01 = '2' AND ct07c_cust = '0' AND ct14ano = '2002'\nAND ct14mes = '5';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Index Scan using ict14t1 on ct14t (cost=0.00..5.32 rows=1 width=42)\n(actual time=211.007..211.007 rows=0 loops=1)\n Index Cond: ((ct07emp01 = 2) AND (ct07c_cust = 0))\n Filter: ((ct14emp04 = 2) AND (ct03emp01 = 2) AND (ct03tradut =\n57393) AND (ct14ano = 2002) AND (ct14mes 
= 5))\n Total runtime: 211.062 ms\n(4 rows)\n\n\nAgain, if I create a table for test from this table (AD_CT14T) and\nonly create the index used in the first query plan, the results are ok\n(ct07c_cust=0 / same query above):\n\nclient=# create table ad_ct14t as select * from ct14t;\nSELECT\nclient=# create index ad_ict14t_abc on ad_ct14t(ct14emp04, ct03emp01,\nct03tradut, ct07emp01, ct07c_cust, ct14ano, ct14mes) where ct14emp04 =\n'2' AND ct03emp01 = '2' AND ct07emp01 = '2';\nCREATE\nclient=# explain analyze SELECT ct14mes, ct14ano, ct07c_cust,\nct07emp01, ct03tradut, ct03emp01, ct14emp04, ct14debito, ct14credit\nFROM AD_CT14T WHERE ct14emp04 = '2' AND ct03emp01 = '2' AND ct03tradut\n= '57393' AND ct07emp01 = '2' AND ct07c_cust = '0' AND ct14ano =\n'2002' AND ct14mes = '5';\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ad_ict14t_abc on ad_ct14t (cost=0.00..5.28 rows=1\nwidth=42) (actual time=0.043..0.043 rows=0 loops=1)\n Index Cond: ((ct14emp04 = 2) AND (ct03emp01 = 2) AND (ct03tradut =\n57393) AND (ct07emp01 = 2) AND (ct07c_cust = 0) AND (ct14ano = 2002)\nAND (ct14mes = 5))\n Total runtime: 0.091 ms\n(3 rows)\n\n\n\nI don't know why the planner prefer to use a less specific index\n(ict14t1) and do a filter than use an index that matches with the\nWHERE parameter...\n\nBest regards,\n\nAlexandre\n",
"msg_date": "Fri, 19 Mar 2010 18:04:46 -0300",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG using index+filter instead only use index"
},
{
"msg_contents": "Alexandre de Arruda Paes <[email protected]> writes:\n> 2010/3/19 Tom Lane <[email protected]>:\n>> The cost estimates look a bit unusual to me; are you using nondefault\n>> cost parameters, and if so what are they?\n\n> The non default value in cost parameters is different only in\n> random_page_cost that are set to 2.5 and default_statistics_target set\n> to 300.\n\nOkay, so with random_page_cost = 2.5, those cost estimates definitely\nindicate that it's expecting only one heap tuple to be visited, for\neither choice of index.\n\n> I don't know why the planner prefer to use a less specific index\n> (ict14t1) and do a filter than use an index that matches with the\n> WHERE parameter...\n\nThe cost estimate formulas bias the system against using a larger index\nwhen a smaller one will do. That seven-column index is probably at\nleast twice as large as the two-column index, so it's hardly\nunreasonable to assume that scanning it will take more I/O and cache\nspace and CPU time than using a smaller index, if all else is equal.\nNow of course all else is not equal if the smaller index is less\nselective than the larger one, but the cost estimates indicate that the\nplanner thinks the two-column index condition is sufficient to narrow\nthings down to only one heap tuple anyway.\n\nThe fact that the smaller index is actually slower indicates that this\nestimate is off, ie (ct07emp01 = 2) AND (ct07c_cust = 0) actually\nselects more than one heap tuple. It's hard to speculate about why that\nestimate is wrong on the basis of the information you've shown us\nthough. Perhaps there is a strong correlation between the values of\nthose two columns?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Mar 2010 22:15:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG using index+filter instead only use index "
}
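A sketch of how the suspected correlation could be checked, and of the one statistics knob available here (8.4 has no cross-column statistics, so raising the per-column target and re-running ANALYZE is about all that can be done from SQL; the 1000 is an arbitrary illustrative value):

    -- how skewed is the (ct07emp01, ct07c_cust) combination?
    SELECT ct07emp01, ct07c_cust, count(*) AS n
    FROM ct13t
    GROUP BY 1, 2
    ORDER BY n DESC
    LIMIT 10;

    -- gather a bigger sample for the skewed column, then refresh statistics
    ALTER TABLE ct13t ALTER COLUMN ct07c_cust SET STATISTICS 1000;
    ANALYZE ct13t;

If a single pair such as (2, 0) covers a large share of the table, the two-column index really does return many heap tuples for it, and the one-row estimate is what needs fixing.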
] |
[
{
"msg_contents": "---- Message from Corin <[email protected]> at 03-19-2010 01:26:35 PM\n------\n\n***snip ****\nThe intention of the query is to find rows with no \"partner\" row. The\noffset and limit are just to ignore the time needed to send the result\nto the client.\n-------\nI don't understand the point of OFFSET, limit will accomplish the same\nthing, PG will still execute the query the only difference is PG will skip\nthe step to count through the first million rows before returning a record.\n\n-------\n\nSELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS\nf2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id) OFFSET 1000000\nLIMIT 1\n\nMysql uses this query plan:\n1 PRIMARY f1 index NULL user_ref 8 NULL\n 2818860 Using where; Using index\n2 DEPENDENT SUBQUERY f2 ref user_ref user_ref 8\n f1.ref_id,f1.user_id 1 Using index\nTime: 9.8s\n\n-------\nif that's a query explain in Mysql its worthless. The above has no\ninformation, does not tell us how long each step is taking, let alone what\nit was thinking it would take to make the query work .\n------\n\nPostgre uses this query plan:\n\"Limit (cost=66681.50..66681.50 rows=1 width=139) (actual\ntime=7413.489..7413.489 rows=1 loops=1)\"\n\" -> Merge Anti Join (cost=40520.17..66681.50 rows=367793 width=139)\n(actual time=3705.078..7344.256 rows=1000001 loops=1)\"\n\" *Merge Cond: ((f1.user_id = f2.ref_id) AND (f1.ref_id =\nf2.user_id))\"*\n\" -> Index Scan using user_ref on friends f1\n(cost=0.00..26097.86 rows=2818347 width=139) (actual\ntime=0.093..1222.592 rows=1917360 loops=1)\"\n\" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8)\n(actual time=3704.977..5043.347 rows=1990148 loops=1)\"\n\" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8)\n(actual time=3704.970..4710.703 rows=1990148 loops=1)\"\n\" Sort Key: f2.ref_id, f2.user_id\"\n\" Sort Method: external merge Disk: 49576kB\"\n\" -> Seq Scan on friends f2 (cost=0.00..18143.18\nrows=2818347 width=8) (actual time=0.015..508.797 rows=2818347 loops=1)\"\n\"Total runtime: 7422.516 ms\"\n\n---\nWe can see each step PG takes and make inform decisions what part of the\nquery is slow . We can See the Sorting the rows takes most of the time\n---\n\nIt's already faster, which is great, but I wonder why the query plan is\nthat complex.\n\n----\nIts not complex it showing you all the steps which Mysql is not showing you\n----\n\nI read in the pqsql docs that using a multicolumn key is almost never\nneeded and only a waste of cpu/space. So I dropped the multicolumn key\nand added to separate keys instead:\n\n----\nWhere is that at??? I don't recall reading that. PG will only use indexes\nthat match exactly where/join conditions.\n\n----\n\nCREATE INDEX ref1 ON friends USING btree (ref_id);\nCREATE INDEX user1 ON friends USING btree (user_id);\n\nNew query plan:\n\"Limit (cost=70345.04..70345.04 rows=1 width=139) (actual\ntime=43541.709..43541.709 rows=1 loops=1)\"\n\" -> Merge Anti Join (cost=40520.27..70345.04 rows=367793 width=139)\n(actual time=3356.694..43467.818 rows=1000001 loops=1)\"\n\" * Merge Cond: (f1.user_id = f2.ref_id)\"\n\" Join Filter: (f1.ref_id = f2.user_id)\"\n---\n*take note the merge has changed. it now joins on f1.user_id=f2.ref_id then\nfilters the results down by using the AND condition. 
Put the index back *\n---\n*\" -> Index Scan using user1 on friends f1 (cost=0.00..26059.79\nrows=2818347 width=139) (actual time=0.031..1246.668 rows=1917365 loops=1)\"\n\" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8)\n(actual time=3356.615..14941.405* rows=130503729* loops=1)\"\n---\ntake note look at what happened here. this because the of Join is not\nlimited as it was before.\ndid you run this query against Mysql with the same kind of indexes???\n-----\n\" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8)\n(actual time=3356.611..4127.435 rows=1990160 loops=1)\"\n\" Sort Key: f2.ref_id\"\n\" Sort Method: external merge Disk: 49560kB\"\n\" -> Seq Scan on friends f2 (cost=0.00..18143.18\nrows=2818347 width=8) (actual time=0.012..496.174 rows=2818347 loops=1)\"\n\"Total runtime: 43550.187 ms\"\n\nI also wonder why it makes a difference when adding a \"LIMIT\" clause to\nthe subselect in an EXISTS subselect. Shouldn't pgsql always stop after\nfinding the a row? In mysql is makes no difference in speed, pgsql even\nget's slower when adding a LIMIT to the EXISTS subselect (I hoped it\nwould get faster?!).\n\n\n----\nLimits occur last after doing all the major work is done\n----\n\n\nSELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends AS\nf2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id LIMIT 1) OFFSET\n1000000 LIMIT 1\n\n\"Limit (cost=6389166.19..6389172.58 rows=1 width=139) (actual\ntime=54540.356..54540.356 rows=1 loops=1)\"\n\" -> Seq Scan on friends f1 (cost=0.00..9003446.87 rows=1409174\nwidth=139) (actual time=0.511..54460.006 rows=1000001 loops=1)\"\n\" Filter: (NOT (SubPlan 1))\"\n\" * SubPlan 1\"\n\" -> Limit (cost=2.18..3.19 rows=1 width=0) (actual\ntime=0.029..0.029 rows=0 loops=1832284)\"*\n---\nthis caused PG to create a subplan which resulted in a more complex plan\nPG is doing lots more work\n---\n\" -> Bitmap Heap Scan on friends f2 (cost=2.18..3.19\nrows=1 width=0) (actual time=0.028..0.028 rows=0 loops=1832284)\"\n\" Recheck Cond: (($0 = ref_id) AND ($1 = user_id))\"\n\" -> BitmapAnd (cost=2.18..2.18 rows=1 width=0)\n(actual time=0.027..0.027 rows=0 loops=1832284)\"\n\" -> Bitmap Index Scan on ref1\n(cost=0.00..1.09 rows=75 width=0) (actual time=0.011..0.011 rows=85\nloops=1832284)\"\n\" Index Cond: ($0 = ref_id)\"\n\" -> Bitmap Index Scan on user1\n(cost=0.00..1.09 rows=87 width=0) (actual time=0.011..0.011 rows=87\nloops=1737236)\"\n\" Index Cond: ($1 = user_id)\"\n\"Total runtime: 54540.431 ms\"\n\n---\nthis plan is not even close to the others\n---\n\n---- Message from Corin <[email protected]> at 03-19-2010 01:26:35 PM ------***snip ****The intention of the query is to find rows with no \"partner\" row. The\noffset and limit are just to ignore the time needed to send the resultto the client.-------I don't understand the point of OFFSET, limit will accomplish the same thing, PG will still execute the query the only difference is PG will skip the step to count through the first million rows before returning a record. \n-------SELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends ASf2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id) OFFSET 1000000LIMIT 1Mysql uses this query plan:1 PRIMARY f1 index NULL user_ref 8 NULL\n 2818860 Using where; Using index2 DEPENDENT SUBQUERY f2 ref user_ref user_ref 8 f1.ref_id,f1.user_id 1 Using indexTime: 9.8s-------if that's a query explain in Mysql its worthless. 
The above has no information, does not tell us how long each step is taking, let alone what it was thinking it would take to make the query work .\n------Postgre uses this query plan:\"Limit (cost=66681.50..66681.50 rows=1 width=139) (actualtime=7413.489..7413.489 rows=1 loops=1)\"\" -> Merge Anti Join (cost=40520.17..66681.50 rows=367793 width=139)\n(actual time=3705.078..7344.256 rows=1000001 loops=1)\"\" Merge Cond: ((f1.user_id = f2.ref_id) AND (f1.ref_id =f2.user_id))\"\" -> Index Scan using user_ref on friends f1 \n(cost=0.00..26097.86 rows=2818347 width=139) (actualtime=0.093..1222.592 rows=1917360 loops=1)\"\" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8)(actual time=3704.977..5043.347 rows=1990148 loops=1)\"\n\" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8)(actual time=3704.970..4710.703 rows=1990148 loops=1)\"\" Sort Key: f2.ref_id, f2.user_id\"\" Sort Method: external merge Disk: 49576kB\"\n\" -> Seq Scan on friends f2 (cost=0.00..18143.18rows=2818347 width=8) (actual time=0.015..508.797 rows=2818347 loops=1)\"\"Total runtime: 7422.516 ms\"---We can see each step PG takes and make inform decisions what part of the query is slow . We can See the Sorting the rows takes most of the time \n---It's already faster, which is great, but I wonder why the query plan isthat complex.----Its not complex it showing you all the steps which Mysql is not showing you\n----I read in the pqsql docs that using a multicolumn key is almost neverneeded and only a waste of cpu/space. So I dropped the multicolumn keyand added to separate keys instead:----Where is that at??? I don't recall reading that. PG will only use indexes that match exactly where/join conditions. \n----CREATE INDEX ref1 ON friends USING btree (ref_id);CREATE INDEX user1 ON friends USING btree (user_id);New query plan:\"Limit (cost=70345.04..70345.04 rows=1 width=139) (actualtime=43541.709..43541.709 rows=1 loops=1)\"\n\" -> Merge Anti Join (cost=40520.27..70345.04 rows=367793 width=139)(actual time=3356.694..43467.818 rows=1000001 loops=1)\"\" Merge Cond: (f1.user_id = f2.ref_id)\"\" Join Filter: (f1.ref_id = f2.user_id)\"\n---take note the merge has changed. it now joins on f1.user_id=f2.ref_id then filters the results down by using the AND condition. Put the index back ---\" -> Index Scan using user1 on friends f1 (cost=0.00..26059.79\nrows=2818347 width=139) (actual time=0.031..1246.668 rows=1917365 loops=1)\"\" -> Materialize (cost=40520.17..40555.40 rows=2818347 width=8)(actual time=3356.615..14941.405 rows=130503729 loops=1)\"\n---take note look at what happened here. this because the of Join is not limited as it was before. did you run this query against Mysql with the same kind of indexes???\n-----\" -> Sort (cost=40520.17..40527.21 rows=2818347 width=8)(actual time=3356.611..4127.435 rows=1990160 loops=1)\"\" Sort Key: f2.ref_id\"\" Sort Method: external merge Disk: 49560kB\"\n\" -> Seq Scan on friends f2 (cost=0.00..18143.18rows=2818347 width=8) (actual time=0.012..496.174 rows=2818347 loops=1)\"\"Total runtime: 43550.187 ms\"I also wonder why it makes a difference when adding a \"LIMIT\" clause to\nthe subselect in an EXISTS subselect. Shouldn't pgsql always stop afterfinding the a row? 
In mysql is makes no difference in speed, pgsql evenget's slower when adding a LIMIT to the EXISTS subselect (I hoped it\nwould get faster?!).----\nLimits occur last after doing all the major work is done \n----SELECT * FROM friends AS f1 WHERE NOT EXISTS (SELECT 1 FROM friends ASf2 WHERE f1.user_id=f2.ref_id AND f1.ref_id=f2.user_id LIMIT 1) OFFSET1000000 LIMIT 1\"Limit (cost=6389166.19..6389172.58 rows=1 width=139) (actual\ntime=54540.356..54540.356 rows=1 loops=1)\"\" -> Seq Scan on friends f1 (cost=0.00..9003446.87 rows=1409174width=139) (actual time=0.511..54460.006 rows=1000001 loops=1)\"\" Filter: (NOT (SubPlan 1))\"\n\" SubPlan 1\"\" -> Limit (cost=2.18..3.19 rows=1 width=0) (actualtime=0.029..0.029 rows=0 loops=1832284)\"---this caused PG to create a subplan which resulted in a more complex plan\nPG is doing lots more work ---\" -> Bitmap Heap Scan on friends f2 (cost=2.18..3.19rows=1 width=0) (actual time=0.028..0.028 rows=0 loops=1832284)\"\" Recheck Cond: (($0 = ref_id) AND ($1 = user_id))\"\n\" -> BitmapAnd (cost=2.18..2.18 rows=1 width=0)(actual time=0.027..0.027 rows=0 loops=1832284)\"\" -> Bitmap Index Scan on ref1 (cost=0.00..1.09 rows=75 width=0) (actual time=0.011..0.011 rows=85\nloops=1832284)\"\" Index Cond: ($0 = ref_id)\"\" -> Bitmap Index Scan on user1 (cost=0.00..1.09 rows=87 width=0) (actual time=0.011..0.011 rows=87\nloops=1737236)\"\" Index Cond: ($1 = user_id)\"\"Total runtime: 54540.431 ms\"---this plan is not even close to the others \n---",
"msg_date": "Fri, 19 Mar 2010 13:02:05 -0400",
"msg_from": "Justin Graf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: too complex query plan for not exists query and\n\tmulticolumn indexes"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have compiled PostgreSQL 8.4 from source code and in order to install\npgbench, I go under contrib folder and run below commands:\nmake\nmake install\nwhen I write pgbench as a command system cannot find pgbench as a command.\nAs a result I cannot use pgbench-tools because system does not interpret\npgbench as a command?\n\nWhat should I do to make the pgbench a command, shall I edit .bashrc file,\nif yes how?\nThanks in advance for all answers,\nBest Regards,\n --\nReydan Çankur\n\nHi All,\n \nI have compiled PostgreSQL 8.4 from source code and in order to install pgbench, I go under contrib folder and run below commands:\nmakemake install\nwhen I write pgbench as a command system cannot find pgbench as a command.\nAs a result I cannot use pgbench-tools because system does not interpret pgbench as a command?\n \nWhat should I do to make the pgbench a command, shall I edit .bashrc file, if yes how?\nThanks in advance for all answers,\nBest Regards,\n -- Reydan Çankur",
"msg_date": "Sat, 20 Mar 2010 06:50:38 +0200",
"msg_from": "Reydan Cankur <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench installation"
},
{
"msg_contents": "Reydan Cankur wrote:\n>\n> I have compiled PostgreSQL 8.4 from source code and in order to \n> install pgbench, I go under contrib folder and run below commands:\n> make\n> make install\n> when I write pgbench as a command system cannot find pgbench as a command.\n\nDo regular PostgreSQL command such as psql work? pgbench should be \ninstalled into the same directory those are.\n\nNormally you just need to add the directory all these programs are in to \nyour PATH environment variable when you login. If you're not sure where \nthat is, you could re-install pgbench and note where it puts it at when \nyou reinstall:\n\ncd contrib/pgbench\nmake clean\nmake\nmake install\n\nOr you could use a system utility such as \"locate pgbench\" or \"find\" to \nfigure out where it went at.\n\n> As a result I cannot use pgbench-tools because system does not \n> interpret pgbench as a command?\n\nYou don't actually need pgbench to be in your PATH for pgbench-tools to \nwork. If you look at the \"config\" file it uses, you'll find this:\n\nPGBENCHBIN=`which pgbench`\n\nIf pgbench is in your PATH, this will return its location, and since \nthat's normally the case for people running PostgreSQL it's the \ndefault. But you could alternately update this to include a direct \nlocation instead. For example, if pgbench was stored in your home \ndirectory, something like this would work:\n\nPGBENCHBIN=\"/home/me/pgsql/bin/pgbench\"\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 22 Mar 2010 01:00:44 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench installation"
}
] |
[
{
"msg_contents": "If you are really so desparate to save a couple of GB that you are resorting\nto -Z9 then I'd suggest using bzip2 instead.\n\nbzip is designed for things like installer images where there will be\nmassive amounts of downloads, so it uses a ton of cpu during compression,\nbut usually less than -Z9 and makes a better result.\n\nCheers\nDave\n\nOn Mar 21, 2010 10:50 AM, \"David Newall\" <[email protected]> wrote:\n\nTom Lane wrote:\n>\n> I would bet that the reason for the slow throughput is that gzip\n> is fruitlessl...\nIndeed, I didn't expect much reduction in size, but I also didn't expect a\nfour-order of magnitude increase in run-time (i.e. output at 10MB/second\ngoing down to 500KB/second), particularly as my estimate was based on\ngzipping a previously gzipped file. I think it's probably pathological\ndata, as it were. Might even be of interest to gzip's maintainers.\n\nIf you are really so desparate to save a couple of GB that you are resorting to -Z9 then I'd suggest using bzip2 instead.\nbzip is designed for things like installer images where there will be massive amounts of downloads, so it uses a ton of cpu during compression, but usually less than -Z9 and makes a better result.\nCheers\nDave \nOn Mar 21, 2010 10:50 AM, \"David Newall\" <[email protected]> wrote:Tom Lane wrote:>\n> I would bet that the reason for the slow throughput is that gzip> is fruitlessl...\nIndeed, I didn't expect much reduction in size, but I also didn't expect a four-order of magnitude increase in run-time (i.e. output at 10MB/second going down to 500KB/second), particularly as my estimate was based on gzipping a previously gzipped file. I think it's probably pathological data, as it were. Might even be of interest to gzip's maintainers.",
"msg_date": "Sun, 21 Mar 2010 12:04:00 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "GZIP of pre-zipped output"
},
{
"msg_contents": "On 22/03/2010 1:04 AM, Dave Crooke wrote:\n> If you are really so desparate to save a couple of GB that you are\n> resorting to -Z9 then I'd suggest using bzip2 instead.\n>\n> bzip is designed for things like installer images where there will be\n> massive amounts of downloads, so it uses a ton of cpu during\n> compression, but usually less than -Z9 and makes a better result.\n\nbzip2 doesn't work very well on gzip'd (deflated) data, though. For good \nresults, you'd want to feed it uncompressed data, which is a bit of a \npain when the compression is part of the PDF document structure and when \nyou otherwise want the PDFs to remain compressed.\n\nAnyway, if you're going for extreme compression, these days 7zip is \noften a better option than bzip2.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 22 Mar 2010 10:46:32 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GZIP of pre-zipped output"
},
{
"msg_contents": "On Sun, Mar 21, 2010 at 8:46 PM, Craig Ringer\n<[email protected]> wrote:\n> On 22/03/2010 1:04 AM, Dave Crooke wrote:\n>>\n>> If you are really so desparate to save a couple of GB that you are\n>> resorting to -Z9 then I'd suggest using bzip2 instead.\n>>\n>> bzip is designed for things like installer images where there will be\n>> massive amounts of downloads, so it uses a ton of cpu during\n>> compression, but usually less than -Z9 and makes a better result.\n>\n> bzip2 doesn't work very well on gzip'd (deflated) data, though. For good\n> results, you'd want to feed it uncompressed data, which is a bit of a pain\n> when the compression is part of the PDF document structure and when you\n> otherwise want the PDFs to remain compressed.\n>\n> Anyway, if you're going for extreme compression, these days 7zip is often a\n> better option than bzip2.\n\nThere's often a choice of two packages, 7z, and 7za, get 7za, it's the\nlater model version.\n",
"msg_date": "Sun, 21 Mar 2010 21:00:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GZIP of pre-zipped output"
}
] |
[
{
"msg_contents": "I previously posted 'forcing index scan on query produces 16x faster' \nand it seemed that the consensus was that 8.0.x series had an issue. I \nhave upgraded to the highest practical version for our distro. But we \nseem to have the same issue.\n\nIf I force the 'enable_seqscan' off our actual time is 9ms where if \n'enable_seqscan' is on the performance is 2200ms ( the good news is the \nSeq Scan query on 8.2 is 1/2 the time of the 8.0 query ).\n\n\nThe paste is below - I reloaded the table from scratch after the 8.2 \nupgrade. Then I ran a 'REINDEX DATABASE' and a 'VACUUM ANALYZE' (then \nran some queries and reran the vac analyze).\n\n\n\npostream=> SELECT version();\n version\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.11 on i386-redhat-linux-gnu, compiled by GCC gcc (GCC) \n4.1.2 20070925 (Red Hat 4.1.2-33)\n(1 row)\n\npostream=> SET enable_seqscan = false;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count, \nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != ''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) + \ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < ((cast('2010-02-15' as date) + 1) + \ncast(sysstrings.data as time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=426973.65..426973.86 rows=14 width=35) (actual \ntime=9.424..9.438 rows=12 loops=1)\n -> Nested Loop (cost=0.01..426245.31 rows=97113 width=35) (actual \ntime=0.653..6.954 rows=894 loops=1)\n -> Nested Loop (cost=0.01..2416.59 rows=22477 width=4) \n(actual time=0.595..2.150 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on sysstrings \n(cost=0.00..8.27 rows=1 width=182) (actual time=0.110..0.112 rows=1 loops=1)\n Index Cond: (id = 'net/Console/Employee/Day End \nTime'::text)\n -> Index Scan using sales_tranzdate_index on sales s \n(cost=0.01..1846.40 rows=22477 width=12) (actual time=0.454..1.687 \nrows=225 loops=1)\n Index Cond: ((s.tranzdate >= ('2010-02-15'::date + \n(sysstrings.data)::time without time zone)) AND (s.tranzdate < \n('2010-02-16'::date + (sysstrings.data)::time without time zone)))\n Filter: ((NOT void) AND (NOT suspended))\n -> Index Scan using salesitems_pkey on salesitems si \n(cost=0.00..18.54 rows=25 width=39) (actual time=0.007..0.013 rows=4 \nloops=225)\n Index Cond: (si.id = s.id)\n Filter: (((group1_id)::text <> ''::text) AND (group1_id \nIS NOT NULL) AND (NOT void))\n Total runtime: 9.585 ms\n(12 rows)\n\npostream=> SET enable_seqscan = true;\nSET\npostream=> EXPLAIN ANALYZE\npostream-> SELECT si.group1_id as name, sum(si.qty) as count, \nsum(si.amt) as amt\npostream-> FROM salesitems si, sales s, sysstrings\npostream-> WHERE si.id = s.id\npostream-> AND si.group1_id != ''\npostream-> AND si.group1_id IS NOT NULL\npostream-> AND NOT si.void\npostream-> AND NOT s.void\npostream-> AND NOT s.suspended\npostream-> AND s.tranzdate >= (cast('2010-02-15' as date) + \ncast(sysstrings.data as time))\npostream-> AND s.tranzdate < 
((cast('2010-02-15' as date) + 1) + \ncast(sysstrings.data as time))\npostream-> AND sysstrings.id='net/Console/Employee/Day End Time'\npostream-> GROUP BY name;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=38315.09..38315.30 rows=14 width=35) (actual \ntime=2206.531..2206.545 rows=12 loops=1)\n -> Hash Join (cost=2697.55..37586.74 rows=97113 width=35) (actual \ntime=2128.070..2204.048 rows=894 loops=1)\n Hash Cond: (si.id = s.id)\n -> Seq Scan on salesitems si (cost=0.00..30578.15 \nrows=890646 width=39) (actual time=0.047..1487.688 rows=901281 loops=1)\n Filter: (((group1_id)::text <> ''::text) AND (group1_id \nIS NOT NULL) AND (NOT void))\n -> Hash (cost=2416.59..2416.59 rows=22477 width=4) (actual \ntime=1.823..1.823 rows=225 loops=1)\n -> Nested Loop (cost=0.01..2416.59 rows=22477 width=4) \n(actual time=0.477..1.592 rows=225 loops=1)\n -> Index Scan using sysstrings_pkey on \nsysstrings (cost=0.00..8.27 rows=1 width=182) (actual time=0.039..0.040 \nrows=1 loops=1)\n Index Cond: (id = 'net/Console/Employee/Day \nEnd Time'::text)\n -> Index Scan using sales_tranzdate_index on \nsales s (cost=0.01..1846.40 rows=22477 width=12) (actual \ntime=0.410..1.187 rows=225 loops=1)\n Index Cond: ((s.tranzdate >= \n('2010-02-15'::date + (sysstrings.data)::time without time zone)) AND \n(s.tranzdate < ('2010-02-16'::date + (sysstrings.data)::time without \ntime zone)))\n Filter: ((NOT void) AND (NOT suspended))\n Total runtime: 2206.706 ms\n(13 rows)\n\npostream=> \\d salesitems;\n Table \"public.salesitems\"\n Column | Type | Modifiers\n--------------+--------------------------+------------------------\n id | integer | not null\n lineno | smallint | not null\n plu | character varying(35) |\n qty | numeric(8,3) | not null\n amt | numeric(10,2) |\n last_updated | timestamp with time zone | default now()\n group1_id | character varying(64) |\n group2_id | text |\n group3_id | text |\n void | boolean | not null default false\n hash | boolean | not null default false\n component | boolean | not null default false\n subitem | boolean | not null default false\nIndexes:\n \"salesitems_pkey\" PRIMARY KEY, btree (id, lineno)\n \"idx_si_group_id\" btree (group1_id)\n \"salesitems_last_updated_index\" btree (last_updated)\n\n-- \nChristian Brink\n\n\n",
"msg_date": "Mon, 22 Mar 2010 15:09:11 -0400",
"msg_from": "Christian Brink <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL upgraded to 8.2 but forcing index scan on query produces\n\tfaster"
},
{
"msg_contents": "Christian Brink <[email protected]> writes:\n> -> Nested Loop (cost=0.01..2416.59 rows=22477 width=4) \n> (actual time=0.595..2.150 rows=225 loops=1)\n> -> Index Scan using sysstrings_pkey on sysstrings \n> (cost=0.00..8.27 rows=1 width=182) (actual time=0.110..0.112 rows=1 loops=1)\n> Index Cond: (id = 'net/Console/Employee/Day End \n> Time'::text)\n> -> Index Scan using sales_tranzdate_index on sales s \n> (cost=0.01..1846.40 rows=22477 width=12) (actual time=0.454..1.687 \n> rows=225 loops=1)\n> Index Cond: ((s.tranzdate >= ('2010-02-15'::date + \n> (sysstrings.data)::time without time zone)) AND (s.tranzdate < \n> ('2010-02-16'::date + (sysstrings.data)::time without time zone)))\n> Filter: ((NOT void) AND (NOT suspended))\n\nThe fundamental reason why you're getting a bad plan choice is the\nfactor-of-100 estimation error here. I'm not sure you can do a whole\nlot about that without rethinking the query --- in particular I would\nsuggest trying to get rid of the non-constant range bounds. You're\napparently already plugging in an external variable for the date,\nso maybe you could handle the time of day similarly instead of joining\nto sysstrings for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 22 Mar 2010 15:21:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL upgraded to 8.2 but forcing index scan on query\n\tproduces faster"
},
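A minimal sketch of the rewrite Tom suggests, assuming the schema shown in the first message of this thread and a caller that can run two statements; the 04:00 value is only an illustration of what the sysstrings lookup might return:

    -- Step 1: fetch the day-end time once, outside the reporting query.
    SELECT cast(data AS time) AS day_end
      FROM sysstrings
     WHERE id = 'net/Console/Employee/Day End Time';

    -- Step 2: plug the result in as constant bounds (here assuming 04:00),
    -- so the planner can estimate the tranzdate range instead of guessing.
    SELECT si.group1_id AS name, sum(si.qty) AS count, sum(si.amt) AS amt
      FROM salesitems si
      JOIN sales s ON si.id = s.id
     WHERE si.group1_id <> ''
       AND si.group1_id IS NOT NULL
       AND NOT si.void
       AND NOT s.void
       AND NOT s.suspended
       AND s.tranzdate >= timestamp '2010-02-15 04:00'
       AND s.tranzdate <  timestamp '2010-02-16 04:00'
     GROUP BY si.group1_id;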
{
"msg_contents": "Not to beat a dead horse excessively, but I think the below is a pretty\ngood argument for index hints? I know the general optimizer wants to be\nhighest priority (I very much agree with this), but I think there are\nfully legitimate cases like the below. Asking the user to rewrite the\nquery in an unnatural way (or to change optimizer params that may work\nfor 99% of queries) is, IMO not a good thing. Given that the postgres\noptimizer can never be perfect (as it will never have the perfect\nknowledge necessary for a perfect decision), I would request that index\nhints be reconsidered (for 9.0?). I know many users (myself included)\nare doing this in a very rudimentary way by disabling particular access\ntypes on a per session basis \"set enable_seqscan=off; set\nenable_hashjoin=off; QUERY set enable_seqscan=on; set\nenable_hashjoin=on;\"... I'd hack up a patch if I had the time at least\n=)\n\nBest regards, Patrick\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent: Monday, March 22, 2010 12:22 PM\nTo: Christian Brink\nCc: [email protected]\nSubject: Re: [PERFORM] PostgreSQL upgraded to 8.2 but forcing index scan\non query produces faster \n\nChristian Brink <[email protected]> writes:\n> -> Nested Loop (cost=0.01..2416.59 rows=22477 width=4) \n> (actual time=0.595..2.150 rows=225 loops=1)\n> -> Index Scan using sysstrings_pkey on sysstrings \n> (cost=0.00..8.27 rows=1 width=182) (actual time=0.110..0.112 rows=1\nloops=1)\n> Index Cond: (id = 'net/Console/Employee/Day End \n> Time'::text)\n> -> Index Scan using sales_tranzdate_index on sales s\n\n> (cost=0.01..1846.40 rows=22477 width=12) (actual time=0.454..1.687 \n> rows=225 loops=1)\n> Index Cond: ((s.tranzdate >= ('2010-02-15'::date\n+ \n> (sysstrings.data)::time without time zone)) AND (s.tranzdate < \n> ('2010-02-16'::date + (sysstrings.data)::time without time zone)))\n> Filter: ((NOT void) AND (NOT suspended))\n\nThe fundamental reason why you're getting a bad plan choice is the\nfactor-of-100 estimation error here. I'm not sure you can do a whole\nlot about that without rethinking the query --- in particular I would\nsuggest trying to get rid of the non-constant range bounds. You're\napparently already plugging in an external variable for the date,\nso maybe you could handle the time of day similarly instead of joining\nto sysstrings for it.\n\n\t\t\tregards, tom lane\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Mar 2010 16:12:07 -0700",
"msg_from": "\"Eger, Patrick\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL upgraded to 8.2 but forcing index scan on query\n\tproduces faster"
},
{
"msg_contents": "On 03/22/2010 03:21 PM, Tom Lane wrote:\n> The fundamental reason why you're getting a bad plan choice is the\n> factor-of-100 estimation error here. I'm not sure you can do a whole\n> lot about that without rethinking the query --- in particular I would\n> suggest trying to get rid of the non-constant range bounds. You're\n> apparently already plugging in an external variable for the date,\n> so maybe you could handle the time of day similarly instead of joining\n> to sysstrings for it.\n>\n> \n\nTom & Peter,\n\nI thought you might like to know the outcome of this. I was able to get \nthe 8.0 and the 8.2 planner to correctly run the query. There were 2 \nissues. As Tom pointed out the the 'systrings' lookup seems to be the \nmain culprit. Which makes sense. How can the planner know how to run the \nquery when it doesn't know approximately what it will bracket the until \nthe query has started?\n\nThe other part of the solution is bit concerning. I had to do a 'dump \nand load' (and vacuum analyze) to get the planner to work correctly \neven after I rewrote the query. FYI I had run 'VACUUM ANALYZE' (and \nsometimes 'REINDEX TABLE x') between each test.\n\n\n-- \nChristian Brink\n\n\n",
"msg_date": "Wed, 24 Mar 2010 12:06:35 -0400",
"msg_from": "Christian Brink <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL upgraded to 8.2 but forcing index scan on\n\tquery produces faster"
}
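A low-tech way to see what the dump and reload actually changed (a sketch only; relpages and reltuples in pg_class are the starting point for the planner's size estimates, so comparing them before and after the reload shows how much bloat was removed):

    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('sales', 'salesitems');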
] |
[
{
"msg_contents": "Here we go again!\n\nBased on recommendations made here, I got my client to migrate off of our \nWindows 2003 Server x64 box to a new Linux box.\n\n# CENTOS 5.4\n# Linux mdx_octo 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 \nx86_64 x86_64 GNU/Linux\n# pgsql 8.3.10, 8 CPUs, 48GB RAM\n# RAID 10, 4 Disks\n\nBelow are the config values of this production server (those not listed are \nthose stubbed out) . Sadly, in an attempt to improve the server's \nperformance, someone wiped out all of the changes I had made to date, along \nwith comments indicating previous values, reason for the change, etc.\n\nThis is a data warehouse production server, used for ETL. 500 GB database, \napprox 8000 tables and growing, although the vast majority of them are the \noriginal import resource tables and are rarely accessed. The actual core \ndata is about 200 tables, consisting of millions of rows. Data importing and \ncontent management is done via a 15,000 line TCL import scripts and \napplication base (as this is ETL with fuzzy logic, not just COPY... FROM...) \n.\n\nSo, we have the hardware, we have the O/S - but I think our config leaves \nmuch to be desired. Typically, our planner makes nad decisions, picking seq \nscan over index scan, where index scan has a better result.\n\nCan anyone see any obvious faults?\n\nCarlo\n\nautovacuum = on\nautovacuum_analyze_scale_factor = 0.05\nautovacuum_analyze_threshold = 1000\nautovacuum_naptime = 1min\nautovacuum_vacuum_cost_delay = 50\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_vacuum_threshold = 1000\nbgwriter_lru_maxpages = 100\ncheckpoint_segments = 128\ncheckpoint_warning = 290s\nclient_min_messages = debug1\ndatestyle = 'iso, mdy'\ndefault_statistics_target = 250\ndefault_text_search_config = 'pg_catalog.english'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nlisten_addresses = '*'\nlog_destination = 'stderr'\nlog_error_verbosity = verbose\nlog_line_prefix = '%t '\nlog_min_error_statement = debug1\nlog_min_messages = debug1\nlogging_collector = on\nmaintenance_work_mem = 256MB\nmax_connections = 100\nmax_fsm_relations = 1000\nmax_locks_per_transaction = 128\nport = 5432\nshared_buffers = 4096MB\nshared_preload_libraries = '$libdir/plugins/plugin_debugger.so'\ntrack_counts = on\nvacuum_cost_delay = 5\nwal_buffers = 4MB\nwal_sync_method = open_sync\nwork_mem = 64MB \n\n",
"msg_date": "Mon, 22 Mar 2010 18:36:03 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Got that new server, now it's time for config!"
},
{
"msg_contents": "On 3/22/10 4:36 PM, Carlo Stonebanks wrote:\n> Here we go again!\n>\n> Can anyone see any obvious faults?\n>\n> Carlo\n>\n> maintenance_work_mem = 256MB\nI'm not sure how large your individual tables are, but you might want to \nbump this value up to get faster vacuums.\n> max_fsm_relations = 1000\nI think this will definitely need to be increased\n> work_mem = 64MB\nMost data warehousing loads I can think of will need more work_mem, but \nthis depends on how large of data sets you are planning to sort.\n\n",
"msg_date": "Mon, 22 Mar 2010 17:16:31 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Got that new server, now it's time for config!"
},
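One way to check whether max_fsm_relations and max_fsm_pages really are too small on an 8.3 box (a sketch, assuming superuser access so every table gets vacuumed): a database-wide VACUUM VERBOSE prints the needed-versus-configured free space map numbers in its final few lines.

    VACUUM VERBOSE;
    SHOW max_fsm_relations;
    SHOW max_fsm_pages;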
{
"msg_contents": "Carlo Stonebanks wrote:\n> So, we have the hardware, we have the O/S - but I think our config \n> leaves much to be desired. Typically, our planner makes nad decisions, \n> picking seq scan over index scan, where index scan has a better result.\n>\n\nYou're not setting effective_cache_size, so I wouldn't expect it to ever \nchoose an index scan given the size of your data set. The planner \nthinks that anything bigger than 128MB isn't likely to fit in RAM by \ndefault, which favors sequential scans. That parameter should probably \nbe >24GB on your server, so it's off by more than two orders of magnitude.\n\n> wal_sync_method = open_sync\n\nThis is a scary setting to be playing with on Linux when using ext3 \nfilesystems due to general kernel bugginess in this area. See \nhttp://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php for an \nexample. I wouldn't change this from the default in your position if \nusing that filesystem.\n\nI'd drastically increase effective_cache_size, put wal_sync_method back \nto the default, and then see how things go for a bit before tweaking \nanything else. Nothing else jumped out as bad in your configuration \nbesides the extremely high logging levels, haven't looked at it that \ncarefully yet though.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 23 Mar 2010 00:12:09 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Got that new server, now it's time for config!"
},
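A quick way to test the effective_cache_size point before editing postgresql.conf — a session-level sketch only, and the 24GB figure is just the estimate given above for a 48GB box:

    SHOW effective_cache_size;          -- the untouched default
    SET effective_cache_size = '24GB';  -- affects this session only
    -- now EXPLAIN ANALYZE one of the queries that was choosing a seq scan
    -- and see whether the plan flips to the index scan; if it does, make
    -- the setting permanent in postgresql.conf and reload.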
{
"msg_contents": "On Tue, Mar 23, 2010 at 12:12 AM, Greg Smith <[email protected]> wrote:\n\n> Carlo Stonebanks wrote:\n>\n>> So, we have the hardware, we have the O/S - but I think our config leaves\n>> much to be desired. Typically, our planner makes nad decisions, picking seq\n>> scan over index scan, where index scan has a better result.\n>>\n>>\n> You're not setting effective_cache_size, so I wouldn't expect it to ever\n> choose an index scan given the size of your data set. The planner thinks\n> that anything bigger than 128MB isn't likely to fit in RAM by default, which\n> favors sequential scans. That parameter should probably be >24GB on your\n> server, so it's off by more than two orders of magnitude.\n\n\n+1\n\n\nI'm curious why you've set:\nlog_min_error_statement = debug1\nlog_min_messages = debug1\nclient_min_messages = debug1\n\nAlthough not directly addressing the problem of using index scans, this is\ngoing to be causing lots of message verbosity, possibly (based on your rate)\nenough to clobber the disks more than you need to.\n\n-Scott M\n\n\n\n\n>\n>\n> wal_sync_method = open_sync\n>>\n>\n> This is a scary setting to be playing with on Linux when using ext3\n> filesystems due to general kernel bugginess in this area. See\n> http://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php for an\n> example. I wouldn't change this from the default in your position if using\n> that filesystem.\n>\n> I'd drastically increase effective_cache_size, put wal_sync_method back to\n> the default, and then see how things go for a bit before tweaking anything\n> else. Nothing else jumped out as bad in your configuration besides the\n> extremely high logging levels, haven't looked at it that carefully yet\n> though.\n>\n> --\n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Tue, Mar 23, 2010 at 12:12 AM, Greg Smith <[email protected]> wrote:\nCarlo Stonebanks wrote:\n\nSo, we have the hardware, we have the O/S - but I think our config leaves much to be desired. Typically, our planner makes nad decisions, picking seq scan over index scan, where index scan has a better result.\n\n\n\nYou're not setting effective_cache_size, so I wouldn't expect it to ever choose an index scan given the size of your data set. The planner thinks that anything bigger than 128MB isn't likely to fit in RAM by default, which favors sequential scans. That parameter should probably be >24GB on your server, so it's off by more than two orders of magnitude.\n+1I'm curious why you've set:log_min_error_statement = debug1\nlog_min_messages = debug1client_min_messages = debug1\nAlthough not directly addressing the problem of using index scans, this is going to be causing lots of message verbosity, possibly (based on your rate) enough to clobber the disks more than you need to.\n-Scott M\n\n \n\n\nwal_sync_method = open_sync\n\n\nThis is a scary setting to be playing with on Linux when using ext3 filesystems due to general kernel bugginess in this area. See http://archives.postgresql.org/pgsql-hackers/2007-10/msg01310.php for an example. 
I wouldn't change this from the default in your position if using that filesystem.\n\nI'd drastically increase effective_cache_size, put wal_sync_method back to the default, and then see how things go for a bit before tweaking anything else. Nothing else jumped out as bad in your configuration besides the extremely high logging levels, haven't looked at it that carefully yet though.\n\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 23 Mar 2010 09:08:17 -0400",
"msg_from": "Scott Mead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Got that new server, now it's time for config!"
}
] |
[
{
"msg_contents": "Hi All,\n\n \n\nWe have a postgres database in which couple of tables get bloated due to\nheavy inserts and deletes. Auto vacuum is running. My question is how\ncan I make auto vacuum more aggressive? I am thinking of enabling\nautovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit\nparameters. Can anyone suggest how to calculate the appropriate values\nfor these parameters and if there are any side effects of enabling these\nparameters. Any help will be highly appreciated.\n\n \n\nThanks\n\nParamjeet Kaur\n\n\n\n\n\n\n\n\n\n\n\nHi All,\n \nWe have a postgres database in which couple of tables get\nbloated due to heavy inserts and deletes. Auto vacuum is running. My question\nis how can I make auto vacuum more aggressive? I am thinking of enabling autovacuum_vacuum_cost_delay\nand autovacuum_vacuum_cost_limit parameters. Can anyone suggest how to calculate\nthe appropriate values for these parameters and if there are any side effects\nof enabling these parameters. Any help will be highly\nappreciated.\n \nThanks\nParamjeet Kaur",
"msg_date": "Tue, 23 Mar 2010 16:54:37 -0400",
"msg_from": "\"Bhella Paramjeet-PFCW67\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuning auto vacuum for highly active tables"
},
{
"msg_contents": "we ran into the same problem, had big table, played with vacuum cost and\ndelay, but can't shrink too much because of heavy insert and delete.\nwe ended up with using slony for upgrade, also have data copy from fresh\nbecause of inital replication to shrink our large table, with minimum\ncontrolled downtime.\n\nBhella Paramjeet-PFCW67 wrote:\n>\n> Hi All,\n>\n> \n>\n> We have a postgres database in which couple of tables get bloated due\n> to heavy inserts and deletes. Auto vacuum is running. My question is\n> how can I make auto vacuum more aggressive? I am thinking of enabling\n> autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit\n> parameters. Can anyone suggest how to calculate the appropriate values\n> for these parameters and if there are any side effects of enabling\n> these parameters. Any help will be highly appreciated.\n>\n> \n>\n> Thanks\n>\n> Paramjeet Kaur\n>\n\n",
"msg_date": "Tue, 23 Mar 2010 17:35:45 -0400",
"msg_from": "Szu-Ching Peckner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning auto vacuum for highly active tables"
},
{
"msg_contents": "On Tue, Mar 23, 2010 at 2:54 PM, Bhella Paramjeet-PFCW67\n<[email protected]> wrote:\n> Hi All,\n>\n>\n>\n> We have a postgres database in which couple of tables get bloated due to\n> heavy inserts and deletes. Auto vacuum is running. My question is how can I\n> make auto vacuum more aggressive? I am thinking of enabling\n> autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit parameters.\n> Can anyone suggest how to calculate the appropriate values for these\n> parameters and if there are any side effects of enabling these parameters.\n> Any help will be highly appreciated.\n\nOK, autovacuum runs x number of threads, and these threads can have\ntheir IO impact limited by cost delay and cost limit.\n\nYour first choice is based a lot on your db needs. If you have a lot\nof large tables that all need to be vacuumed a lot, then you might\nwant to first increase the number of threads before making any of them\nmore aggressive. Then you might want to make the vacuums more\naggressive by lower cost_delay down from 20 to 10 or 5 or so\nmilliseconds.\n\nOn our servers we run 6 threads with a cost_delay of 3 or 4\nmilliseconds, and autovacuum keeps up without getting in the way. We\nhave a decent DAS array, so we can handle a lot of vacuum threads\nrunning at once before they become an issue.\n\nThe smaller your disk set, the less you can throw vacuum at it and not\nexpect it to mess up the rest of the app. It's all a trade off, but\nif you don't have more than a disk or two to throw at your db don't\nexpect vacuum to keep up with really heavy activity without impacting\nyour system's performance.\n",
"msg_date": "Tue, 23 Mar 2010 15:41:48 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning auto vacuum for highly active tables"
},
{
"msg_contents": "Hi Scott,\n\nThanks for replying. \nCan you explain what you mean by increase the number of threads or how I can increase the number of threads? I just have 2 tables that are very active. I am using postgres version 8.2.7 and 3510 storagetek array with 10 disks in raid 1+0. \n\nThanks\nParamjeet Kaur\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, March 23, 2010 2:42 PM\nTo: Bhella Paramjeet-PFCW67\nCc: [email protected]; [email protected]\nSubject: Re: [ADMIN] tuning auto vacuum for highly active tables\n\nOn Tue, Mar 23, 2010 at 2:54 PM, Bhella Paramjeet-PFCW67\n<[email protected]> wrote:\n> Hi All,\n>\n>\n>\n> We have a postgres database in which couple of tables get bloated due to\n> heavy inserts and deletes. Auto vacuum is running. My question is how can I\n> make auto vacuum more aggressive? I am thinking of enabling\n> autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit parameters.\n> Can anyone suggest how to calculate the appropriate values for these\n> parameters and if there are any side effects of enabling these parameters.\n> Any help will be highly appreciated.\n\nOK, autovacuum runs x number of threads, and these threads can have\ntheir IO impact limited by cost delay and cost limit.\n\nYour first choice is based a lot on your db needs. If you have a lot\nof large tables that all need to be vacuumed a lot, then you might\nwant to first increase the number of threads before making any of them\nmore aggressive. Then you might want to make the vacuums more\naggressive by lower cost_delay down from 20 to 10 or 5 or so\nmilliseconds.\n\nOn our servers we run 6 threads with a cost_delay of 3 or 4\nmilliseconds, and autovacuum keeps up without getting in the way. We\nhave a decent DAS array, so we can handle a lot of vacuum threads\nrunning at once before they become an issue.\n\nThe smaller your disk set, the less you can throw vacuum at it and not\nexpect it to mess up the rest of the app. It's all a trade off, but\nif you don't have more than a disk or two to throw at your db don't\nexpect vacuum to keep up with really heavy activity without impacting\nyour system's performance.\n",
"msg_date": "Tue, 23 Mar 2010 19:28:03 -0400",
"msg_from": "\"Bhella Paramjeet-PFCW67\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tuning auto vacuum for highly active tables"
},
{
"msg_contents": "On Tue, Mar 23, 2010 at 5:28 PM, Bhella Paramjeet-PFCW67\n<[email protected]> wrote:\n> Hi Scott,\n>\n> Thanks for replying.\n> Can you explain what you mean by increase the number of threads or how I can increase the number of threads? I just have 2 tables that are very active. I am using postgres version 8.2.7 and 3510 storagetek array with 10 disks in raid 1+0.\n\nSure, if you psql into your db and do:\n\nshow autovac\n\nand hit tab a couple times you'll see a list of all these\nconfiguration options. The one we're looking for is\nautovacuum_max_workers. Setting this to something higher will allow\nthat many threads to run at once. While 6 or 8 threads at 5 or 10\nmilliseconds delay is ok on a bigger RAID array, it'll kill the perf\nof a machine with a pair of disks in a RAID-1. As you drop the\ncost_delay, you can no longer run as many threads without starving\nyour machine of IO. It's a good idea to keep track of how many vacuum\nthreads you're usually running and how long they run for\n(pg_stat_activity can shed some light there).\n\nWhat you're trying to do is get enough threads running so any large\ntables that take a long time (half hour or more maybe) to vacuum don't\nget in the way of all the other tables getting vacuumed too. If\nyou've got 2 really big tables and the rest aren't so big, then three\nthreads is likely plenty. If you've got 40 really big tables that\ntake a long time to get vacuumed, then you might need more than just 3\nthreads.\n\nUse iostat -x 10 /dev/sdx\n\nto monitor the db arrays to see how much IO you're utilizing with x\nautovac threads running, then increase it and keep an eye on it, all\nwhile running under a fairly steady production load. If going from 3\nto 4 threads makes the average utilization go up by 5% then you have\nan idea how much each thread is costing you.\n\nChanging cost_delay OR cost_limit will directly affect how much IO is\ngetting thrown to autovacuum. Lower values of cost_delay and higher\nvalues of cost_limit will cost you more in terms of autovacuum daemon\nusing up IO. You really don't want it using up too much of your IO.\nI shoot for something in the 10% range max, maybe a bit more.\n\nThen you want to keep an eye on table bloat to see if vacuum is\nkeeping up. If it's falling behind vacuum verbose as superuser will\ngive you an idea.\n\n\n>\n> Thanks\n> Paramjeet Kaur\n>\n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]]\n> Sent: Tuesday, March 23, 2010 2:42 PM\n> To: Bhella Paramjeet-PFCW67\n> Cc: [email protected]; [email protected]\n> Subject: Re: [ADMIN] tuning auto vacuum for highly active tables\n>\n> On Tue, Mar 23, 2010 at 2:54 PM, Bhella Paramjeet-PFCW67\n> <[email protected]> wrote:\n>> Hi All,\n>>\n>>\n>>\n>> We have a postgres database in which couple of tables get bloated due to\n>> heavy inserts and deletes. Auto vacuum is running. My question is how can I\n>> make auto vacuum more aggressive? I am thinking of enabling\n>> autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit parameters.\n>> Can anyone suggest how to calculate the appropriate values for these\n>> parameters and if there are any side effects of enabling these parameters.\n>> Any help will be highly appreciated.\n>\n> OK, autovacuum runs x number of threads, and these threads can have\n> their IO impact limited by cost delay and cost limit.\n>\n> Your first choice is based a lot on your db needs. 
If you have a lot\n> of large tables that all need to be vacuumed a lot, then you might\n> want to first increase the number of threads before making any of them\n> more aggressive. Then you might want to make the vacuums more\n> aggressive by lower cost_delay down from 20 to 10 or 5 or so\n> milliseconds.\n>\n> On our servers we run 6 threads with a cost_delay of 3 or 4\n> milliseconds, and autovacuum keeps up without getting in the way. We\n> have a decent DAS array, so we can handle a lot of vacuum threads\n> running at once before they become an issue.\n>\n> The smaller your disk set, the less you can throw vacuum at it and not\n> expect it to mess up the rest of the app. It's all a trade off, but\n> if you don't have more than a disk or two to throw at your db don't\n> expect vacuum to keep up with really heavy activity without impacting\n> your system's performance.\n>\n\n\n\n-- \nWhen fascism comes to America, it will be intolerance sold as diversity.\n",
"msg_date": "Tue, 23 Mar 2010 17:59:10 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning auto vacuum for highly active tables"
},
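A hedged example of the pg_stat_activity check mentioned above, using the pre-9.2 column names that match the 8.2/8.3 servers being discussed (manual VACUUMs will show up; autovacuum activity should appear as well, prefixed with "autovacuum:"):

    SELECT procpid, now() - query_start AS running_for, current_query
      FROM pg_stat_activity
     WHERE current_query ILIKE '%vacuum%'
       AND procpid <> pg_backend_pid()   -- don't match this query itself
     ORDER BY query_start;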
{
"msg_contents": "Scott Marlowe escribi�:\n> On Tue, Mar 23, 2010 at 5:28 PM, Bhella Paramjeet-PFCW67\n> <[email protected]> wrote:\n> > Hi Scott,\n> >\n> > Thanks for replying.\n> > Can you explain what you mean by increase the number of threads or how I can increase the number of threads? I just have 2 tables that are very active. I am using postgres version 8.2.7 and 3510 storagetek array with 10 disks in raid 1+0.\n> \n> Sure, if you psql into your db and do:\n> \n> show autovac\n> \n> and hit tab a couple times you'll see a list of all these\n> configuration options. The one we're looking for is\n> autovacuum_max_workers. Setting this to something higher will allow\n> that many threads to run at once. While 6 or 8 threads at 5 or 10\n> milliseconds delay is ok on a bigger RAID array, it'll kill the perf\n> of a machine with a pair of disks in a RAID-1. As you drop the\n> cost_delay, you can no longer run as many threads without starving\n> your machine of IO. It's a good idea to keep track of how many vacuum\n> threads you're usually running and how long they run for\n> (pg_stat_activity can shed some light there).\n\nHmm, keep in mind that having more workers means that each one of them\nincrements its cost_delay so that the total is roughly what you\nconfigured.\n\nAlso, keep in mind that max_workers is a new setting in 8.3. Since the\nOP is running 8.2, he can only get one \"worker\". Presumable he needs to\ndisable autovac for those two very active tables and setup a cron job to\nprocess them in their own schedule.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 24 Mar 2010 00:20:23 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning auto vacuum for highly active tables"
},
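To make the cron-driven approach concrete — a sketch only, with made-up table names; on 8.2 any per-table autovacuum overrides live in the pg_autovacuum catalog, which is worth reviewing before disabling anything:

    -- run every few minutes from cron (e.g. via psql -c) against the two hot tables;
    -- VERBOSE shows how many dead row versions each pass removes:
    VACUUM VERBOSE ANALYZE hot_table_1;
    VACUUM VERBOSE ANALYZE hot_table_2;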
{
"msg_contents": "Alvaro Herrera escribi�:\n\n> Also, keep in mind that max_workers is a new setting in 8.3. Since the\n> OP is running 8.2, he can only get one \"worker\". Presumable he needs to\n> disable autovac for those two very active tables and setup a cron job to\n> process them in their own schedule.\n\nErr, sorry, \"she\", not \"he\".\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 24 Mar 2010 00:22:57 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning auto vacuum for highly active tables"
}
] |
[
{
"msg_contents": "I would greatly appreciate any advice anyone could give me in terms of\nperformance tuning a large PL/PGSQL stored procedure. First, I should point\nout that I have read a considerable amount of information in the online\nPostgreSQL documentation and on Google about optimizing SQL queries and\nPostgreSQL. I am looking for any additional insights that my research may\nhave overlooked. So, let me explain a little about how this stored procedure\nis constructed.\n\nThe stored procedure is written in PL/PGSQL and is 3,000+ lines long. It\nworks with around 60 tables and a dozen or so complex types that are defined\nin an additional 2,000 lines of SQL.\n\nThe procedure takes individual arguments of various types as input\nparameters and returns a single row result of a complex type.\n\nThe complex type contains roughly 25 fields, mostly text, plus another 10\nREFCURSORs.\n\nThe application that calls the stored procedure was also written by me in\nC++ and uses asynchronous libpq API commands to execute a single SQL\ntransaction which calls the stored procedure and also performs a FETCH ALL\non all open cursors. It then returns all results into various structures.\nAll rows of all cursors that are open are always used for every call to the\nstored procedure.\n\nThe stored procedure implements various logic which determines which tables\nin the database to query and how to filter the results from those queries to\nreturn only the relevant information needed by the C++ application.\n\nCurrently, in terms of optimization, I have taken the following approaches\nbased on the following reasoning:\n\n1. For all queries whose results need to return to the C++ application, I\nutilize cursors so that all results can be readied and generated by the\nstored procedure with just one call to the PostgreSQL backend. I accomplish\nthis using asynchronous libpq API calls to issue a single transaction to the\nserver. The first command after the BEGIN is a SELECT * FROM\nMyStoredProc(blah), which is then followed by FETCH ALL commands for each\ncursor that the stored procedure leaves open. I then follow up with multiple\nAPI calls to return the results and retrieve the rows from those results.\nThis minimizes the amount of back-and-forth between my C++ application and\nthe database backend.\n\n1a. Incidentally, I am also using cursors for most queries inside the stored\nprocedure that do not return results to the C++ application. I am unsure\nwhether this incurs a performance penalty compared to doing, for example, a\nSELECT ... INTO (var1, var2, ...) within the body of the stored procedure.\nInstead of using SELECT ... INTO, I am using OPEN cursor_name; FETCH\ncursor_name INTO (var1, var2).\n\n2. I have built indexes on all columns that are used in where clauses and\njoins.\n\n3. I use lots of joins to pull in data from various tables (there are around\n60 tables that are queried with each call to the stored procedure).\n\n4. When performing joins, the first table listed is the one that returns the\nmost context-specific results, which always also means that it has the\nmost-specific and fewest number of relevant rows. I then join them in order\nof least number of result rows with all inner joins preceding left outer\njoins.\n\n5. Instead of using UNIONs and EXCEPT clauses, I use multiple WITH clauses\nto define several different query-specific views. 
I order them such that I\ncan join additional tables in later WITH clauses to the views created\npreviously in a way that minimizes the number of rows involved in the JOIN\noperations while still providing provably accurate result sets. The EXCEPT\nclauses are then replaced by also defining one view which contains a set of\nIDs that I want filtered from the final result set and using a WHERE id NOT\nIN (SELECT id FROM filtered_view). Typically, this approach leaves me with\njust one UNION of two previously defined views (the union is required\nbecause it is returning results from different tables with different\ncolumns), which is then aliased and joined to additional tables. This allows\nall of the the JOINS and the sole remaining UNION to be applied just once\neach in calculation of the final result set. As an example, two of the\nqueries I replaced with this approach utilized four UNIONs followed by two\nEXCEPT clauses, and each of those utilized as many as 8 JOINs in building\ntheir result sets. In one case the query dropped from 173 \"explain analyze\"\nlines to 71 \"explain analyze\" lines and dropped from 1.2ms execution time to\n0.49ms execution time. The other query started at 136 \"explain analyze\"\nlines and dropped to 66 \"explain analyze\" lines. It's execution time dropped\nfrom 1.6ms to 0.66ms. This is due to the fact that each WITH clause (and the\nJOINS/UNIONS contained in them) are executed just once for each query and\ncan be used multiple times later. In addition, filters can be applied to the\nindividual result sets for each WITH clause which reduces the number of rows\nbeing worked on during later JOIN and filtering operations.\n\n6. I specify individual columns that are returned for nearly every query\nutilized in the stored procedure.\n\n7. When I have a query I need to execute whose results will be used in\nseveral other queries, I currently open the cursor for that query using the\nFOR ... LOOP construct to retrieve all records in the result set and build a\nresult array using the array_append() method. I then do an unnest(my_array)\nAS blah inside the other queries where I need to use the results so that\nthey do not need to be re-computed for each query. I am unsure about how\nefficient this method is, and I was wondering if there is some way to create\na view inside a stored procedure that could be used instead. In each of the\ncases where I do this, the results from the set must be returned via an open\ncursor to my C++ application as it also requires the results from these\nparticular queries.\n\n\nSome things to note:\n\nFor most of the joins, they simply join on foreign key IDs and no additional\nfiltering criteria are used on their information. Only a handful of the\njoined tables bring in additional criteria by which to filter the result\nset.\n\nThe approach used in 7 with cursors and building a result array which is\nthen unnested has me worried in terms of performance. It seems to me there\nshould be some better way to accomplish the same thing.\n\nThe stored procedure does not perform updates or inserts, only selects.\n\n\nAnyway, if anyone has some insights into performance tweaks or new\napproaches I might try that may lead to enhanced performance, I would\ngreatly appreciate hearing about them. I am not completely dissatisfied with\nthe performance of the stored procedure, but this is going to be used in a\nvery high volume environment (hundreds or possibly even thousands of calls\nto this stored procedure every second). 
The more performant it is, the less\nhardware I need to deploy. It currently takes about 45ms to execute the\nquery and retrieve all of the results into the C++ application. Query\nexecution time takes up about 16ms of that 45ms. This is on a 3-year old\nCore 2 Duo, so it's not exactly top-of-the-line hardware.\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero",
"msg_date": "Tue, 23 Mar 2010 17:00:23 -0400",
"msg_from": "Eliot Gable <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Tuning Large PL/PGSQL Stored Procedure"
},
{
"msg_contents": "On Tue, Mar 23, 2010 at 5:00 PM, Eliot Gable\n<[email protected]> wrote:\n> The complex type contains roughly 25 fields, mostly text, plus another 10\n> REFCURSORs.\n\nHow many rows min/max/avg are coming back in your refcursors? Are you\nusing cursors in order to return multiple complex data structures\n(sets, etc) in a single function call?\n\n> The application that calls the stored procedure was also written by me in\n> C++ and uses asynchronous libpq API commands to execute a single SQL\n> transaction which calls the stored procedure and also performs a FETCH ALL\n> on all open cursors. It then returns all results into various structures.\n> All rows of all cursors that are open are always used for every call to the\n> stored procedure.\n>\n> The stored procedure implements various logic which determines which tables\n> in the database to query and how to filter the results from those queries to\n> return only the relevant information needed by the C++ application.\n>\n> Currently, in terms of optimization, I have taken the following approaches\n> based on the following reasoning:\n>\n> 1. For all queries whose results need to return to the C++ application, I\n> utilize cursors so that all results can be readied and generated by the\n> stored procedure with just one call to the PostgreSQL backend. I accomplish\n> this using asynchronous libpq API calls to issue a single transaction to the\n> server. The first command after the BEGIN is a SELECT * FROM\n> MyStoredProc(blah), which is then followed by FETCH ALL commands for each\n> cursor that the stored procedure leaves open. I then follow up with multiple\n> API calls to return the results and retrieve the rows from those results.\n> This minimizes the amount of back-and-forth between my C++ application and\n> the database backend.\n>\n> 1a. Incidentally, I am also using cursors for most queries inside the stored\n> procedure that do not return results to the C++ application. I am unsure\n> whether this incurs a performance penalty compared to doing, for example, a\n> SELECT ... INTO (var1, var2, ...) within the body of the stored procedure.\n> Instead of using SELECT ... INTO, I am using OPEN cursor_name; FETCH\n> cursor_name INTO (var1, var2).\n>\n> 2. I have built indexes on all columns that are used in where clauses and\n> joins.\n>\n> 3. I use lots of joins to pull in data from various tables (there are around\n> 60 tables that are queried with each call to the stored procedure).\n>\n> 4. When performing joins, the first table listed is the one that returns the\n> most context-specific results, which always also means that it has the\n> most-specific and fewest number of relevant rows. I then join them in order\n> of least number of result rows with all inner joins preceding left outer\n> joins.\n>\n> 5. Instead of using UNIONs and EXCEPT clauses, I use multiple WITH clauses\n> to define several different query-specific views. I order them such that I\n> can join additional tables in later WITH clauses to the views created\n\nWITH clauses can make your queries much easier to read and yield great\nspeedups if you need to access the table expression multiple times\nfrom other parts of the query. however, in some cases you can get\ninto trouble because a standard set of joins is going to give the\nplanner the most flexibility in terms of query optimization.\n\n> previously in a way that minimizes the number of rows involved in the JOIN\n> operations while still providing provably accurate result sets. 
The EXCEPT\n> clauses are then replaced by also defining one view which contains a set of\n> IDs that I want filtered from the final result set and using a WHERE id NOT\n> IN (SELECT id FROM filtered_view). Typically, this approach leaves me with\n> just one UNION of two previously defined views (the union is required\n\n\nUNION is always an optimization target (did you mean UNION ALL?)\n\n> 7. When I have a query I need to execute whose results will be used in\n> several other queries, I currently open the cursor for that query using the\n> FOR ... LOOP construct to retrieve all records in the result set and build a\n> result array using the array_append() method. I then do an unnest(my_array)\n\ndo not use array_append. always do array(select ...) whenever it is\npossible. when it isn't, rethink your problem until it is possible.\nonly exception is to use array_agg aggregate if your problem really is\nan aggregation type of thing. as a matter of fact, any for...loop is\nan optimization target because a re-think will probably yield a query\nthat does the same thing without paying for the loop.\n\n>\n> For most of the joins, they simply join on foreign key IDs and no additional\n> filtering criteria are used on their information. Only a handful of the\n> joined tables bring in additional criteria by which to filter the result\n> set.\n>\n> The approach used in 7 with cursors and building a result array which is\n> then unnested has me worried in terms of performance. It seems to me there\n> should be some better way to accomplish the same thing.\n>\n> The stored procedure does not perform updates or inserts, only selects.\n>\n>\n> Anyway, if anyone has some insights into performance tweaks or new\n> approaches I might try that may lead to enhanced performance, I would\n> greatly appreciate hearing about them. I am not completely dissatisfied with\n> the performance of the stored procedure, but this is going to be used in a\n> very high volume environment (hundreds or possibly even thousands of calls\n> to this stored procedure every second). The more performant it is, the less\n> hardware I need to deploy. It currently takes about 45ms to execute the\n> query and retrieve all of the results into the C++ application. Query\n> execution time takes up about 16ms of that 45ms. This is on a 3-year old\n> Core 2 Duo, so it's not exactly top-of-the-line hardware.\n\nIf you are chasing milliseconds, using C/C++, and dealing with complex\ndata structures coming in/out of the database, I would absolutely\nadvise you to check out the libpqtypes library (disclaimer, I co-wrote\nit!) in order to speed up data transfer. The library is highly\noptimized and facilitates all transfers in binary which yields good\ngains when sending types which are expensive to hammer to text (bytea,\ntimestamp, etc).\n\nIn addition, using libpqtypes you can use arrays of composites (in\n8.3+) to send/receive complex structures (even trees, etc) and pull\nthe entire set of data in a single query. This is an alternative to\nthe refcursor/fetch method which involves extra round trips and has\nother problems (but is the way to go if you need to progressive fetch\nlarge amounts of data).\n\nAs a general tip, I suggest 'divide and conquer'. Sprinkle your\nprocedure with 'raise notice %', gettimeofday(); And record the time\nspent on the various steps of the execution. This will give better\nmeasurements then pulling pieces of the function out and running them\noutside with constants for the arguments. 
Identify the problem spots\nand direct your energies there.\n\nmerlin\n\nhttp://libpqtypes.esilo.com/\nhttp://pgfoundry.org/projects/libpqtypes/\n",
"msg_date": "Thu, 25 Mar 2010 22:00:15 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Tuning Large PL/PGSQL Stored Procedure"
},
{
"msg_contents": "On Thu, Mar 25, 2010 at 10:00 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Tue, Mar 23, 2010 at 5:00 PM, Eliot Gable\n> <[email protected] <egable%[email protected]>>\n> wrote:\n> > The complex type contains roughly 25 fields, mostly text, plus another 10\n> > REFCURSORs.\n>\n> How many rows min/max/avg are coming back in your refcursors? Are you\n> using cursors in order to return multiple complex data structures\n> (sets, etc) in a single function call?\n>\n>\nI think the largest number of rows is around 40. Most are substantially\nsmaller. However, most of them have about two dozen or more columns, and I\nhave already shortened the list of columns to the minimum possible. The\naverage number of rows is around 10, but the largest sets of rows also have\nthe most columns. I'm using the cursors in order to obtain multiple complex\ndata structures in a single function call.\n\n\n\n> > The application that calls the stored procedure was also written by me in\n> > C++ and uses asynchronous libpq API commands to execute a single SQL\n> > transaction which calls the stored procedure and also performs a FETCH\n> ALL\n> > on all open cursors. It then returns all results into various structures.\n> > All rows of all cursors that are open are always used for every call to\n> the\n> > stored procedure.\n> >\n> > The stored procedure implements various logic which determines which\n> tables\n> > in the database to query and how to filter the results from those queries\n> to\n> > return only the relevant information needed by the C++ application.\n> >\n> > Currently, in terms of optimization, I have taken the following\n> approaches\n> > based on the following reasoning:\n> >\n> > 1. For all queries whose results need to return to the C++ application, I\n> > utilize cursors so that all results can be readied and generated by the\n> > stored procedure with just one call to the PostgreSQL backend. I\n> accomplish\n> > this using asynchronous libpq API calls to issue a single transaction to\n> the\n> > server. The first command after the BEGIN is a SELECT * FROM\n> > MyStoredProc(blah), which is then followed by FETCH ALL commands for each\n> > cursor that the stored procedure leaves open. I then follow up with\n> multiple\n> > API calls to return the results and retrieve the rows from those results.\n> > This minimizes the amount of back-and-forth between my C++ application\n> and\n> > the database backend.\n> >\n> > 1a. Incidentally, I am also using cursors for most queries inside the\n> stored\n> > procedure that do not return results to the C++ application. I am unsure\n> > whether this incurs a performance penalty compared to doing, for example,\n> a\n> > SELECT ... INTO (var1, var2, ...) within the body of the stored\n> procedure.\n> > Instead of using SELECT ... INTO, I am using OPEN cursor_name; FETCH\n> > cursor_name INTO (var1, var2).\n> >\n> > 2. I have built indexes on all columns that are used in where clauses and\n> > joins.\n> >\n> > 3. I use lots of joins to pull in data from various tables (there are\n> around\n> > 60 tables that are queried with each call to the stored procedure).\n> >\n> > 4. When performing joins, the first table listed is the one that returns\n> the\n> > most context-specific results, which always also means that it has the\n> > most-specific and fewest number of relevant rows. I then join them in\n> order\n> > of least number of result rows with all inner joins preceding left outer\n> > joins.\n> >\n> > 5. 
Instead of using UNIONs and EXCEPT clauses, I use multiple WITH\n> clauses\n> > to define several different query-specific views. I order them such that\n> I\n> > can join additional tables in later WITH clauses to the views created\n>\n> WITH clauses can make your queries much easier to read and yield great\n> speedups if you need to access the table expression multiple times\n> from other parts of the query. however, in some cases you can get\n> into trouble because a standard set of joins is going to give the\n> planner the most flexibility in terms of query optimization.\n>\n>\nSo far, every case I have converted to WITH clauses has resulted in more\nthan double the speed (half the time required to perform the query). The\nmain reason appears to be from avoiding calculating JOIN conditions multiple\ntimes in different parts of the query due to the UNION and EXCEPT clauses.\n\n\n> > previously in a way that minimizes the number of rows involved in the\n> JOIN\n> > operations while still providing provably accurate result sets. The\n> EXCEPT\n> > clauses are then replaced by also defining one view which contains a set\n> of\n> > IDs that I want filtered from the final result set and using a WHERE id\n> NOT\n> > IN (SELECT id FROM filtered_view). Typically, this approach leaves me\n> with\n> > just one UNION of two previously defined views (the union is required\n>\n>\n> UNION is always an optimization target (did you mean UNION ALL?)\n>\n>\nThanks for the suggestion on UNION ALL; I indeed do not need elimination of\nduplicates, so UNION ALL is a better option.\n\n\n> > 7. When I have a query I need to execute whose results will be used in\n> > several other queries, I currently open the cursor for that query using\n> the\n> > FOR ... LOOP construct to retrieve all records in the result set and\n> build a\n> > result array using the array_append() method. I then do an\n> unnest(my_array)\n>\n> do not use array_append. always do array(select ...) whenever it is\n> possible. when it isn't, rethink your problem until it is possible.\n> only exception is to use array_agg aggregate if your problem really is\n> an aggregation type of thing. as a matter of fact, any for...loop is\n> an optimization target because a re-think will probably yield a query\n> that does the same thing without paying for the loop.\n>\n>\nI suspected it was a performance issue. I will see if I can find an\nalternative way of doing it. Based on your feedback, I think I may know how\nto do it now.\n\n\n> >\n> > For most of the joins, they simply join on foreign key IDs and no\n> additional\n> > filtering criteria are used on their information. Only a handful of the\n> > joined tables bring in additional criteria by which to filter the result\n> > set.\n> >\n> > The approach used in 7 with cursors and building a result array which is\n> > then unnested has me worried in terms of performance. It seems to me\n> there\n> > should be some better way to accomplish the same thing.\n> >\n> > The stored procedure does not perform updates or inserts, only selects.\n> >\n> >\n> > Anyway, if anyone has some insights into performance tweaks or new\n> > approaches I might try that may lead to enhanced performance, I would\n> > greatly appreciate hearing about them. I am not completely dissatisfied\n> with\n> > the performance of the stored procedure, but this is going to be used in\n> a\n> > very high volume environment (hundreds or possibly even thousands of\n> calls\n> > to this stored procedure every second). 
The more performant it is, the\n> less\n> > hardware I need to deploy. It currently takes about 45ms to execute the\n> > query and retrieve all of the results into the C++ application. Query\n> > execution time takes up about 16ms of that 45ms. This is on a 3-year old\n> > Core 2 Duo, so it's not exactly top-of-the-line hardware.\n>\n> If you are chasing milliseconds, using C/C++, and dealing with complex\n> data structures coming in/out of the database, I would absolutely\n> advise you to check out the libpqtypes library (disclaimer, I co-wrote\n> it!) in order to speed up data transfer. The library is highly\n> optimized and facilitates all transfers in binary which yields good\n> gains when sending types which are expensive to hammer to text (bytea,\n> timestamp, etc).\n>\n\nThe data returned from the application is just rows of strings, numbers\n(ints and doubles), and booleans. Would I see a good speedup with libpqtypes\nwhen dealing with those data types?\n\n\n> In addition, using libpqtypes you can use arrays of composites (in\n> 8.3+) to send/receive complex structures (even trees, etc) and pull\n> the entire set of data in a single query. This is an alternative to\n> the refcursor/fetch method which involves extra round trips and has\n> other problems (but is the way to go if you need to progressive fetch\n> large amounts of data).\n>\n\nSo, you are saying that I can return a complex type as a result which\ncontains arrays of other complex types and just use my single SELECT command\nto retrieve the whole data set? That would be much simpler and I imagine\nmust faster.\n\n\n> As a general tip, I suggest 'divide and conquer'. Sprinkle your\n> procedure with 'raise notice %', gettimeofday(); And record the time\n> spent on the various steps of the execution. This will give better\n> measurements then pulling pieces of the function out and running them\n> outside with constants for the arguments. Identify the problem spots\n> and direct your energies there.\n>\n\nAs a matter of fact, I have a callback messaging function set up so that my\nRAISE NOTICE commands call back to my C++ program and generate log messages.\nThe log messages show date + time + microseconds, so I can see how long it\ntakes to go through each part.\n\nI really am chasing milliseconds here, and I appreciate all your feedback.\nYou've given me a relatively large number of possible optimizations I can\ntry out. I will definitely try out the libpqtypes. That sounds like a\npromising way to further cut down on execution time. I think most of my\nperformance penalty is in transfering the results back to the C++\napplication.\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nOn Thu, Mar 25, 2010 at 10:00 PM, Merlin Moncure <[email protected]> wrote:\nOn Tue, Mar 23, 2010 at 5:00 PM, Eliot Gable\n<[email protected]> wrote:\n> The complex type contains roughly 25 fields, mostly text, plus another 10\n> REFCURSORs.\n\nHow many rows min/max/avg are coming back in your refcursors? Are you\nusing cursors in order to return multiple complex data structures\n(sets, etc) in a single function call?\nI think the largest number of rows is around 40. 
Most are substantially smaller. However, most of them have about two dozen or more columns, and I have already shortened the list of columns to the minimum possible. The average number of rows is around 10, but the largest sets of rows also have the most columns. I'm using the cursors in order to obtain multiple complex data structures in a single function call.\n \n> The application that calls the stored procedure was also written by me in\n> C++ and uses asynchronous libpq API commands to execute a single SQL\n> transaction which calls the stored procedure and also performs a FETCH ALL\n> on all open cursors. It then returns all results into various structures.\n> All rows of all cursors that are open are always used for every call to the\n> stored procedure.\n>\n> The stored procedure implements various logic which determines which tables\n> in the database to query and how to filter the results from those queries to\n> return only the relevant information needed by the C++ application.\n>\n> Currently, in terms of optimization, I have taken the following approaches\n> based on the following reasoning:\n>\n> 1. For all queries whose results need to return to the C++ application, I\n> utilize cursors so that all results can be readied and generated by the\n> stored procedure with just one call to the PostgreSQL backend. I accomplish\n> this using asynchronous libpq API calls to issue a single transaction to the\n> server. The first command after the BEGIN is a SELECT * FROM\n> MyStoredProc(blah), which is then followed by FETCH ALL commands for each\n> cursor that the stored procedure leaves open. I then follow up with multiple\n> API calls to return the results and retrieve the rows from those results.\n> This minimizes the amount of back-and-forth between my C++ application and\n> the database backend.\n>\n> 1a. Incidentally, I am also using cursors for most queries inside the stored\n> procedure that do not return results to the C++ application. I am unsure\n> whether this incurs a performance penalty compared to doing, for example, a\n> SELECT ... INTO (var1, var2, ...) within the body of the stored procedure.\n> Instead of using SELECT ... INTO, I am using OPEN cursor_name; FETCH\n> cursor_name INTO (var1, var2).\n>\n> 2. I have built indexes on all columns that are used in where clauses and\n> joins.\n>\n> 3. I use lots of joins to pull in data from various tables (there are around\n> 60 tables that are queried with each call to the stored procedure).\n>\n> 4. When performing joins, the first table listed is the one that returns the\n> most context-specific results, which always also means that it has the\n> most-specific and fewest number of relevant rows. I then join them in order\n> of least number of result rows with all inner joins preceding left outer\n> joins.\n>\n> 5. Instead of using UNIONs and EXCEPT clauses, I use multiple WITH clauses\n> to define several different query-specific views. I order them such that I\n> can join additional tables in later WITH clauses to the views created\n\nWITH clauses can make your queries much easier to read and yield great\nspeedups if you need to access the table expression multiple times\nfrom other parts of the query. however, in some cases you can get\ninto trouble because a standard set of joins is going to give the\nplanner the most flexibility in terms of query optimization.\nSo far, every case I have converted to WITH clauses has resulted in more than double the speed (half the time required to perform the query). 
The main reason appears to be from avoiding calculating JOIN conditions multiple times in different parts of the query due to the UNION and EXCEPT clauses.\n \n> previously in a way that minimizes the number of rows involved in the JOIN\n> operations while still providing provably accurate result sets. The EXCEPT\n> clauses are then replaced by also defining one view which contains a set of\n> IDs that I want filtered from the final result set and using a WHERE id NOT\n> IN (SELECT id FROM filtered_view). Typically, this approach leaves me with\n> just one UNION of two previously defined views (the union is required\n\n\nUNION is always an optimization target (did you mean UNION ALL?)\nThanks for the suggestion on UNION ALL; I indeed do not need elimination of duplicates, so UNION ALL is a better option. \n\n> 7. When I have a query I need to execute whose results will be used in\n> several other queries, I currently open the cursor for that query using the\n> FOR ... LOOP construct to retrieve all records in the result set and build a\n> result array using the array_append() method. I then do an unnest(my_array)\n\ndo not use array_append. always do array(select ...) whenever it is\npossible. when it isn't, rethink your problem until it is possible.\nonly exception is to use array_agg aggregate if your problem really is\nan aggregation type of thing. as a matter of fact, any for...loop is\nan optimization target because a re-think will probably yield a query\nthat does the same thing without paying for the loop.\nI suspected it was a performance issue. I will see if I can find an alternative way of doing it. Based on your feedback, I think I may know how to do it now. \n\n>\n> For most of the joins, they simply join on foreign key IDs and no additional\n> filtering criteria are used on their information. Only a handful of the\n> joined tables bring in additional criteria by which to filter the result\n> set.\n>\n> The approach used in 7 with cursors and building a result array which is\n> then unnested has me worried in terms of performance. It seems to me there\n> should be some better way to accomplish the same thing.\n>\n> The stored procedure does not perform updates or inserts, only selects.\n>\n>\n> Anyway, if anyone has some insights into performance tweaks or new\n> approaches I might try that may lead to enhanced performance, I would\n> greatly appreciate hearing about them. I am not completely dissatisfied with\n> the performance of the stored procedure, but this is going to be used in a\n> very high volume environment (hundreds or possibly even thousands of calls\n> to this stored procedure every second). The more performant it is, the less\n> hardware I need to deploy. It currently takes about 45ms to execute the\n> query and retrieve all of the results into the C++ application. Query\n> execution time takes up about 16ms of that 45ms. This is on a 3-year old\n> Core 2 Duo, so it's not exactly top-of-the-line hardware.\n\nIf you are chasing milliseconds, using C/C++, and dealing with complex\ndata structures coming in/out of the database, I would absolutely\nadvise you to check out the libpqtypes library (disclaimer, I co-wrote\nit!) in order to speed up data transfer. The library is highly\noptimized and facilitates all transfers in binary which yields good\ngains when sending types which are expensive to hammer to text (bytea,\ntimestamp, etc).The data returned from the application is just rows of strings, numbers (ints and doubles), and booleans. 
Would I see a good speedup with libpqtypes when dealing with those data types?\n \nIn addition, using libpqtypes you can use arrays of composites (in\n8.3+) to send/receive complex structures (even trees, etc) and pull\nthe entire set of data in a single query. This is an alternative to\nthe refcursor/fetch method which involves extra round trips and has\nother problems (but is the way to go if you need to progressive fetch\nlarge amounts of data).So, you are saying that I can return a complex type as a result which contains arrays of other complex types and just use my single SELECT command to retrieve the whole data set? That would be much simpler and I imagine must faster.\n \nAs a general tip, I suggest 'divide and conquer'. Sprinkle your\nprocedure with 'raise notice %', gettimeofday(); And record the time\nspent on the various steps of the execution. This will give better\nmeasurements then pulling pieces of the function out and running them\noutside with constants for the arguments. Identify the problem spots\nand direct your energies there.As a matter of fact, I have a callback messaging function set up so that my RAISE NOTICE commands call back to my C++ program and generate log messages. The log messages show date + time + microseconds, so I can see how long it takes to go through each part. \nI really am chasing milliseconds here, and I appreciate all your feedback. You've given me a relatively large number of possible optimizations I can try out. I will definitely try out the libpqtypes. That sounds like a promising way to further cut down on execution time. I think most of my performance penalty is in transfering the results back to the C++ application. \n-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \"I decided the words were too conservative for me. We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero",
"msg_date": "Thu, 25 Mar 2010 23:56:35 -0400",
"msg_from": "Eliot Gable <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Tuning Large PL/PGSQL Stored Procedure"
},
{
"msg_contents": "On Thu, Mar 25, 2010 at 11:56 PM, Eliot Gable\n<[email protected]> wrote:\n>>\n>> How many rows min/max/avg are coming back in your refcursors? Are you\n>> using cursors in order to return multiple complex data structures\n>> (sets, etc) in a single function call?\n>>\n>\n> I think the largest number of rows is around 40. Most are substantially\n> smaller. However, most of them have about two dozen or more columns, and I\n> have already shortened the list of columns to the minimum possible. The\n> average number of rows is around 10, but the largest sets of rows also have\n> the most columns. I'm using the cursors in order to obtain multiple complex\n> data structures in a single function call.\n\nok, small sets. yes, passing them back to the client as arrays is\nprobably going to be faster. It's a trivial change to your proc. you\nhave to define a type for your array element the way we are going to\nuse it. you can use a composite type or a table (I prefer a table).\n\ncreate table mystuff_t\n(\n a text,\n b int,\n c timestamptz\n);\n\ncreate function myproc([...], mystuffs out mystuff_t[])\n[inside proc]\n\nreplace your cursor declaration with this:\n\nselect array\n(\n select (a,b,c)::mystuff_t from [...]\n) into mystuffs;\n\ncode an alternate version of the function and then inside libpq\nexecute the query in binary and discard the results, timing the\nresults and comparing to how you run your query now also discarding\nthe results. we want to time it this way because from timing it from\npsql includes the time to print out the array in text format which we\ncan avoid with libpqtypes (which we are not going to mess with, until\nwe know there is a resaon to go in this direction). We do need to\ninclude the time to turn around and fetch the data from the\nrefcursors. If you see at least a 10-20% improvement, it warrants\nfurther effort IMO (and say goodbye to refcursors forever).\n\n>> WITH clauses can make your queries much easier to read and yield great\n>> speedups if you need to access the table expression multiple times\n>> from other parts of the query. however, in some cases you can get\n>> into trouble because a standard set of joins is going to give the\n>> planner the most flexibility in terms of query optimization.\n>>\n>\n> So far, every case I have converted to WITH clauses has resulted in more\n> than double the speed (half the time required to perform the query). The\n> main reason appears to be from avoiding calculating JOIN conditions multiple\n> times in different parts of the query due to the UNION and EXCEPT clauses.\n\nI have a hard time believing that unless there are other factors\ncompromising the planner like bad statistics or a non optimal query or\nyou are dealing with a relatively special case.\n\n'EXCEPT' btw is also an optimization target. maybe think about\nconverting to 'letf join where rightcol is null' or something like\nthat. not 100% sure, I think some work was done recently on except so\nthis advice may not be as true as it used to be, and possibly moot if\nthe number of rows being considered by except is very small.\n\n> So, you are saying that I can return a complex type as a result which\n> contains arrays of other complex types and just use my single SELECT command\n> to retrieve the whole data set? 
That would be much simpler and I imagine\n> must faster.\n\nyes, however you will want to receive as few complex types as\npossible, meaning your result set should still have multiple columns.\nreducing the number of columns is not an optimization target. in\nother words, do the minimal amount of stacking necessary to allow\nsingle query extraction of data.\n\n> I really am chasing milliseconds here, and I appreciate all your feedback.\n> You've given me a relatively large number of possible optimizations I can\n> try out. I will definitely try out the libpqtypes. That sounds like a\n> promising way to further cut down on execution time. I think most of my\n> performance penalty is in transfering the results back to the C++\n> application.\n\nyes. I've suggested libpqtypes to a number of people on the lists,\nand you are what i'd consider the ideal candidate. libpqtypes will\ncompletely transform the way you think about postgresql and libpq.\ngood luck. if you need help setting it up you can email me privately\nor on the libpqtypes list.\n\nmerlin\n",
"msg_date": "Fri, 26 Mar 2010 08:06:49 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Tuning Large PL/PGSQL Stored Procedure"
},
{
"msg_contents": "On 26/03/10 03:56, Eliot Gable wrote:\n>\n> I really am chasing milliseconds here, and I appreciate all your feedback.\n> You've given me a relatively large number of possible optimizations I can\n> try out. I will definitely try out the libpqtypes. That sounds like a\n> promising way to further cut down on execution time. I think most of my\n> performance penalty is in transfering the results back to the C++\n> application.\n\nIn addition to all of Merlin's good advice, if the client is on a \ndifferent machine to the server then try sticking wireshark or similar \nonto the connection. That should make it pretty clear where the main \ncosts are in getting your data back.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 26 Mar 2010 12:18:25 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Tuning Large PL/PGSQL Stored Procedure"
}
] |
[
{
"msg_contents": "PostgreSQL 8.4.3\n\nLinux Redhat 5.0\n\n \n\nQuestion: How much memory do I really need?\n\n \n\n From my understanding there are two primary strategies for setting up\nPostgreSQL in relationship to memory:\n\n \n\n1) Rely on Linux to cache the files. In this approach you set the\nshared_buffers to a relatively low number. \n\n2) You can set shared_buffers to a very large percentage of your\nmemory so that PostgreSQL reserves the memory for the database.\n\n \n\nI am currently using option #1. I have 24 Gig of memory on my server\nand the database takes up 17 Gig of disk space. When I do the Linux\ncommand \"top\" I notice that 19 Gig is allocated for cache. Is there a\nway for me to tell how much of that cache is associated with the caching\nof database files?\n\n \n\nI am basically asking how much memory do I really need? Maybe I have\ncomplete over kill. Maybe I am getting to a point where I might need\nmore memory.\n\n \n\nMy thought was I could use option #2 and then set the number to a lower\namount. If the performance is bad then slowly work the number up.\n\n \n\nOur server manager seems to think that I have way to much memory. He\nthinks that we only need 5 Gig. I don't really believe that. But I\nwant to cover myself. With money tight I don't want to be the person\nwho is wasting resources. We need to replace our database servers so I\nwant to do the right thing.\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nSoftware Architect/DBA/Project Manager\n\nWeb Services at Public Affairs\n\n217-333-0382\n\n \n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL 8.4.3\nLinux Redhat 5.0\n \nQuestion: How much memory do I really need?\n \nFrom my understanding there are two primary strategies for\nsetting up PostgreSQL in relationship to memory:\n \n1) \nRely on Linux to cache the files. In this\napproach you set the shared_buffers to a relatively low number. \n2) \nYou can set shared_buffers to a very large percentage\nof your memory so that PostgreSQL reserves the memory for the database.\n \nI am currently using option #1. I have 24 Gig of\nmemory on my server and the database takes up 17 Gig of disk space. When\nI do the Linux command “top” I notice that 19 Gig is allocated for\ncache. Is there a way for me to tell how much of that cache is associated\nwith the caching of database files?\n \nI am basically asking how much memory do I really\nneed? Maybe I have complete over kill. Maybe I am getting to a\npoint where I might need more memory.\n \nMy thought was I could use option #2 and then set the number\nto a lower amount. If the performance is bad then slowly work the number\nup.\n \nOur server manager seems to think that I have way to much\nmemory. He thinks that we only need 5 Gig. I don’t really\nbelieve that. But I want to cover myself. With money tight I don’t\nwant to be the person who is wasting resources. We need to replace our\ndatabase servers so I want to do the right thing.\n \nThanks,\n \nLance Campbell\nSoftware Architect/DBA/Project Manager\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Wed, 24 Mar 2010 19:49:10 -0500",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "memory question"
},
{
"msg_contents": "On Wed, Mar 24, 2010 at 6:49 PM, Campbell, Lance <[email protected]> wrote:\n> PostgreSQL 8.4.3\n>\n> Linux Redhat 5.0\n>\n> Question: How much memory do I really need?\n\nThe answer is \"as much as needed to hold your entire database in\nmemory and a few gig left over for sorts and backends to play in.\"\n\n> From my understanding there are two primary strategies for setting up\n> PostgreSQL in relationship to memory:\n>\n>\n>\n> 1) Rely on Linux to cache the files. In this approach you set the\n> shared_buffers to a relatively low number.\n>\n> 2) You can set shared_buffers to a very large percentage of your memory\n> so that PostgreSQL reserves the memory for the database.\n\nThe kernel is better at caching large amounts of memory. Pg is better\nat handling somewhat smaller amounts and not flushing out random\naccess data for sequential access data.\n\n> I am currently using option #1. I have 24 Gig of memory on my server and\n> the database takes up 17 Gig of disk space. When I do the Linux command\n> “top” I notice that 19 Gig is allocated for cache. Is there a way for me to\n> tell how much of that cache is associated with the caching of database\n> files?\n\nProbably nearly all of that 19G for cache is allocated for pg files.\nNot sure how to tell off the top of my head though.\n\n> I am basically asking how much memory do I really need? Maybe I have\n> complete over kill. Maybe I am getting to a point where I might need more\n> memory.\n\nActually, there are three levels of caching that are possible. 1:\nEntire db, tables and indexes, can fit in RAM. This is the fastest\nmethod. Worth the extra $ for RAM if you can afford it / db isn't too\nhuge. 2: Indexes can fit in RAM, some of tables can. Still pretty\nfast. Definitely worth paying a little extra for. 3: Neither indexes\nnor tables can wholly fit in RAM. At this point the speed of your\nlarge disk array becomes important, and you want a fast cachine RAID\ncontroller. Both of these items (disk array and RAID controller) are\nconsiderably more costly than 16 or 32 Gigs of RAM.\n\n> My thought was I could use option #2 and then set the number to a lower\n> amount. If the performance is bad then slowly work the number up.\n\nI'm not sure what you mean. Install less RAM and let PG do all the\ncaching? Usually a bad idea. Usually. I'm sure there are use cases\nthat it might be a good idea on. But keep in mind, a large amount of\nshared_buffers doesn't JUST buffer your reads, it also results in a\nmuch large memory space to keep track of in terms of things that need\nto get written out etc. I'm actually about to reduce the\nshared_buffers from 8G on one reporting server down to 1 or 2G cause\nthat's plenty, and it's having a hard time keeping up with the huge\ncheckpoints it's having to do.\n\n> Our server manager seems to think that I have way to much memory. He thinks\n> that we only need 5 Gig.\n\nHow much do you absolutely need to boot up, run postgresql, and not\nrun out of memory? That's what you \"need\" and it's probably around\n1Gig. It's just no less arbitraty than 5G. Did he show you how he\narrived at this number? If your DB is 17Gig on disk, it's foolish to\nbe cheap on memory.\n\n> I don’t really believe that. But I want to cover\n> myself. With money tight I don’t want to be the person who is wasting\n> resources. 
We need to replace our database servers so I want to do the\n> right thing.\n\nYou can waste your time (valuable but sunk cost) other people's time\n(more valuable, also sunk cost) or \"waste\" a few dollars on memory.\n24Gig isn't that expensive really compared to say 10 seconds per\ntransaction for 100 users, 1000 times a day. Or 11 user days in a\nsingle day. 10s of seconds start to add up.\n",
"msg_date": "Wed, 24 Mar 2010 20:06:10 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory question"
},
{
"msg_contents": "What Scott said ... seconded, all of it.\n\nI'm running one 500GB database on a 64-bit, 8GB VMware virtual machine, with\n2 vcores, PG 8.3.9 with shared_buffers set to 2GB, and it works great.\nHowever, it's a modest workload, most of the database is archival for data\nmining, and the \"working set\" for routine OLTP is pretty modest and easily\nfits in the 2GB, and it's back-ended on to a pretty decent EMC Clariion\nFibreChannel array. Not the typical case.\n\nFor physical x86 servers, brand name (e.g. Kingston) ECC memory is down to\n$25 per GB in 4GB DIMMs, and $36 per GB in 8GB DIMMs .... dollars to\ndoughnuts you have a server somewhere with 2GB or 4GB parts that can be\npulled and replaced with double the density, et voila, an extra 16GB of RAM\nfor about $500.\n\nLots and lots of RAM is absolutely, positively a no-brainer when trying to\nmake a DB go fast. If for no other reason than people get all starry eyed at\nGHz numbers, almost all computers tend to be CPU heavy and RAM light in\ntheir factory configs. I build a new little server for the house every 3-5\nyears, using desktop parts, and give it a mid-life upgrade with bigger\ndrives and doubling the RAM density.\n\nBig banks running huge Oracle OLTP setups use the strategy of essentially\nkeeping the whole thing in RAM .... HP shifts a lot of Superdome's maxed out\nwith 2TB of RAM into this market - and that RAM costs a lot more than $25 a\ngig ;-)\n\nCheers\nDave\n\nWhat Scott said ... seconded, all of it.I'm running one 500GB database on a 64-bit, 8GB VMware virtual machine, with 2 vcores, PG 8.3.9 with shared_buffers set to 2GB, and it works great. However, it's a modest workload, most of the database is archival for data mining, and the \"working set\" for routine OLTP is pretty modest and easily fits in the 2GB, and it's back-ended on to a pretty decent EMC Clariion FibreChannel array. Not the typical case.\nFor physical x86 servers, brand name (e.g. Kingston) ECC memory is down to $25 per GB in 4GB DIMMs, and $36 per GB in 8GB DIMMs .... dollars to doughnuts you have a server somewhere with 2GB or 4GB parts that can be pulled and replaced with double the density, et voila, an extra 16GB of RAM for about $500.\nLots and lots of RAM is absolutely, positively a no-brainer when trying to make a DB go fast. If for no other reason than people get all starry eyed at GHz numbers, almost all computers tend to be CPU heavy and RAM light in their factory configs. I build a new little server for the house every 3-5 years, using desktop parts, and give it a mid-life upgrade with bigger drives and doubling the RAM density.\nBig banks running huge Oracle OLTP setups use the strategy of essentially keeping the whole thing in RAM .... HP shifts a lot of Superdome's maxed out with 2TB of RAM into this market - and that RAM costs a lot more than $25 a gig ;-)\nCheersDave",
"msg_date": "Thu, 25 Mar 2010 01:28:24 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory question"
},
{
"msg_contents": "On Wed, 24 Mar 2010, Campbell, Lance wrote:\n> I have 24 Gig of memory on my server...\n>\n> Our server manager seems to think that I have way to much memory. He\n> thinks that we only need 5 Gig.\n\nYou organisation probably spent more money getting your server manager to \ninvestigate how much RAM you need and scaring you about wasting resources, \nthan it would cost to just slap 24GB in the machine.\n\n24GB is the least amount of RAM I would consider putting in a new server \nnowadays. It's so cheap.\n\nMatthew\n\n-- \n Lord grant me patience, and I want it NOW!\n",
"msg_date": "Thu, 25 Mar 2010 10:35:28 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory question"
}
] |
[
{
"msg_contents": "Hi All,\n\nCan anybody clarify on this, why wal_buffer is 64kb and what is advantages\nand disadvantages in increasing or decreasing the wal_buffer.\n\nRegards\nRaghav\n\nHi All,\n \nCan anybody clarify on this, why wal_buffer is 64kb and what is advantages and disadvantages in increasing or decreasing the wal_buffer.\n \nRegards\nRaghav",
"msg_date": "Thu, 25 Mar 2010 20:31:18 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why Wal_buffer is 64KB"
},
{
"msg_contents": "On Thu, 2010-03-25 at 20:31 +0530, Tadipathri Raghu wrote:\n> Hi All,\n> \n> Can anybody clarify on this, why wal_buffer is 64kb and what is\n> advantages and disadvantages in increasing or decreasing the\n> wal_buffer.\n\nThis is addressed in the documentation.\n\nhttp://www.postgresql.org/docs/8.4/interactive/wal-configuration.html\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Thu, 25 Mar 2010 12:05:47 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "On Thu, Mar 25, 2010 at 11:01 AM, Tadipathri Raghu <[email protected]> wrote:\n> Hi All,\n>\n> Can anybody clarify on this, why wal_buffer is 64kb and what is advantages\n> and disadvantages in increasing or decreasing the wal_buffer.\n>\n\nis 64kb just because by default we have low values in almost everything :)\nand the advantages is that if your average transaction is more than\n64kb large all wal data will be in memory until commit, actually i\nthing it should be large enough to accomodate more than one\ntransaction but i'm not sure about that one... i usually use 1Mb for\nOLTP systems\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n",
"msg_date": "Thu, 25 Mar 2010 13:15:58 -0400",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "\nIf you do large transactions, which emits large quantities of xlog, be \naware that while the previous xlog segment is being fsynced, no new writes \nhappen to the next segment. If you use large wal_buffers (more than 16 MB) \nthese buffers can absorb xlog data while the previous segment is being \nfsynced, which allows a higher throughput. However, large wal_buffers also \nmean the COMMIT of small transactions might find lots of data in the \nbuffers that noone has written/synced yet, which isn't good. If you use \ndedicated spindle(s) for the xlog, you can set the walwriter to be \nextremely aggressive (write every 5 ms for instance) and use fdatasync. \nThis way, at almost every rotation of the disk, xlog gets written. I've \nfound this configuration gives increased throughput, while not \ncompromising latency, but you need to test it for yourself, it depends on \nyour whole system.\n",
"msg_date": "Thu, 25 Mar 2010 19:14:38 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "Hi Pierre,\n\nFirst of all , I Thank all for sharing the information on this Issue.\n\nOn Thu, Mar 25, 2010 at 11:44 PM, Pierre C <[email protected]> wrote:\n>\n\n> If you do large transactions, which emits large quantities of xlog, be\n> aware that while the previous xlog segment is being fsynced, no new writes\n> happen to the next segment. If you use large wal_buffers (more than 16 MB)\n> these buffers can absorb xlog data while the previous segment is being\n> fsynced, which allows a higher throughput. However, large wal_buffers also\n> mean the COMMIT of small transactions might find lots of data in the buffers\n> that noone has written/synced yet, which isn't good. If you use dedicated\n> spindle(s) for the xlog, you can set the walwriter to be extremely\n> aggressive (write every 5 ms for instance) and use fdatasync. This way, at\n> almost every rotation of the disk, xlog gets written. I've found this\n> configuration gives increased throughput, while not compromising latency,\n> but you need to test it for yourself, it depends on your whole system.\n\n\nSmall testing is done from my end. I have created a \"test\" table with one\nrow and done insertion into it(10,00,000- rows). I have turned off fsync and\nsyncronous_commit. I saw there is fast insert if i do so, but if i turn it\non then there is latency.\n\nBefore fsync / syncronous_commit on\n============================\npostgres=# explain analyze insert into test\nvalues(generate_series(1,1000000));\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.015..6293.674\nrows=1000000 loops=1)\n Total runtime: *37406.012 ms*\n(2 rows)\n\n\nAfter fsync/syncronous_commit off\n=========================\npostgres=# explain analyze insert into test\nvalues(generate_series(1,1000000));\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.154..5801.584\nrows=1000000 loops=1)\n Total runtime: *29378.626 ms\n*(2 rows)\n\nI request to know here is, what would be xlog files with wal_buffer. Does\nxlog will recycle or grow in creating one more for this particular\ntransaction. Could you explain here, when wal_buffer is 64kb which is very\nsmall, and everything is in xlog files written, so wt happens if we increase\nthe wal_buffer here?\n\nRegards\nRaghav\n\nHi Pierre,\n \nFirst of all , I Thank all for sharing the information on this Issue.\nOn Thu, Mar 25, 2010 at 11:44 PM, Pierre C <[email protected]> wrote:\n\nIf you do large transactions, which emits large quantities of xlog, be aware that while the previous xlog segment is being fsynced, no new writes happen to the next segment. If you use large wal_buffers (more than 16 MB) these buffers can absorb xlog data while the previous segment is being fsynced, which allows a higher throughput. However, large wal_buffers also mean the COMMIT of small transactions might find lots of data in the buffers that noone has written/synced yet, which isn't good. If you use dedicated spindle(s) for the xlog, you can set the walwriter to be extremely aggressive (write every 5 ms for instance) and use fdatasync. This way, at almost every rotation of the disk, xlog gets written. I've found this configuration gives increased throughput, while not compromising latency, but you need to test it for yourself, it depends on your whole system.\n \nSmall testing is done from my end. 
I have created a \"test\" table with one row and done insertion into it(10,00,000- rows). I have turned off fsync and syncronous_commit. I saw there is fast insert if i do so, but if i turn it on then there is latency. \n \nBefore fsync / syncronous_commit on\n============================\npostgres=# explain analyze insert into test values(generate_series(1,1000000)); QUERY PLAN---------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.015..6293.674 rows=1000000 loops=1) Total runtime: 37406.012 ms(2 rows)\n \nAfter fsync/syncronous_commit off\n=========================\npostgres=# explain analyze insert into test values(generate_series(1,1000000)); QUERY PLAN---------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.154..5801.584 rows=1000000 loops=1) Total runtime: 29378.626 ms(2 rows)\n \nI request to know here is, what would be xlog files with wal_buffer. Does xlog will recycle or grow in creating one more for this particular transaction. Could you explain here, when wal_buffer is 64kb which is very small, and everything is in xlog files written, so wt happens if we increase the wal_buffer here?\n \nRegards\nRaghav",
"msg_date": "Fri, 26 Mar 2010 08:49:21 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "\n> After fsync/syncronous_commit off\n\nDo not use fsync off, it is not safe. Who cares about the performance of \nfsync=off, when in practice you'd never use it with real data.\nsynchronnous_commit=off is fine for some applications, though.\n\nMore info is needed about your configuration (hardware, drives, memory, \netc).\n",
"msg_date": "Fri, 26 Mar 2010 14:43:45 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "On Fri, Mar 26, 2010 at 7:43 AM, Pierre C <[email protected]> wrote:\n>\n>> After fsync/syncronous_commit off\n>\n> Do not use fsync off, it is not safe. Who cares about the performance of\n> fsync=off, when in practice you'd never use it with real data.\n> synchronnous_commit=off is fine for some applications, though.\n\nThere are situations where it's ok, when all the data are\nreproduceable from other sources, etc. for instance I have a\nreporting server that is a slony slave that runs with fsync off. If\nit does crash and I can recreate the node in an hour or so and be back\nonline. With fsync off the machine is too slow to do its job, and\nit's not the primary repo of the real data, so it's ok there.\n",
"msg_date": "Fri, 26 Mar 2010 08:00:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "Hi All,\n\nThank you for all the support.\n\nI have noticed one more thing here, that if you turn off the fsync and try\nto run the transaction than its breaking the currnet filenode and generating\nanother filenode. Is it true that whenever you turn off or on the fsync the\nfilenode will break and create one more on that table.\n\nRegards\nRaghavendra\n\nOn Fri, Mar 26, 2010 at 7:30 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Fri, Mar 26, 2010 at 7:43 AM, Pierre C <[email protected]> wrote:\n> >\n> >> After fsync/syncronous_commit off\n> >\n> > Do not use fsync off, it is not safe. Who cares about the performance of\n> > fsync=off, when in practice you'd never use it with real data.\n> > synchronnous_commit=off is fine for some applications, though.\n>\n> There are situations where it's ok, when all the data are\n> reproduceable from other sources, etc. for instance I have a\n> reporting server that is a slony slave that runs with fsync off. If\n> it does crash and I can recreate the node in an hour or so and be back\n> online. With fsync off the machine is too slow to do its job, and\n> it's not the primary repo of the real data, so it's ok there.\n>\n\nHi All,\n \nThank you for all the support.\n \nI have noticed one more thing here, that if you turn off the fsync and try to run the transaction than its breaking the currnet filenode and generating another filenode. Is it true that whenever you turn off or on the fsync the filenode will break and create one more on that table.\n \nRegards\nRaghavendra\nOn Fri, Mar 26, 2010 at 7:30 PM, Scott Marlowe <[email protected]> wrote:\n\nOn Fri, Mar 26, 2010 at 7:43 AM, Pierre C <[email protected]> wrote:>>> After fsync/syncronous_commit off>> Do not use fsync off, it is not safe. Who cares about the performance of\n> fsync=off, when in practice you'd never use it with real data.> synchronnous_commit=off is fine for some applications, though.There are situations where it's ok, when all the data are\nreproduceable from other sources, etc. for instance I have areporting server that is a slony slave that runs with fsync off. Ifit does crash and I can recreate the node in an hour or so and be backonline. With fsync off the machine is too slow to do its job, and\nit's not the primary repo of the real data, so it's ok there.",
"msg_date": "Mon, 29 Mar 2010 11:30:43 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "On Mon, Mar 29, 2010 at 12:00 AM, Tadipathri Raghu <[email protected]> wrote:\n> Hi All,\n>\n> Thank you for all the support.\n>\n> I have noticed one more thing here, that if you turn off the fsync and try\n> to run the transaction than its breaking the currnet filenode and generating\n> another filenode. Is it true that whenever you turn off or on the fsync the\n> filenode will break and create one more on that table.\n\n From what I understand, with fsync on or off the same stuff gets\nwritten. It's just not guaranteed to go out in the right order or\nright now, but eventually.\n",
"msg_date": "Mon, 29 Mar 2010 00:45:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "Hi Scott,\n\nYes, May i know any particular reason for behaving this. Are its looking for\nany consistency. I havnt got any clear picture here.\nCould you Please explain this..\n\nThanks & Regards\nRaghavendra\n\nOn Mon, Mar 29, 2010 at 12:15 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Mar 29, 2010 at 12:00 AM, Tadipathri Raghu <[email protected]>\n> wrote:\n> > Hi All,\n> >\n> > Thank you for all the support.\n> >\n> > I have noticed one more thing here, that if you turn off the fsync and\n> try\n> > to run the transaction than its breaking the currnet filenode and\n> generating\n> > another filenode. Is it true that whenever you turn off or on the fsync\n> the\n> > filenode will break and create one more on that table.\n>\n> From what I understand, with fsync on or off the same stuff gets\n> written. It's just not guaranteed to go out in the right order or\n> right now, but eventually.\n>\n\nHi Scott,\n \nYes, May i know any particular reason for behaving this. Are its looking for any consistency. I havnt got any clear picture here. \nCould you Please explain this..\n \nThanks & Regards\nRaghavendra\nOn Mon, Mar 29, 2010 at 12:15 PM, Scott Marlowe <[email protected]> wrote:\n\nOn Mon, Mar 29, 2010 at 12:00 AM, Tadipathri Raghu <[email protected]> wrote:> Hi All,>> Thank you for all the support.>> I have noticed one more thing here, that if you turn off the fsync and try\n> to run the transaction than its breaking the currnet filenode and generating> another filenode. Is it true that whenever you turn off or on the fsync the> filenode will break and create one more on that table.\nFrom what I understand, with fsync on or off the same stuff getswritten. It's just not guaranteed to go out in the right order orright now, but eventually.",
"msg_date": "Mon, 29 Mar 2010 12:35:50 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why Wal_buffer is 64KB"
},
{
"msg_contents": "On Mon, Mar 29, 2010 at 2:00 AM, Tadipathri Raghu <[email protected]> wrote:\n> I have noticed one more thing here, that if you turn off the fsync and try\n> to run the transaction than its breaking the currnet filenode and generating\n> another filenode. Is it true that whenever you turn off or on the fsync the\n> filenode will break and create one more on that table.\n\nI don't know what you mean by a filenode. Changing the fsync\nparameter doesn't cause any additional files to be created or written.\n\n...Robert\n",
"msg_date": "Tue, 30 Mar 2010 10:43:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why Wal_buffer is 64KB"
}
] |
[
{
"msg_contents": "\nHi everyone,\n\nI've been trying to reduce both memory usage and runtime for a query. \nComments/suggestions gratefully received. Details are at\n\nhttp://bulldog.duhs.duke.edu/~faheem/snppy/opt.pdf\n\nSee particularly Section 1 - Background and Discussion.\n\nIf you want a text version, see\n\nhttp://bulldog.duhs.duke.edu/~faheem/snppy/opt.tex\n\nFor background see\n\nhttp://bulldog.duhs.duke.edu/~faheem/snppy/diag.pdf (text version \nhttp://bulldog.duhs.duke.edu/~faheem/snppy/diag.tex) and \nhttp://bulldog.duhs.duke.edu/~faheem/snppy/snppy.pdf\n\nPlease CC any replies to me at the above email address. Thanks.\n\n Regards, Faheem.\n",
"msg_date": "Fri, 26 Mar 2010 01:27:54 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "experiments in query optimization"
},
{
"msg_contents": "On Thu, Mar 25, 2010 at 3:57 PM, Faheem Mitha <[email protected]> wrote:\n>\n> Hi everyone,\n>\n> I've been trying to reduce both memory usage and runtime for a query.\n> Comments/suggestions gratefully received. Details are at\n>\n> http://bulldog.duhs.duke.edu/~faheem/snppy/opt.pdf\n>\n> See particularly Section 1 - Background and Discussion.\n>\n> If you want a text version, see\n>\n> http://bulldog.duhs.duke.edu/~faheem/snppy/opt.tex\n>\n> For background see\n>\n> http://bulldog.duhs.duke.edu/~faheem/snppy/diag.pdf (text version\n> http://bulldog.duhs.duke.edu/~faheem/snppy/diag.tex) and\n> http://bulldog.duhs.duke.edu/~faheem/snppy/snppy.pdf\n>\n> Please CC any replies to me at the above email address. Thanks.\n\nDidn't you (or someone) post about these queries before?\n\nIt's not really too clear to me from reading this what specific\nquestions you're trying to answer. One random thought: WHERE\nrow_number() = 1 is not too efficient. Try using LIMIT or DISTINCT ON\ninstead.\n\nIf you're concerned about memory usage, try reducing work_mem; you've\nprobably got it set to something huge.\n\nYou might need to create some indices, too.\n\n...Robert\n",
"msg_date": "Mon, 29 Mar 2010 14:02:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\n\nOn Mon, 29 Mar 2010, Robert Haas wrote:\n\n> On Thu, Mar 25, 2010 at 3:57 PM, Faheem Mitha <[email protected]> wrote:\n>>\n>> Hi everyone,\n>>\n>> I've been trying to reduce both memory usage and runtime for a query.\n>> Comments/suggestions gratefully received. Details are at\n>>\n>> http://bulldog.duhs.duke.edu/~faheem/snppy/opt.pdf\n>>\n>> See particularly Section 1 - Background and Discussion.\n>>\n>> If you want a text version, see\n>>\n>> http://bulldog.duhs.duke.edu/~faheem/snppy/opt.tex\n>>\n>> For background see\n>>\n>> http://bulldog.duhs.duke.edu/~faheem/snppy/diag.pdf (text version\n>> http://bulldog.duhs.duke.edu/~faheem/snppy/diag.tex) and\n>> http://bulldog.duhs.duke.edu/~faheem/snppy/snppy.pdf\n>>\n>> Please CC any replies to me at the above email address. Thanks.\n>\n> Didn't you (or someone) post about these queries before?\n\nI did write to the list about an earlier version of these queries, yes. In \nfact you replied to that message.\n\n> It's not really too clear to me from reading this what specific\n> questions you're trying to answer.\n\nQuote from opt.{tex/pdf}, Section 1:\n\n\"If I have to I can use Section~\\ref{ped_hybrid} and \nSection~\\ref{tped_hybrid}, but I am left wondering why I get the \nperformance I do out of the earlier versions. Specifically, why is \nSection~\\ref{ped_bigjoin} so much slower than Section~\\ref{ped_trunc}, and \nwhy does the memory usage in Section~\\ref{ped_phenoout} blow up relative \nto Section~\\ref{ped_bigjoin} and Section~\\ref{ped_trunc}?\"\n\n> One random thought: WHERE row_number() = 1 is not too efficient.\n> Try using LIMIT or DISTINCT ON instead.\n\nPossibly. However, the CTE that uses\n\nWHERE row_number() = 1\n\ndoesn't dominate the runtime or memory usage, so I'm not too concerned\nabout it.\n\n> If you're concerned about memory usage, try reducing work_mem; you've \n> probably got it set to something huge.\n\nwork_mem = 1 GB (see diag.{tex/pdf}).\n\nThe point isn't that I'm using so much memory. Again, my question is, why \nare these changes affecting memory usage so drastically?\n\n> You might need to create some indices, too.\n\nOk. To what purpose? This query picks up everything from the tables and \nthe planner does table scans, so conventional wisdom and indeed my \nexperience, says that indexes are not going to be so useful.\n\n Regards, Faheem.\n",
"msg_date": "Tue, 30 Mar 2010 00:01:44 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "On Mon, Mar 29, 2010 at 2:31 PM, Faheem Mitha <[email protected]> wrote:\n>> It's not really too clear to me from reading this what specific\n>> questions you're trying to answer.\n>\n> Quote from opt.{tex/pdf}, Section 1:\n>\n> \"If I have to I can use Section~\\ref{ped_hybrid} and\n> Section~\\ref{tped_hybrid}, but I am left wondering why I get the performance\n> I do out of the earlier versions. Specifically, why is\n> Section~\\ref{ped_bigjoin} so much slower than Section~\\ref{ped_trunc}, and\n> why does the memory usage in Section~\\ref{ped_phenoout} blow up relative to\n> Section~\\ref{ped_bigjoin} and Section~\\ref{ped_trunc}?\"\n\nHere and in the document, you refer to section numbers for the\n\"hybrid\" version but I don't see where you define what the \"hybrid\"\nversion actually is. And the differences between your queries are not\nreal clear either - first you say you took out pheno and sex because\nthey weren't necessary, but then you decide to put them back. I don't\nknow what that means. If they're not necessary, leave them out.\n\n>> One random thought: WHERE row_number() = 1 is not too efficient.\n>> Try using LIMIT or DISTINCT ON instead.\n>\n> Possibly. However, the CTE that uses\n>\n> WHERE row_number() = 1\n>\n> doesn't dominate the runtime or memory usage, so I'm not too concerned\n> about it.\n\nHmm, you might be right.\n\n>> If you're concerned about memory usage, try reducing work_mem; you've\n>> probably got it set to something huge.\n>\n> work_mem = 1 GB (see diag.{tex/pdf}).\n>\n> The point isn't that I'm using so much memory. Again, my question is, why\n> are these changes affecting memory usage so drastically?\n\nWell each sort or hash can use an amount of memory that is limited\nfrom above by work_mem. So if you write the query in a way that\ninvolves more sorts or hashes, each one can add up to 1GB to your\nmemory usage, plus overhead. However, it doesn't look like any of\nyour queries including 30 sorts or hashes, so I'm thinking that the\nRSS number probably also includes some of the shared memory that has\nbeen mapped into each backend's address space. RSS is not a terribly\nreliable number when dealing with shared memory; it's hard to say what\nthat really means.\n\n>> You might need to create some indices, too.\n>\n> Ok. To what purpose? This query picks up everything from the tables and the\n> planner does table scans, so conventional wisdom and indeed my experience,\n> says that indexes are not going to be so useful.\n\nWell, a hash join is not usually the first thing that pops to mind\nwhen dealing with a table that has 825 million rows (geno). I don't\nknow if a nested loop with inner-indexscan would be faster, but it\nwould almost certainly use less memory.\n\n...Robert\n",
"msg_date": "Mon, 29 Mar 2010 15:55:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\n\nOn Mon, 29 Mar 2010, Robert Haas wrote:\n\n> On Mon, Mar 29, 2010 at 2:31 PM, Faheem Mitha <[email protected]> wrote:\n>>> It's not really too clear to me from reading this what specific\n>>> questions you're trying to answer.\n>>\n>> Quote from opt.{tex/pdf}, Section 1:\n>>\n>> \"If I have to I can use Section~\\ref{ped_hybrid} and\n>> Section~\\ref{tped_hybrid}, but I am left wondering why I get the performance\n>> I do out of the earlier versions. Specifically, why is\n>> Section~\\ref{ped_bigjoin} so much slower than Section~\\ref{ped_trunc}, and\n>> why does the memory usage in Section~\\ref{ped_phenoout} blow up relative to\n>> Section~\\ref{ped_bigjoin} and Section~\\ref{ped_trunc}?\"\n>\n> Here and in the document, you refer to section numbers for the\n> \"hybrid\" version but I don't see where you define what the \"hybrid\"\n> version actually is.\n\nIt is defined later in the file. I don't know if you are looking at the \npdf, but if so, it is Section 2.4 (for the hybrid PED query). In the text \nfile, I guess the easist way would be to grep for the label ped_hybrid.\n\n> And the differences between your queries are not real clear either - \n> first you say you took out pheno and sex because they weren't necessary, \n> but then you decide to put them back. I don't know what that means. \n> If they're not necessary, leave them out.\n\nI don't see where I say that pheno and sex weren't necessary. In fact, the \nword 'necessary' does not appear in the opt document. I took them out to \nsee how it would affect performance. Which is does, dramatically. I say\n\n\"So, I decided to remove the joins to tables corresponding to the patient \ndata, namely pheno and sex, and the runtime dropped to 150 min, while the \nmemory stayed around 5G.\"\n\nMaybe I wasn't being sufficiently explicit here. Perhaps\n\n\"So, I decided to remove the joins to tables corresponding to the patient\ndata, namely pheno and sex, to see how it would affect performance...\"\n\nwould have been better.\n\n>>> One random thought: WHERE row_number() = 1 is not too efficient.\n>>> Try using LIMIT or DISTINCT ON instead.\n>>\n>> Possibly. However, the CTE that uses\n>>\n>> WHERE row_number() = 1\n>>\n>> doesn't dominate the runtime or memory usage, so I'm not too concerned\n>> about it.\n>\n> Hmm, you might be right.\n>\n>>> If you're concerned about memory usage, try reducing work_mem; you've\n>>> probably got it set to something huge.\n>>\n>> work_mem = 1 GB (see diag.{tex/pdf}).\n>>\n>> The point isn't that I'm using so much memory. Again, my question is, why\n>> are these changes affecting memory usage so drastically?\n>\n> Well each sort or hash can use an amount of memory that is limited\n> from above by work_mem. So if you write the query in a way that\n> involves more sorts or hashes, each one can add up to 1GB to your\n> memory usage, plus overhead. However, it doesn't look like any of\n> your queries including 30 sorts or hashes, so I'm thinking that the\n> RSS number probably also includes some of the shared memory that has\n> been mapped into each backend's address space. RSS is not a terribly\n> reliable number when dealing with shared memory; it's hard to say what\n> that really means.\n\n>>> You might need to create some indices, too.\n\n>> Ok. To what purpose? 
This query picks up everything from the tables and the\n>> planner does table scans, so conventional wisdom and indeed my experience,\n>> says that indexes are not going to be so useful.\n\n> Well, a hash join is not usually the first thing that pops to mind when \n> dealing with a table that has 825 million rows (geno). I don't know if \n> a nested loop with inner-indexscan would be faster, but it would almost \n> certainly use less memory.\n\nCan you provide an illustration of what you mean? I don't know what a \n\"nested loop with inner-indexscan\" is in this context.\n\n Regards, Faheem.\n",
"msg_date": "Tue, 30 Mar 2010 01:52:05 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "Faheem Mitha <[email protected]> wrote:\n \n>> If you're concerned about memory usage, try reducing work_mem;\n>> you've probably got it set to something huge.\n> \n> work_mem = 1 GB (see diag.{tex/pdf}).\n> \n> The point isn't that I'm using so much memory. Again, my question\n> is, why are these changes affecting memory usage so drastically?\n \nBecause the planner looks at a very wide variety of plans, some of\nwhich may use many allocations of work_mem size, and some of which\ndon't. The costs are compared and the lowest cost one is chosen. If\nyou are close to the \"tipping point\" then even a very small change\nmight affect which is chosen. It pays to keep the work_mem setting\nsane so that unexpected plan changes don't cause problems.\n \nLook at the plans and their costs to get a feel for what's being\nchosen and why. Although it's a very bad idea to use these in\nproduction, you can often shift the plan to something you *think*\nwould be better using the enable_* settings, to see what the planner\nthinks such a plan will cost and where it thinks the cost would be;\nthat can help in tuning the settings.\n \n>> You might need to create some indices, too.\n> \n> Ok. To what purpose? This query picks up everything from the\n> tables and the planner does table scans, so conventional wisdom\n> and indeed my experience, says that indexes are not going to be so\n> useful.\n \nThere are situations where scanning the entire table to build up a\nhash table is more expensive than using an index. Why not test it?\n \n-Kevin\n",
"msg_date": "Tue, 30 Mar 2010 11:08:10 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
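As a concrete, session-only illustration of the enable_* experiment described above (not something to leave on in production); the join is a trimmed-down version of the one in the thread:

    BEGIN;
    SET LOCAL enable_hashjoin = off;   -- steer the planner away from hash joins
    EXPLAIN ANALYZE
    SELECT geno.idlink_id, geno.anno_id, geno.snpval_id
    FROM   geno
    JOIN   idlink ON geno.idlink_id = idlink.id;
    ROLLBACK;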
{
"msg_contents": "\n\nOn Tue, 30 Mar 2010, Kevin Grittner wrote:\n\n> Faheem Mitha <[email protected]> wrote:\n>\n>>> If you're concerned about memory usage, try reducing work_mem;\n>>> you've probably got it set to something huge.\n>>\n>> work_mem = 1 GB (see diag.{tex/pdf}).\n>>\n>> The point isn't that I'm using so much memory. Again, my question\n>> is, why are these changes affecting memory usage so drastically?\n>\n> Because the planner looks at a very wide variety of plans, some of\n> which may use many allocations of work_mem size, and some of which\n> don't. The costs are compared and the lowest cost one is chosen. If\n> you are close to the \"tipping point\" then even a very small change\n> might affect which is chosen. It pays to keep the work_mem setting\n> sane so that unexpected plan changes don't cause problems.\n\nSure, but define sane setting, please. I guess part of the point is that \nI'm trying to keep memory low, and it seems this is not part of the \nplanner's priorities. That it, it does not take memory usage into \nconsideration when choosing a plan. If that it wrong, let me know, but \nthat is my understanding.\n\n> Look at the plans and their costs to get a feel for what's being\n> chosen and why. Although it's a very bad idea to use these in\n> production, you can often shift the plan to something you *think*\n> would be better using the enable_* settings, to see what the planner\n> thinks such a plan will cost and where it thinks the cost would be;\n> that can help in tuning the settings.\n\nRight. You mean to close off certain options to the planner using 'Planner \nMethod Configuration'. I suppose one can also use 'Planner Cost Constants' \nto alter plan behaviour. I haven't tried changing these.\n\n>>> You might need to create some indices, too.\n>>\n>> Ok. To what purpose? This query picks up everything from the\n>> tables and the planner does table scans, so conventional wisdom\n>> and indeed my experience, says that indexes are not going to be so\n>> useful.\n>\n> There are situations where scanning the entire table to build up a\n> hash table is more expensive than using an index. Why not test it?\n\nCertainly, but I don't know what you and Robert have in mind, and I'm not \nexperienced enough to make an educated guess. I'm open to specific \nsuggestions.\n\n Regards, Faheem.\n",
"msg_date": "Tue, 30 Mar 2010 22:00:14 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\nOn thing which I haven't really mentioned in this thread or in my writeup, \nis that the planners value for the number of rows in geno is way off base \nsome of the time. It is around 800 million, it thinks it is 100 million. I \ndon't know if this is significant or not, or what to do about it.\n\neg. in the ped_bigjoin EXPLAIN ANALYZE VERBOSE:\n\n -> Sort (cost=56855882.72..57144683.54 rows=115520330 width=42) \n(actual time=23027732.092..37113627.380 rows=823086774 loops=1)\n Output: (CASE WHEN (hapmap.geno.snpval_id = (-1)) THEN '0 0'::text \nWHEN (hapmap.geno.snpval_id = 0) THEN \n(((dedup_patient_anno.allelea_id)::text || ' '::text) || \n(dedup_patient_anno.allelea_id)::text) WHEN (hapmap.geno.snpval_id = 1) \nTHEN (((dedup_patient_anno.allelea_id)::text || ' '::text) || \n(dedup_patient_anno.alleleb_id)::text) WHEN (hapmap.geno.snpval_id = 2) \nTHEN (((dedup_patient_anno.alleleb_id)::text || ' '::text) || \n(dedup_patient_anno.alleleb_id)::text) ELSE NULL::text END), \nhapmap.geno.idlink_id, hapmap.geno.anno_id, pheno.patientid, \npheno.phenotype, sex.code\n\n Faheem.\n\nOn Tue, 30 Mar 2010, Kevin Grittner wrote:\n\n> Faheem Mitha <[email protected]> wrote:\n>\n>>> If you're concerned about memory usage, try reducing work_mem;\n>>> you've probably got it set to something huge.\n>>\n>> work_mem = 1 GB (see diag.{tex/pdf}).\n>>\n>> The point isn't that I'm using so much memory. Again, my question\n>> is, why are these changes affecting memory usage so drastically?\n>\n> Because the planner looks at a very wide variety of plans, some of\n> which may use many allocations of work_mem size, and some of which\n> don't. The costs are compared and the lowest cost one is chosen. If\n> you are close to the \"tipping point\" then even a very small change\n> might affect which is chosen. It pays to keep the work_mem setting\n> sane so that unexpected plan changes don't cause problems.\n>\n> Look at the plans and their costs to get a feel for what's being\n> chosen and why. Although it's a very bad idea to use these in\n> production, you can often shift the plan to something you *think*\n> would be better using the enable_* settings, to see what the planner\n> thinks such a plan will cost and where it thinks the cost would be;\n> that can help in tuning the settings.\n>\n>>> You might need to create some indices, too.\n>>\n>> Ok. To what purpose? This query picks up everything from the\n>> tables and the planner does table scans, so conventional wisdom\n>> and indeed my experience, says that indexes are not going to be so\n>> useful.\n>\n> There are situations where scanning the entire table to build up a\n> hash table is more expensive than using an index. Why not test it?\n>\n> -Kevin\n>\n>\n",
"msg_date": "Tue, 30 Mar 2010 22:49:40 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
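If the planner's row count for geno really is off by roughly a factor of eight, a plain ANALYZE refreshes the estimate; raising a column's statistics target (the value below is arbitrary) is a further knob that may or may not help:

    ALTER TABLE geno ALTER COLUMN anno_id SET STATISTICS 1000;  -- optional, larger sample
    ANALYZE geno;                                               -- refreshes reltuples and column stats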
{
"msg_contents": "On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha <[email protected]> wrote:\n> Sure, but define sane setting, please. I guess part of the point is that I'm\n> trying to keep memory low, and it seems this is not part of the planner's\n> priorities. That it, it does not take memory usage into consideration when\n> choosing a plan. If that it wrong, let me know, but that is my\n> understanding.\n\nI don't understand quite why you're confused here. We've already\nexplained to you that the planner will not employ a plan that uses\nmore than the amount of memory defined by work_mem for each sort or\nhash.\n\nTypical settings for work_mem are between 1MB and 64MB. 1GB is enormous.\n\n>>>> You might need to create some indices, too.\n>>>\n>>> Ok. To what purpose? This query picks up everything from the\n>>> tables and the planner does table scans, so conventional wisdom\n>>> and indeed my experience, says that indexes are not going to be so\n>>> useful.\n>>\n>> There are situations where scanning the entire table to build up a\n>> hash table is more expensive than using an index. Why not test it?\n>\n> Certainly, but I don't know what you and Robert have in mind, and I'm not\n> experienced enough to make an educated guess. I'm open to specific\n> suggestions.\n\nTry creating an index on geno on the columns that are being used for the join.\n\n...Robert\n",
"msg_date": "Tue, 30 Mar 2010 15:59:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
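work_mem does not have to be changed server-wide to test this; it can be overridden for a single session, for example:

    SET work_mem = '64MB';   -- per-session override, using the upper end of the range mentioned above
    SHOW work_mem;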
{
"msg_contents": "On Tue, 30 Mar 2010, Faheem Mitha wrote:\n>>> work_mem = 1 GB (see diag.{tex/pdf}).\n>\n> Sure, but define sane setting, please. I guess part of the point is that I'm \n> trying to keep memory low\n\nYou're trying to keep memory usage low, but you have work_mem set to 1GB?\n\nMatthew\n\n-- \n\"Prove to thyself that all circuits that radiateth and upon which thou worketh\n are grounded, lest they lift thee to high-frequency potential and cause thee\n to radiate also. \" -- The Ten Commandments of Electronics\n",
"msg_date": "Wed, 31 Mar 2010 10:29:58 +0100 (BST)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\n\nOn Wed, 31 Mar 2010, Matthew Wakeling wrote:\n\n> On Tue, 30 Mar 2010, Faheem Mitha wrote:\n>>>> work_mem = 1 GB (see diag.{tex/pdf}).\n>> \n>> Sure, but define sane setting, please. I guess part of the point is that \n>> I'm trying to keep memory low\n>\n> You're trying to keep memory usage low, but you have work_mem set to 1GB?\n\nI'm trying to keep both runtime and memory usage low. I assume that with \nlower levels of memory, the runtime would be longer, other things being \nequal.\n Regards, Faheem.\n",
"msg_date": "Wed, 31 Mar 2010 15:06:11 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\n[If Kevin Grittner reads this, please fix your email address. I am \ngetting bounces from your email address.]\n\nOn Tue, 30 Mar 2010, Robert Haas wrote:\n\n> On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha <[email protected]> wrote:\n>> Sure, but define sane setting, please. I guess part of the point is that I'm\n>> trying to keep memory low, and it seems this is not part of the planner's\n>> priorities. That it, it does not take memory usage into consideration when\n>> choosing a plan. If that it wrong, let me know, but that is my\n>> understanding.\n>\n> I don't understand quite why you're confused here. We've already\n> explained to you that the planner will not employ a plan that uses\n> more than the amount of memory defined by work_mem for each sort or\n> hash.\n\n> Typical settings for work_mem are between 1MB and 64MB. 1GB is enormous.\n\nI don't think I am confused. To be clear, when I said \"it does not take \nmemory usage into consideration' I was talking about overall memory usage. \nLet me summarize:\n\nThe planner will choose the plan with the minimum total cost, with the \nconstraint that the number of memory used for each of certain steps is \nless than work_mem. In other words with k such steps it can use at most\n\nk(plan)*work_mem\n\nmemory where k(plan) denotes that k is a function of the plan. (I'm \nassuming here that memory is not shared between the different steps). \nHowever, k(plan)*work_mem is not itself bounded. I fail to see how \nreducing work_mem significantly would help me. This would mean that the \ncurrent plans I am using would likely be ruled out, and I would be left \nwith plans which, by definition, would have larger cost and so longer run \ntimes. The current runtimes are already quite long - for the PED query, \nthe best I can do with work_mem=1 GB is 2 1/2 hrs, and that is after \nsplitting the query into two pieces.\n\nI might actually be better off *increasing* the memory, since then the \nplanner would have more flexibility to choose plans where the individual \nsteps might require more memory, but the overall memory sum might be \nlower.\n\n>>>>> You might need to create some indices, too.\n>>>>\n>>>> Ok. To what purpose? This query picks up everything from the\n>>>> tables and the planner does table scans, so conventional wisdom\n>>>> and indeed my experience, says that indexes are not going to be so\n>>>> useful.\n>>>\n>>> There are situations where scanning the entire table to build up a\n>>> hash table is more expensive than using an index. �Why not test it?\n>>\n>> Certainly, but I don't know what you and Robert have in mind, and I'm not\n>> experienced enough to make an educated guess. I'm open to specific\n>> suggestions.\n>\n> Try creating an index on geno on the columns that are being used for the join.\n\nOk, I'll try that. I guess the cols in question on geno are idlink_id and \nanno_id. I thought that I already had indexes on them, but no. Maybe I had \nindexes, but removed them.\n\nIf I understand the way this works, if you request, say an INNER JOIN, the \nplanner can choose different ways/algorithms to do this, as in \nhttp://en.wikipedia.org/wiki/Join_(SQL)#Nested_loops . It may choose a \nhash join, or an nested loop join or something else, based on cost. If the \nindexes don't exist that may make the inner loop join more expensive, so \ntip the balance in favor of using a hash join. However, I have no way to \ncontrol which option it chooses, short of disabling eg. 
the hash join \noption, which is not an option for production usage anyway. Correct?\n\n Regards, Faheem.",
"msg_date": "Wed, 31 Mar 2010 15:40:45 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "On Wed, Mar 31, 2010 at 6:10 AM, Faheem Mitha <[email protected]> wrote:\n>\n> [If Kevin Grittner reads this, please fix your email address. I am getting\n> bounces from your email address.]\n>\n> On Tue, 30 Mar 2010, Robert Haas wrote:\n>\n>> On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha <[email protected]>\n>> wrote:\n>>>\n>>> Sure, but define sane setting, please. I guess part of the point is that\n>>> I'm\n>>> trying to keep memory low, and it seems this is not part of the planner's\n>>> priorities. That it, it does not take memory usage into consideration\n>>> when\n>>> choosing a plan. If that it wrong, let me know, but that is my\n>>> understanding.\n>>\n>> I don't understand quite why you're confused here. We've already\n>> explained to you that the planner will not employ a plan that uses\n>> more than the amount of memory defined by work_mem for each sort or\n>> hash.\n>\n>> Typical settings for work_mem are between 1MB and 64MB. 1GB is enormous.\n>\n> I don't think I am confused. To be clear, when I said \"it does not take\n> memory usage into consideration' I was talking about overall memory usage.\n> Let me summarize:\n>\n> The planner will choose the plan with the minimum total cost, with the\n> constraint that the number of memory used for each of certain steps is less\n> than work_mem. In other words with k such steps it can use at most\n>\n> k(plan)*work_mem\n>\n> memory where k(plan) denotes that k is a function of the plan. (I'm assuming\n> here that memory is not shared between the different steps). However,\n> k(plan)*work_mem is not itself bounded. I fail to see how reducing work_mem\n> significantly would help me. This would mean that the current plans I am\n> using would likely be ruled out, and I would be left with plans which, by\n> definition, would have larger cost and so longer run times. The current\n> runtimes are already quite long - for the PED query, the best I can do with\n> work_mem=1 GB is 2 1/2 hrs, and that is after splitting the query into two\n> pieces.\n>\n> I might actually be better off *increasing* the memory, since then the\n> planner would have more flexibility to choose plans where the individual\n> steps might require more memory, but the overall memory sum might be lower.\n\nOK, your understanding is correct.\n\n>>>>>> You might need to create some indices, too.\n>>>>>\n>>>>> Ok. To what purpose? This query picks up everything from the\n>>>>> tables and the planner does table scans, so conventional wisdom\n>>>>> and indeed my experience, says that indexes are not going to be so\n>>>>> useful.\n>>>>\n>>>> There are situations where scanning the entire table to build up a\n>>>> hash table is more expensive than using an index. Why not test it?\n>>>\n>>> Certainly, but I don't know what you and Robert have in mind, and I'm not\n>>> experienced enough to make an educated guess. I'm open to specific\n>>> suggestions.\n>>\n>> Try creating an index on geno on the columns that are being used for the\n>> join.\n>\n> Ok, I'll try that. I guess the cols in question on geno are idlink_id and\n> anno_id. I thought that I already had indexes on them, but no. Maybe I had\n> indexes, but removed them.\n>\n> If I understand the way this works, if you request, say an INNER JOIN, the\n> planner can choose different ways/algorithms to do this, as in\n> http://en.wikipedia.org/wiki/Join_(SQL)#Nested_loops . It may choose a hash\n> join, or an nested loop join or something else, based on cost. 
If the\n> indexes don't exist that may make the inner loop join more expensive, so tip\n> the balance in favor of using a hash join. However, I have no way to control\n> which option it chooses, short of disabling eg. the hash join option, which\n> is not an option for production usage anyway. Correct?\n\nYep.\n\n...Robert\n",
"msg_date": "Wed, 31 Mar 2010 11:04:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\n\nOn Wed, 31 Mar 2010, Faheem Mitha wrote:\n\n> On Tue, 30 Mar 2010, Robert Haas wrote:\n>\n>> On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha <[email protected]>\n\n>>>>>> You might need to create some indices, too.\n>>>>> \n>>>>> Ok. To what purpose? This query picks up everything from the\n>>>>> tables and the planner does table scans, so conventional wisdom\n>>>>> and indeed my experience, says that indexes are not going to be so\n>>>>> useful.\n>>>> \n>>>> There are situations where scanning the entire table to build up a\n>>>> hash table is more expensive than using an index. Why not test it?\n>>> \n>>> Certainly, but I don't know what you and Robert have in mind, and I'm not\n>>> experienced enough to make an educated guess. I'm open to specific\n>>> suggestions.\n>> \n>> Try creating an index on geno on the columns that are being used for the \n>> join.\n>\n> Ok, I'll try that. I guess the cols in question on geno are idlink_id and \n> anno_id. I thought that I already had indexes on them, but no. Maybe I had \n> indexes, but removed them.\n\nLooking at this more closely, idlink_id and anno_id are primary keys, so \nalready have indexes on them, so my understanding (from the docs) is there \nis no purpose in creating them. That's why I removed the indexes that were \nthere (back last August, actually, according to my logs). Anyway, doesn't \nlook there is anything I can do here. Does anyone have additions or \ncorrections to this?\n\n Regards, Faheem.",
"msg_date": "Thu, 1 Apr 2010 17:16:40 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "On Thu, Apr 1, 2010 at 7:46 AM, Faheem Mitha <[email protected]> wrote:\n\n\n> Looking at this more closely, idlink_id and anno_id are primary keys, so\n> already have indexes on them, so my understanding (from the docs) is there\n> is no purpose in creating them. That's why I removed the indexes that were\n> there (back last August, actually, according to my logs). Anyway, doesn't\n> look there is anything I can do here. Does anyone have additions or\n> corrections to this?\n>\n>\nWhen you do a join, you typically have a foreign key in one table\nreferencing a primary key in another table. While designating a foreign key\ndoes put a constraint on the key to ensure referential integrity, it does\nnot put an index on the column that is being designated as a foreign key. If\nI understand correctly, the scan done as the inner loop of the nested loop\nscan for the join is going to be your foreign key column, not your primary\nkey column. Thus, if you have no index on the foreign key column, you will\nbe forced to do a sequential table scan to do the join. In that case the\nhash-based join will almost certainly be faster (especially for such a large\nnumber of rows). If you put an index on the foreign key, then the inner scan\ncan be an index scan and that might turn out to be faster than building the\nhash indexes on all the table rows.\n\nSomebody can correct me if I'm wrong.\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing from\nour children, we're stealing from them--and it's not even considered to be a\ncrime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not\nlive to eat.) ~Marcus Tullius Cicero\n\nOn Thu, Apr 1, 2010 at 7:46 AM, Faheem Mitha <[email protected]> wrote: \nLooking at this more closely, idlink_id and anno_id are primary keys, so already have indexes on them, so my understanding (from the docs) is there is no purpose in creating them. That's why I removed the indexes that were there (back last August, actually, according to my logs). Anyway, doesn't look there is anything I can do here. Does anyone have additions or corrections to this?\nWhen you do a join, you typically have a foreign key in one table referencing a primary key in another table. While designating a foreign key does put a constraint on the key to ensure referential integrity, it does not put an index on the column that is being designated as a foreign key. If I understand correctly, the scan done as the inner loop of the nested loop scan for the join is going to be your foreign key column, not your primary key column. Thus, if you have no index on the foreign key column, you will be forced to do a sequential table scan to do the join. In that case the hash-based join will almost certainly be faster (especially for such a large number of rows). If you put an index on the foreign key, then the inner scan can be an index scan and that might turn out to be faster than building the hash indexes on all the table rows. \nSomebody can correct me if I'm wrong.-- Eliot Gable\"We do not inherit the Earth from our ancestors: we borrow it from our children.\" ~David Brower \n\"I decided the words were too conservative for me. 
We're not borrowing from our children, we're stealing from them--and it's not even considered to be a crime.\" ~David Brower\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; not live to eat.) ~Marcus Tullius Cicero",
"msg_date": "Thu, 1 Apr 2010 12:31:02 -0400",
"msg_from": "Eliot Gable <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
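For reference, single-column indexes on the geno join columns being discussed would look roughly like this (the index names are made up); whether they actually help is exactly what the rest of the thread debates:

    CREATE INDEX geno_idlink_id_idx ON geno (idlink_id);
    CREATE INDEX geno_anno_id_idx ON geno (anno_id);
    ANALYZE geno;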
{
"msg_contents": "\nHi Eliot,\n\nThanks for the comment.\n\nOn Thu, 1 Apr 2010, Eliot Gable wrote:\n\n> On Thu, Apr 1, 2010 at 7:46 AM, Faheem Mitha <[email protected]> wrote:\n\n> Looking at this more closely, idlink_id and anno_id are primary keys, so \n> already have indexes on them, so my understanding (from the docs) is \n> there is no purpose in creating them. That's why I removed the indexes \n> that were there (back last August, actually, according to my logs). \n> Anyway, doesn't look there is anything I can do here. Does anyone have \n> additions or corrections to this?\n\n> When you do a join, you typically have a foreign key in one table \n> referencing a primary key in another table. While designating a foreign \n> key does put a constraint on the key to ensure referential integrity, it \n> does not put an index on the column that is being designated as a \n> foreign key. If I understand correctly, the scan done as the inner loop \n> of the nested loop scan for the join is going to be your foreign key \n> column, not your primary key column. Thus, if you have no index on the \n> foreign key column, you will be forced to do a sequential table scan to \n> do the join. In that case the hash-based join will almost certainly be \n> faster (especially for such a large number of rows). If you put an index \n> on the foreign key, then the inner scan can be an index scan and that \n> might turn out to be faster than building the hash indexes on all the \n> table rows.\n\n> Somebody can correct me if I'm wrong.\n\nI had set the foreign keys in question (on the geno table) to be primary \nkeys. This is because this setup is basically a glorified spreadsheet, and \nI don't want more than one cell corresponding to a particular tuple of \nidlink.id and anno.id (the conceptual rows and cols). Since a primary key \ndefines an index, I thought putting indexes on idlink_id and anno_id was \nredundant. However, it looks like (unsurprisingly) the index corresponding \nto the primary key is across both columns, which may not be what is wanted \nfor the aforesaid join. Ie.\n\nALTER TABLE ONLY geno ADD CONSTRAINT geno_pkey PRIMARY KEY (idlink_id, anno_id)\n\n(As a side comment, with respect to the indexes on the other side of the \njoins, in one case, we have idlink.id = geno.idlink_id, and idlink.id is a \nprimary key too. In the other, namely geno.anno_id = \ndedup_patient_anno.id, dedup_patient_anno is a CTE, so no index on \ndedup_patient_anno.id. 
But maybe indexes aren't needed there.)\n\nHere is the join\n\n SELECT decode_genotype(geno.snpval_id, %(allelea)s, %(alleleb)s) AS g,\n geno.idlink_id, geno.anno_id\n FROM geno\n INNER JOIN dedup_patient_anno\n ON geno.anno_id = dedup_patient_anno.id\n INNER JOIN idlink\n ON geno.idlink_id = idlink.id\n ORDER BY idlink_id, anno_id\n\nHere is the table dump.\n\n****************************************************************\n-- Name: geno; Type: TABLE; Schema: hapmap; Owner: snp; Tablespace:\n--\nCREATE TABLE geno (\n idlink_id integer NOT NULL,\n anno_id integer NOT NULL,\n snpval_id integer NOT NULL\n)\nWITH (autovacuum_enabled=true);\n\nALTER TABLE hapmap.geno OWNER TO snp;\n--\n-- Name: geno_pkey; Type: CONSTRAINT; Schema: hapmap; Owner: snp; \nTablespace:\n--\nALTER TABLE ONLY geno\n ADD CONSTRAINT geno_pkey PRIMARY KEY (idlink_id, anno_id); (!!!!)\n--\n-- Name: geno_anno_id_fkey; Type: FK CONSTRAINT; Schema: hapmap; Owner: \nsnp\n--\nALTER TABLE ONLY geno\n ADD CONSTRAINT geno_anno_id_fkey FOREIGN KEY (anno_id) REFERENCES \nanno(id) ON UPDATE CASCADE ON DELETE CASCADE;\n--\n-- Name: geno_idlink_id_fkey; Type: FK CONSTRAINT; Schema: hapmap; Owner: \nsnp\n--\nALTER TABLE ONLY geno\n ADD CONSTRAINT geno_idlink_id_fkey FOREIGN KEY (idlink_id) REFERENCES \nidlink(id) ON UPDATE CASCADE ON DELETE CASCADE;\n--\n-- Name: geno_snpval_id_fkey; Type: FK CONSTRAINT; Schema: hapmap; Owner: \nsnp\n--\nALTER TABLE ONLY geno\n ADD CONSTRAINT geno_snpval_id_fkey FOREIGN KEY (snpval_id) REFERENCES \nsnpval(val) ON UPDATE CASCADE ON DELETE CASCADE;\n*************************************************************************\n\nSo, should I add indexes on the individual foreign key cols idlink_id\nand anno_id after all?\n\n Regards, Faheem.\n\n> --\n> Eliot Gable\n> \n> \"We do not inherit the Earth from our ancestors: we borrow it from our \n> children.\" ~David Brower\n\n> \"I decided the words were too conservative for me. We're not borrowing \n> from our children, we're stealing from them--and it's not even \n> considered to be a crime.\" ~David Brower\n\nNice quotes.\n\n> \"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to live; \n> not live to eat.) ~Marcus Tullius Cicero\n\n",
"msg_date": "Thu, 1 Apr 2010 23:45:13 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "On Thu, Apr 1, 2010 at 2:15 PM, Faheem Mitha <[email protected]> wrote:\n> I had set the foreign keys in question (on the geno table) to be primary\n> keys. This is because this setup is basically a glorified spreadsheet, and I\n> don't want more than one cell corresponding to a particular tuple of\n> idlink.id and anno.id (the conceptual rows and cols). Since a primary key\n> defines an index, I thought putting indexes on idlink_id and anno_id was\n> redundant. However, it looks like (unsurprisingly) the index corresponding\n> to the primary key is across both columns, which may not be what is wanted\n> for the aforesaid join\n\nActually it is what is wanted - that is good.\n\n> So, should I add indexes on the individual foreign key cols idlink_id\n> and anno_id after all?\n\nI doubt that would help.\n\nThe bottom line may be that you're dealing with hundreds of millions\nof rows here, so things are going to take a long time. Of course you\ncan always get more/faster memory, a bigger I/O subsystem, faster\nprocessors... and it could be that with detailed study there are\noptimizations that could be done even without spending money, but I\nthink I'm about tapped out on what I can do over an Internet mailing\nlist.\n\n...Robert\n",
"msg_date": "Thu, 1 Apr 2010 14:50:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\n\nOn Thu, 1 Apr 2010, Robert Haas wrote:\n\n> On Thu, Apr 1, 2010 at 2:15 PM, Faheem Mitha <[email protected]> wrote:\n\n>> I had set the foreign keys in question (on the geno table) to be \n>> primary keys. This is because this setup is basically a glorified \n>> spreadsheet, and I don't want more than one cell corresponding to a \n>> particular tuple of idlink.id and anno.id (the conceptual rows and \n>> cols). Since a primary key defines an index, I thought putting indexes \n>> on idlink_id and anno_id was redundant. However, it looks like \n>> (unsurprisingly) the index corresponding to the primary key is across \n>> both columns, which may not be what is wanted for the aforesaid join\n\n> Actually it is what is wanted - that is good.\n\nI see.\n\n>> So, should I add indexes on the individual foreign key cols idlink_id\n>> and anno_id after all?\n>\n> I doubt that would help.\n\nYou're sure of this?\n\n> The bottom line may be that you're dealing with hundreds of millions of \n> rows here, so things are going to take a long time. Of course you can \n> always get more/faster memory, a bigger I/O subsystem, faster \n> processors... and it could be that with detailed study there are \n> optimizations that could be done even without spending money, but I \n> think I'm about tapped out on what I can do over an Internet mailing \n> list.\n\nThanks for your assistance, Robert. It's been educational.\n\n Regards, Faheem.\n",
"msg_date": "Fri, 2 Apr 2010 00:31:09 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "On Thu, Apr 1, 2010 at 3:01 PM, Faheem Mitha <[email protected]> wrote:\n\n>\n> So, should I add indexes on the individual foreign key cols idlink_id\n>>> and anno_id after all?\n>>>\n>>\n>> I doubt that would help.\n>>\n>\n> You're sure of this?\n>\n>\n>\nIt is always best to test and be certain.\n\nOn Thu, Apr 1, 2010 at 3:01 PM, Faheem Mitha <[email protected]> wrote:\n\n\n\nSo, should I add indexes on the individual foreign key cols idlink_id\nand anno_id after all?\n\n\nI doubt that would help.\n\n\nYou're sure of this?It is always best to test and be certain.",
"msg_date": "Thu, 1 Apr 2010 15:29:17 -0400",
"msg_from": "Eliot Gable <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiments in query optimization"
},
{
"msg_contents": "\n\nOn Thu, 1 Apr 2010, Eliot Gable wrote:\n\n> \n> \n> On Thu, Apr 1, 2010 at 3:01 PM, Faheem Mitha <[email protected]> wrote:\n>\n> So, should I add indexes on the individual foreign key cols idlink_id\n> and anno_id after all?\n> \n>\n> I doubt that would help.\n> \n> \n> You're sure of this?\n> \n> \n> It is always best to test and be certain.\n\nFair enough. I may also try disabling hash joins and see what happens...\n\n Regards, Faheem.\n",
"msg_date": "Fri, 2 Apr 2010 01:02:37 +0530 (IST)",
"msg_from": "Faheem Mitha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: experiments in query optimization"
}
] |
[
{
"msg_contents": "Hello,\n\nWondering what's a good value for effective_io_concurrency when dealing with FusionIO drives...anyone have any experience with this?\n\nI know that SSDs vary from 10 channels to 30, and that 1 SSD about as fast as a 4-drive RAID, but I can't seem to settle on a good value to use for effective_io_concurrency. Has anyone done any performance testing/metrics with the value? Any recommendations or thoughts?\n\n--Richard",
"msg_date": "Thu, 25 Mar 2010 15:19:34 -0700",
"msg_from": "Richard Yen <[email protected]>",
"msg_from_op": true,
"msg_subject": "good effective_io_concurrency for FusionIO drives?"
}
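For what it's worth, effective_io_concurrency can be changed per session, so candidate values can be benchmarked without a restart; the number below is only a guess in line with the 10-30 channel figure above:

    SET effective_io_concurrency = 10;
    SHOW effective_io_concurrency;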
] |
[
{
"msg_contents": "Hi everyone,\n\nWe've recently encountered some swapping issues on our CentOS 64GB Nehalem machine, running postgres 8.4.2. Unfortunately, I was foolish enough to set shared_buffers to 40GB. I was wondering if anyone would have any insight into why the swapping suddenly starts, but never recovers?\n\n<img src=\"http://richyen.com/i/swap.png\">\n\nNote, the machine has been up and running since mid-December 2009. It was only a March 8 that this swapping began, and it's never recovered.\n\nIf we look at dstat, we find the following:\n\n<img src=\"http://richyen.com/i/dstat.png\">\n\nNote that it is constantly paging in, but never paging out. This would indicate that it's constantly reading from swap, but never writing out to it. Why would postgres do this? (postgres is pretty much the only thing running on this machine).\n\nI'm planning on lowering the shared_buffers to a more sane value, like 25GB (pgtune recommends this for a Mixed-purpose machine) or less (pgtune recommends 14GB for an OLTP machine). However, before I do this (and possibly resolve the issue), I was hoping to see if anyone would have an explanation for the constant reading from swap, but never writing back.\n\n--Richard",
"msg_date": "Fri, 26 Mar 2010 16:57:23 -0700",
"msg_from": "Richard Yen <[email protected]>",
"msg_from_op": true,
"msg_subject": "why does swap not recover?"
},
{
"msg_contents": "On Fri, Mar 26, 2010 at 5:57 PM, Richard Yen <[email protected]> wrote:\n> Hi everyone,\n>\n> We've recently encountered some swapping issues on our CentOS 64GB Nehalem\n\nWhat version Centos? How up to date is it? Are there any other\nsettings that aren't defaults in things like /etc/sysctl.conf?\n",
"msg_date": "Fri, 26 Mar 2010 18:05:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why does swap not recover?"
},
{
"msg_contents": "On Mar 26, 2010, at 4:57 PM, Richard Yen wrote:\n\n> Hi everyone,\n> \n> We've recently encountered some swapping issues on our CentOS 64GB Nehalem machine, running postgres 8.4.2. Unfortunately, I was foolish enough to set shared_buffers to 40GB. I was wondering if anyone would have any insight into why the swapping suddenly starts, but never recovers?\n> \n> <img src=\"http://richyen.com/i/swap.png\">\n> \n> Note, the machine has been up and running since mid-December 2009. It was only a March 8 that this swapping began, and it's never recovered.\n> \n> If we look at dstat, we find the following:\n> \n> <img src=\"http://richyen.com/i/dstat.png\">\n> \n> Note that it is constantly paging in, but never paging out. This would indicate that it's constantly reading from swap, but never writing out to it. Why would postgres do this? (postgres is pretty much the only thing running on this machine).\n> \n> I'm planning on lowering the shared_buffers to a more sane value, like 25GB (pgtune recommends this for a Mixed-purpose machine) or less (pgtune recommends 14GB for an OLTP machine). However, before I do this (and possibly resolve the issue), I was hoping to see if anyone would have an explanation for the constant reading from swap, but never writing back.\n\nLinux until recently does not account for shared memory properly in its swap 'aggressiveness' decisions.\nSetting shared_buffers larger than 35% is asking for trouble.\n\nYou could try adjusting the 'swappiness' setting on the fly and seeing how it reacts, but one consequence of that is trading off disk swapping for kswapd using up tons of CPU causing other trouble.\n\nEither use one of the last few kernel versions (I forget which addressed the memory accounting issues, and haven't tried it myself), or turn shared_buffers down. I recommend trying 10GB or so to start.\n\n> \n> --Richard\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 26 Mar 2010 17:25:34 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why does swap not recover?"
},
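The settings under discussion can be confirmed from any session before deciding what to lower (changing shared_buffers itself still means editing postgresql.conf and restarting):

    SELECT name, setting, unit
    FROM   pg_settings
    WHERE  name IN ('shared_buffers', 'work_mem', 'effective_cache_size');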
{
"msg_contents": "On 3/26/10 4:57 PM, Richard Yen wrote:\n> Hi everyone,\n>\n> We've recently encountered some swapping issues on our CentOS 64GB Nehalem machine, running postgres 8.4.2. Unfortunately, I was foolish enough to set shared_buffers to 40GB. I was wondering if anyone would have any insight into why the swapping suddenly starts, but never recovers?\n>\n> <img src=\"http://richyen.com/i/swap.png\">\n>\n> Note, the machine has been up and running since mid-December 2009. It was only a March 8 that this swapping began, and it's never recovered.\n>\n> If we look at dstat, we find the following:\n>\n> <img src=\"http://richyen.com/i/dstat.png\">\n>\n> Note that it is constantly paging in, but never paging out.\n\nThis happens when you have too many processes using too much space to fit in real memory, but none of them are changing their memory image. If the system swaps a process in, but that process doesn't change anything in memory, then there are no dirty pages and the kernel can just kick the process out of memory without writing anything back to the swap disk -- the data in the swap are still valid.\n\nIt's a classic problem when processes are running round-robin. Say you have space for 100 processes, but you're running 101 process. When you get to the #101, #1 is the oldest so it swaps out. Then #1 runs, and #2 is the oldest, so it gets kicked out. Then #2 runs and kicks out #3 ... and so forth. Going from 100 to 101 process brings the system nearly to a halt.\n\nSome operating systems try to use tricks to keep this from happening, but it's a hard problem to solve.\n\nCraig\n",
"msg_date": "Fri, 26 Mar 2010 17:25:41 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why does swap not recover?"
},
{
"msg_contents": "\nOn Mar 26, 2010, at 5:25 PM, Scott Carey wrote:\n> Linux until recently does not account for shared memory properly in its swap 'aggressiveness' decisions.\n> Setting shared_buffers larger than 35% is asking for trouble.\n> \n> You could try adjusting the 'swappiness' setting on the fly and seeing how it reacts, but one consequence of that is trading off disk swapping for kswapd using up tons of CPU causing other trouble.\nThanks for the tip. I believe we've tried tuning the 'swappiness' setting on the fly, but it had no effect. We're hypothesizing that perhaps 'swappiness' only comes into effect at the beginning of a process, so we would have to restart the daemon to actually make it go into effect--would you know about this?\n\n> Either use one of the last few kernel versions (I forget which addressed the memory accounting issues, and haven't tried it myself), or turn shared_buffers down. I recommend trying 10GB or so to start.\n\nWe're currently using CentOS 2.6.18-164.6.1.el5 with all the default settings. If this is after the one that dealt with memory accounting issues, I agree that I'll likely have to lower my shared_buffers.\n\nMy sysctl.conf shows the following:\n> kernel.msgmnb = 65536\n> kernel.msgmax = 65536\n> kernel.shmmax = 68719476736\n> kernel.shmall = 4294967296\n\nBTW, I forgot to mention that I'm using FusionIO drives for my data storage, but I'm pretty sure this is not relevant to the issue I'm having.\n\nThanks for the help!\n--Richard",
"msg_date": "Sat, 27 Mar 2010 21:08:13 -0700",
"msg_from": "Richard Yen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: why does swap not recover?"
},
{
"msg_contents": "On 3/26/10 4:57 PM, Richard Yen wrote:\n> I'm planning on lowering the shared_buffers to a more sane value, like 25GB (pgtune recommends this for a Mixed-purpose machine) or less (pgtune recommends 14GB for an OLTP machine). However, before I do this (and possibly resolve the issue), I was hoping to see if anyone would have an explanation for the constant reading from swap, but never writing back.\n\nPostgres does not control how swap is used. This would be an operating\nsystem issue. Leaving aside the distict possibility of a bug in\nhandling swap (nobody seems to do it well), there's the distinct\npossibility that you're actually pinning more memory on the system than\nit has (through various processes) and it's wisely shifted some\nread-only files to the swap (as opposed to read-write ones). But that's\na fairly handwavy guess.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 29 Mar 2010 23:18:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why does swap not recover?"
},
{
"msg_contents": "On Fri, Mar 26, 2010 at 7:57 PM, Richard Yen <[email protected]> wrote:\n> Note that it is constantly paging in, but never paging out. This would indicate that it's constantly reading from swap, but never writing out to it. Why would postgres do this? (postgres is pretty much the only thing running on this machine).\n>\n> I'm planning on lowering the shared_buffers to a more sane value, like 25GB (pgtune recommends this for a Mixed-purpose machine) or less (pgtune recommends 14GB for an OLTP machine). However, before I do this (and possibly resolve the issue), I was hoping to see if anyone would have an explanation for the constant reading from swap, but never writing back.\n\nReading a page in from swap still leaves that data on the disk. So it\nmay be that you're reading in pages from disk, not modifying them,\ndiscarding them (without any need to write them out since they're\nstill on disk), and then reading them in again when they're accessed\nagain.\n\n...Robert\n",
"msg_date": "Tue, 30 Mar 2010 10:57:44 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why does swap not recover?"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nWe're using PostgreSQL 8.2. Recently, in our production database, there was\na severe performance impact.. Even though, we're regularly doing both:\n\n1. VACUUM FULL ANALYZE once in a week during low-usage time and\n\n2. ANALYZE everyday at low-usage time\n\n \n\nAlso, we noticed that the physical database size has grown upto 30 GB. But,\nif I dump the database in the form of SQL and import it locally in my\nmachine, it was only 3.2 GB. Then while searching in Google to optimize\ndatabase size, I found the following useful link:\n\n \n\nhttp://www.linuxinsight.com/optimize_postgresql_database_size.html\n\n \n\nIt says that even vacuumdb or reindexdb doesn't really compact database\nsize, only dump/restore does because of MVCC architecture feature in\nPostgreSQL and this has been proven here.\n\n \n\nSo, finally we decided to took our production database offline and performed\ndump/restore. After this, the physical database size has also reduced from\n30 GB to 3.5 GB and the performance was also very good than it was before.\n\n \n\nPhysical database size was found using the following command:\n\ndu -sh /usr/local/pgsql/data/base/<database-oid>\n\n \n\nI also cross-checked this size using\n\"pg_size_pretty(pg_database_size(datname))\".\n\n \n\nQuestions\n\n1. Is there any version/update of PostgreSQL addressing this issue?\n\n2. How in real time, this issues are handled by other PostgreSQL users\nwithout taking to downtime?\n\n3. Any ideas or links whether this is addressed in upcoming PostgreSQL\nversion 9.0 release?\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nWe're using PostgreSQL 8.2. Recently, in our\nproduction database, there was a severe performance impact.. Even though,\nwe're regularly doing both:\n1. \nVACUUM FULL ANALYZE once in a week during low-usage\ntime and\n2. \nANALYZE everyday at low-usage time\n \nAlso, we noticed that the physical database size has grown\nupto 30 GB. But, if I dump the database in the form of SQL and import it\nlocally in my machine, it was only 3.2 GB. Then while searching in Google\nto optimize database size, I found the following useful link:\n \nhttp://www.linuxinsight.com/optimize_postgresql_database_size.html\n \nIt says that even vacuumdb or reindexdb doesn't really\ncompact database size, only dump/restore does because of MVCC architecture feature\nin PostgreSQL and this has been proven here.\n \nSo, finally we decided to took our production database\noffline and performed dump/restore. After this, the physical database\nsize has also reduced from 30 GB to 3.5 GB and the performance was also\nvery good than it was before.\n \nPhysical database size was found using the following\ncommand:\ndu -sh /usr/local/pgsql/data/base/<database-oid>\n \nI also cross-checked this size using\n\"pg_size_pretty(pg_database_size(datname))\".\n \nQuestions\n1. \nIs there any version/update of PostgreSQL addressing\nthis issue?\n2. \nHow in real time, this issues are handled by other\nPostgreSQL users without taking to downtime?\n3. \nAny ideas or links whether this is addressed in\nupcoming PostgreSQL version 9.0 release?",
"msg_date": "Sat, 27 Mar 2010 18:30:09 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database size growing over time and leads to performance impact"
},
{
"msg_contents": "> 1. VACUUM FULL ANALYZE once in a week during low-usage time and\n\nVACUUM FULL compacts tables, but tends to bloat indexes. Running it weekly \nis NOT RECOMMENDED.\n\nA correctly configured autovacuum (or manual vacuum in some circumstances) \nshould maintain your DB healthy and you shouldn't need VACUUM FULL.\n\nIf you realize you got a bloat problem, for instance due to a \nmisconfigured vacuum, use CLUSTER, which re-generates table AND index \ndata, and besides, having your table clustered on an index of your choice \ncan boost performance quite a lot in some circumstances.\n\n8.2 is so old I don't remember if autovacuum is even included. Please try \nupgrading to the latest version...\n\nSince your database probably fits in RAM, CLUSTER will be pretty fast.\nYou can schedule it weekly, if you need clustering. If you don't, \nautovacuum will suffice.\nHint : add a \"SELECT count(*) FROM yourtable;\" before \"CLUSTER yourtable;\" \nso that the table is pulled in the OS disk cache, it'll make CLUSTER \nfaster.\n\n",
"msg_date": "Sat, 27 Mar 2010 14:35:15 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to\n performance impact"
},
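A rough sketch of the weekly CLUSTER routine described above, with illustrative names; note the syntax difference between versions (8.2 uses CLUSTER index ON table, 8.3 and later use CLUSTER table USING index):

    SELECT count(*) FROM yourtable;        -- pull the table into the OS cache first
    CLUSTER yourtable_pkey ON yourtable;   -- 8.2-era syntax
    ANALYZE yourtable;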
{
"msg_contents": "On 03/27/2010 08:00 AM, Gnanakumar wrote:\n> Hi,\n>\n> We're using PostgreSQL 8.2. Recently, in our production database, there\n> was a severe performance impact.. Even though, we're regularly doing both:\n>\n> 1. VACUUM FULL ANALYZE once in a week during low-usage time and\n>\n> 2. ANALYZE everyday at low-usage time\n>\n> Also, we noticed that the physical database size has grown upto 30 GB.\n> But, if I dump the database in the form of SQL and import it locally in\n> my machine, it was only 3.2 GB. Then while searching in Google to\n> optimize database size, I found the following useful link:\n>\n> http://www.linuxinsight.com/optimize_postgresql_database_size.html\n>\n> It says that even vacuumdb or reindexdb doesn't really compact database\n> size, only dump/restore does because of MVCC architecture feature in\n> PostgreSQL and this has been proven here.\n>\n> So, finally we decided to took our production database offline and\n> performed dump/restore. After this, the physical database size has also\n> reduced from 30 GB to 3.5 GB and the performance was also very good than\n> it was before.\n>\n> Physical database size was found using the following command:\n>\n> du -sh /usr/local/pgsql/data/base/<database-oid>\n>\n> I also cross-checked this size using\n> \"pg_size_pretty(pg_database_size(datname))\".\n>\n> Questions\n>\n> 1. Is there any version/update of PostgreSQL addressing this issue?\n>\n> 2. How in real time, this issues are handled by other PostgreSQL users\n> without taking to downtime?\n>\n> 3. Any ideas or links whether this is addressed in upcoming PostgreSQL\n> version 9.0 release?\n>\n\nThe \"issue\" is not with PG's. Any newer version of PG will act exactly the same. I don't think you understand. Vacuum is not meant to reduce size of the db, its meant to mark pages for reuse. VACUUM FULL is almost never needed. The fact it didnt reduce your db size is probably because of something else, like an open transaction. If you have a transaction left open, then your db will never be able to shrink or re-use pages. You'd better fix that issue first. (run ps -ax|grep postgres and look for \"idle in transaction\")\n\nYou need to vacuum way more often than once a week. Just VACUUM ANALYZE, two, three times a day. Or better yet, let autovacuum do its thing. (if you do have autovacuum enabled, then the only problem is the open transaction thing).\n\nDont \"VACUUM FULL\", its not helping you, and is being removed in newer versions.\n\n-Andy\n",
"msg_date": "Sat, 27 Mar 2010 08:35:42 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance\n impact"
},
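The same check can be run from inside the database instead of ps; a rough sketch for 8.2, where pg_stat_activity exposes the backend PID as procpid and the query text as current_query (assuming stats_command_string collection is enabled):

    SELECT procpid, datname, usename, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction';

Any long-lived row returned here points at a session holding a transaction open, which keeps VACUUM from reclaiming dead rows.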
{
"msg_contents": "Le 27/03/2010 14:00, Gnanakumar a �crit :\n> [...]\n> We're using PostgreSQL 8.2. Recently, in our production database, there was\n> a severe performance impact.. Even though, we're regularly doing both:\n> \n> 1. VACUUM FULL ANALYZE once in a week during low-usage time and\n> \n> 2. ANALYZE everyday at low-usage time\n> \n\nWhich means you can be sure you have bloated indexes.\n\n> Also, we noticed that the physical database size has grown upto 30 GB. But,\n> if I dump the database in the form of SQL and import it locally in my\n> machine, it was only 3.2 GB. Then while searching in Google to optimize\n> database size, I found the following useful link:\n> \n> http://www.linuxinsight.com/optimize_postgresql_database_size.html\n> \n> It says that even vacuumdb or reindexdb doesn't really compact database\n> size, only dump/restore does because of MVCC architecture feature in\n> PostgreSQL and this has been proven here.\n> \n\nVACUUM doesn't compact a database. VACUUM FULL does for tables. REINDEX\ndoes for index.\n\nAnd this is why, I think, you have an issue. You do VACUUM FULL each\nweek, but don't do a REINDEX.\n\n> So, finally we decided to took our production database offline and performed\n> dump/restore. After this, the physical database size has also reduced from\n> 30 GB to 3.5 GB and the performance was also very good than it was before.\n> \n\nNot surprising, indexes are recreated.\n\n> Physical database size was found using the following command:\n> \n> du -sh /usr/local/pgsql/data/base/<database-oid>\n> \n> I also cross-checked this size using\n> \"pg_size_pretty(pg_database_size(datname))\".\n> \n> Questions\n> \n> 1. Is there any version/update of PostgreSQL addressing this issue?\n> \n\nIf you still want to use VACUUM FULL, then you need to use REINDEX. But\nyou shouldn't need VACUUM FULL. Configure autovacuum so that your tables\ndon't get bloated.\n\n> 2. How in real time, this issues are handled by other PostgreSQL users\n> without taking to downtime?\n> \n\nUsing the autovacuum to VACUUM and ANALYZE when it's really needed.\n\n> 3. Any ideas or links whether this is addressed in upcoming PostgreSQL\n> version 9.0 release?\n> \n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n",
"msg_date": "Sat, 27 Mar 2010 15:41:23 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance\n impact"
},
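If VACUUM FULL is going to stay in the weekly routine, the REINDEX step can be scripted right after it; a short sketch, with mytable and mytable_pkey standing in for real names:

    -- rebuild every index on the table (blocks writes to it while it runs)
    REINDEX TABLE mytable;
    -- or rebuild just one index that has become bloated
    REINDEX INDEX mytable_pkey;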
{
"msg_contents": "Please don't cc two of the lists here. It makes things difficult for \nusers who only subscribe to one list or the other who reply--their post \nto the other list will be held for moderation. And that's a pain for \nthe moderators too. In this case, either the pgsql-admin or \npgsql-performance list would have been appropriate for this question, \nbut not both at the same time. The suggested approach when unsure is to \ntry the most obvious list, and if you don't get a response after a day \nor two then try a second one.\n\nGnanakumar wrote:\n>\n> We're using PostgreSQL 8.2. Recently, in our production database, \n> there was a severe performance impact.. Even though, we're regularly \n> doing both:\n>\n> 1. VACUUM FULL ANALYZE once in a week during low-usage time and\n>\n> 2. ANALYZE everyday at low-usage time\n> \n>\n> Also, we noticed that the physical database size has grown upto 30 \n> GB. But, if I dump the database in the form of SQL and import it \n> locally in my machine, it was only 3.2 GB.\n>\n\nMost VACUUM problems are caused by not running VACUUM often enough. A \nweekly VACUUM is really infrequent. And it's rarely ever a good idea to \nrun VACUUM FULL.\n\nYou should switch over to running a regular VACUUM, not a full one, on \nsomething closer to a daily or more frequent basis instead.\n\n> Then while searching in Google to optimize database size, I found the \n> following useful link: \n>\n> http://www.linuxinsight.com/optimize_postgresql_database_size.html\n>\n> It says that even vacuumdb or reindexdb doesn't really compact \n> database size, only dump/restore does because of MVCC architecture \n> feature in PostgreSQL and this has been proven here.\n>\n\nThat article covers PostgreSQL as of V7.4, and much of it is outdated \ninformation that doesn't apply to the 8.2 you're running. It's a pretty \nbad description even of that version. You should try to forget \neverything you read there and instead look at \nhttp://www.postgresql.org/docs/8.2/interactive/maintenance.html for an \naccurate introduction to this topic. I'm sorry you've been misled by it.\n\n> Physical database size was found using the following command:\n>\n> du -sh /usr/local/pgsql/data/base/<database-oid> \n>\n> I also cross-checked this size using \n> \"pg_size_pretty(pg_database_size(datname))\".\n>\n\nYou should use the queries shown at \nhttp://wiki.postgresql.org/wiki/Disk_Usage instead of this, which will \nbreak down where the disk space is going by table and index. You will \ndiscover one of two things:\n\n1) As the database grows, most of the disk space is being taken up by \nthe tables themselves. In this case, a more frequent VACUUM is likely \nto make that go away. You might also need to bump up one of the \nparameters in the postgresql.conf file, max_fsm_pages\n\n2) Lots of disk space is being taken up by indexes on the tables. If \nthis is the case, the fact that you're running VACUUM FULL all the time \nis the likely cause of your problem.\n\n\n> Questions\n>\n> 1. Is there any version/update of PostgreSQL addressing this issue?\n>\n> 2. How in real time, this issues are handled by other PostgreSQL \n> users without taking to downtime?\n>\n> 3. Any ideas or links whether this is addressed in upcoming \n> PostgreSQL version 9.0 release?\n>\n> \n>\n\nPostgreSQL 8.3 turns on a better tuned autovacuum by default so that \nit's more likely VACUUM will run often enough to keep the problem you're \nhaving from happening. 
8.4 removes an additional source of problems \nthat can cause VACUUM to stop working. As of 8.4, most of the problems \nin this area are gone in the default configuration. Just looking at \nnewer versions of the associated documentation will give you an idea \nwhat's changed; \nhttp://www.postgresql.org/docs/current/interactive/maintenance.html is \nthe 8.4 version. The problems with VACUUM FULL are so bad that as of \n9.0, the old implementation of that (the one you're probably getting bad \nbehavior from) has been replaced by a more efficient one.\n\nThe main situation newer PostgreSQL versions can still run into a \nproblem where the indexes get large if you're deleting records in some \nways; http://www.postgresql.org/docs/8.2/static/routine-reindex.html \ndescribes that issue, and that bit of documentation and the underlying \nbehavior is unchanged in later releases. It's much more likely that \nyou're running into the very common situation instead where you're \nrunning VACUUM FULL infrequently, where you should be running regular \nVACUUM frequently instead.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 29 Mar 2010 00:12:53 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Database size growing over time and leads to\n performance\n\timpact"
},
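A sketch of the kind of per-table breakdown the wiki page produces, using only size functions that already exist in 8.2; comparing the table-only size against the total including indexes shows quickly whether the growth is in the heap (case 1 above) or in the indexes (case 2):

    SELECT relname,
           pg_size_pretty(pg_relation_size(oid)) AS table_only,
           pg_size_pretty(pg_total_relation_size(oid)) AS with_indexes
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 20;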
{
"msg_contents": "Pierre C wrote:\n> If you realize you got a bloat problem, for instance due to a \n> misconfigured vacuum, use CLUSTER, which re-generates table AND index \n> data, and besides, having your table clustered on an index of your \n> choice can boost performance quite a lot in some circumstances.\n>\n> 8.2 is so old I don't remember if autovacuum is even included. Please \n> try upgrading to the latest version...\n\nIn 8.2, it's included, but not turned on by default. And it can only \nhave a single autovacuum worker, which limits its ability to keep up \nwith more difficult workloads.\n\nAs for CLUSTER, the implementation in 8.2 is limited compared to the 8.3 \none. If you look at \nhttp://www.postgresql.org/docs/8.2/static/sql-cluster.html you'll see a \nscary paragraph starting with \"CLUSTER loses all visibility information \nof tuples...\" that is missing from later versions, because that problem \nwas fixed in 8.3. I try to avoid using CLUSTER on 8.2 or earlier \nversions unless I can block all clients during the maintenance window \nit's running in.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 29 Mar 2010 00:21:34 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance\n impact"
},
{
"msg_contents": "We're using pgpool-II version 2.0.1 for PostgreSQL connection management.\n\npgpool configurations are:\nnum_init_children = 450\nchild_life_time = 300\nconnection_life_time = 120\nchild_max_connections = 30\n\nAs you recommended, I ran \"ps -ax|grep postgres\" at almost a busy\ntransaction time and I can find \"idle\" entries:\n[root@newuser ~]# ps -ax|grep postgres\n 2664 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43545) idle\n 2783 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43585) idle\n 2806 ? Ss 0:02 postgres: newuser mydb 192.168.0.200(43588) idle\n 2807 ? Ss 0:01 postgres: newuser mydb 192.168.0.200(43589) idle\n 2818 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43601) idle\n 2819 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43602) idle\n 2833 ? Ss 0:02 postgres: newuser mydb 192.168.0.200(43603) idle\n 2856 ? Ss 0:03 postgres: newuser mydb 192.168.0.200(43614) idle\n\nBased on pgpool documentation, and also as far as I know, even though\napplication layer returns/closes the application, pgpool will only handle\nactual closing of connections based on the connection_life_time parameter\ndefined. And if this timeout, it goes to \"wait for connection request\"\nstate.\n\nCan you throw some light on this? Is there any better way that we need to\nre-configure our pgpool parameters?\n\n-----Original Message-----\nFrom: Andy Colson [mailto:[email protected]] \nSent: Saturday, March 27, 2010 7:06 PM\nTo: Gnanakumar; [email protected]\nSubject: Re: [PERFORM] Database size growing over time and leads to\nperformance impact\n\nOn 03/27/2010 08:00 AM, Gnanakumar wrote:\n> Hi,\n>\n> We're using PostgreSQL 8.2. Recently, in our production database, there\n> was a severe performance impact.. Even though, we're regularly doing both:\n>\n> 1. VACUUM FULL ANALYZE once in a week during low-usage time and\n>\n> 2. ANALYZE everyday at low-usage time\n>\n> Also, we noticed that the physical database size has grown upto 30 GB.\n> But, if I dump the database in the form of SQL and import it locally in\n> my machine, it was only 3.2 GB. Then while searching in Google to\n> optimize database size, I found the following useful link:\n>\n> http://www.linuxinsight.com/optimize_postgresql_database_size.html\n>\n> It says that even vacuumdb or reindexdb doesn't really compact database\n> size, only dump/restore does because of MVCC architecture feature in\n> PostgreSQL and this has been proven here.\n>\n> So, finally we decided to took our production database offline and\n> performed dump/restore. After this, the physical database size has also\n> reduced from 30 GB to 3.5 GB and the performance was also very good than\n> it was before.\n>\n> Physical database size was found using the following command:\n>\n> du -sh /usr/local/pgsql/data/base/<database-oid>\n>\n> I also cross-checked this size using\n> \"pg_size_pretty(pg_database_size(datname))\".\n>\n> Questions\n>\n> 1. Is there any version/update of PostgreSQL addressing this issue?\n>\n> 2. How in real time, this issues are handled by other PostgreSQL users\n> without taking to downtime?\n>\n> 3. Any ideas or links whether this is addressed in upcoming PostgreSQL\n> version 9.0 release?\n>\n\nThe \"issue\" is not with PG's. Any newer version of PG will act exactly the\nsame. I don't think you understand. Vacuum is not meant to reduce size of\nthe db, its meant to mark pages for reuse. VACUUM FULL is almost never\nneeded. The fact it didnt reduce your db size is probably because of\nsomething else, like an open transaction. 
If you have a transaction left\nopen, then your db will never be able to shrink or re-use pages. You'd\nbetter fix that issue first. (run ps -ax|grep postgres and look for \"idle\nin transaction\")\n\nYou need to vacuum way more often than once a week. Just VACUUM ANALYZE,\ntwo, three times a day. Or better yet, let autovacuum do its thing. (if\nyou do have autovacuum enabled, then the only problem is the open\ntransaction thing).\n\nDont \"VACUUM FULL\", its not helping you, and is being removed in newer\nversions.\n\n-Andy\n\n",
"msg_date": "Tue, 30 Mar 2010 16:47:42 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database size growing over time and leads to performance impact"
},
{
"msg_contents": "On 3/30/2010 6:17 AM, Gnanakumar wrote:\n> We're using pgpool-II version 2.0.1 for PostgreSQL connection management.\n>\n> pgpool configurations are:\n> num_init_children = 450\n> child_life_time = 300\n> connection_life_time = 120\n> child_max_connections = 30\n>\n> As you recommended, I ran \"ps -ax|grep postgres\" at almost a busy\n> transaction time and I can find \"idle\" entries:\n> [root@newuser ~]# ps -ax|grep postgres\n> 2664 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43545) idle\n> 2783 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43585) idle\n> 2806 ? Ss 0:02 postgres: newuser mydb 192.168.0.200(43588) idle\n> 2807 ? Ss 0:01 postgres: newuser mydb 192.168.0.200(43589) idle\n> 2818 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43601) idle\n> 2819 ? Ss 0:00 postgres: newuser mydb 192.168.0.200(43602) idle\n> 2833 ? Ss 0:02 postgres: newuser mydb 192.168.0.200(43603) idle\n> 2856 ? Ss 0:03 postgres: newuser mydb 192.168.0.200(43614) idle\n>\n> Based on pgpool documentation, and also as far as I know, even though\n> application layer returns/closes the application, pgpool will only handle\n> actual closing of connections based on the connection_life_time parameter\n> defined. And if this timeout, it goes to \"wait for connection request\"\n> state.\n>\n> Can you throw some light on this? Is there any better way that we need to\n> re-configure our pgpool parameters?\n>\n\nConnections are ok. Connection is different than transaction. The \noutput above looks good, that's what you want to see. (If it had said \n\"idle in transaction\" that would be a problem). I dont think you need \nto change anything.\n\nHopefully just vacuuming more often will help.\n\n-Andy\n\n",
"msg_date": "Tue, 30 Mar 2010 08:50:14 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance\n impact"
},
{
"msg_contents": "\nOn Mar 27, 2010, at 6:35 AM, Andy Colson wrote:\n> \n> Dont \"VACUUM FULL\", its not helping you, and is being removed in newer versions.\n> \n\nOff topic: How is that going to work? CLUSTER doesn't work on tables without an index. I would love to be able to CLUSTER on some column set that doesn't necessarily have an index.\n\n> -Andy\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 31 Mar 2010 13:37:15 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to\n performance impact"
},
{
"msg_contents": "On Wed, Mar 31, 2010 at 4:37 PM, Scott Carey <[email protected]> wrote:\n> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:\n>>\n>> Dont \"VACUUM FULL\", its not helping you, and is being removed in newer versions.\n>>\n>\n> Off topic: How is that going to work? CLUSTER doesn't work on tables without an index. I would love to be able to CLUSTER on some column set that doesn't necessarily have an index.\n\nI believe the new VF implementation just rewrites the data in the same\nphysical order as it was in previously, but without the dead space.\nSo it's sort of like cluster-by-no-index-at-all.\n\n...Robert\n",
"msg_date": "Wed, 31 Mar 2010 16:47:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance impact"
},
{
"msg_contents": "Scott Carey wrote:\n> \n> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:\n> > \n> > Dont \"VACUUM FULL\", its not helping you, and is being removed in newer versions.\n> > \n> \n> Off topic: How is that going to work? CLUSTER doesn't work on tables\n> without an index. I would love to be able to CLUSTER on some column\n> set that doesn't necessarily have an index.\n\nVACUUM FULL has been rewritten in 9.0 so that it uses the CLUSTER logic,\nexcept that it doesn't require an index.\n\nIf you want to do it in earlier versions, you can use a no-op SET TYPE\ncommand, like so:\n\nALTER TABLE foo ALTER COLUMN bar SET TYPE baz;\n\nassuming that table foo has a column bar which is already of type baz.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 31 Mar 2010 17:47:57 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to\n performance impact"
},
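To make the no-op rewrite concrete, a hedged sketch against an assumed table foo with an integer column bar; re-declaring the column as the type it already has forces a full rewrite of the table and its indexes on these older releases (note that the keyword spelling accepted everywhere is plain TYPE, or SET DATA TYPE from 8.4 on):

    SELECT pg_size_pretty(pg_total_relation_size('foo'));   -- size before
    ALTER TABLE foo ALTER COLUMN bar TYPE integer;           -- rewrites table and indexes
    SELECT pg_size_pretty(pg_total_relation_size('foo'));   -- size after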
{
"msg_contents": "\nOn Mar 31, 2010, at 1:47 PM, Robert Haas wrote:\n\n> On Wed, Mar 31, 2010 at 4:37 PM, Scott Carey <[email protected]> wrote:\n>> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:\n>>> \n>>> Dont \"VACUUM FULL\", its not helping you, and is being removed in newer versions.\n>>> \n>> \n>> Off topic: How is that going to work? CLUSTER doesn't work on tables without an index. I would love to be able to CLUSTER on some column set that doesn't necessarily have an index.\n> \n> I believe the new VF implementation just rewrites the data in the same\n> physical order as it was in previously, but without the dead space.\n> So it's sort of like cluster-by-no-index-at-all.\n> \n\nStill off topic:\n\nWill CLUSTER/VF respect FILLFACTOR in 9.0?\n\nAs far as I can tell in 8.4, it does not. CLUSTER on a table with FILLFACTOR=100, then alter the table to FILLFACTOR=90, cluster again -- the file size reported by \\dt+ is the same. This is a fairly big performance issue since it means that HOT doesn't function well on a table just CLUSTERed.\n\n> ...Robert\n\n",
"msg_date": "Thu, 1 Apr 2010 13:08:46 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance impact"
},
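A sketch of the check being described, with a made-up table t that has already been clustered once on some index (plain CLUSTER t reuses that index); if CLUSTER honours the fillfactor, the second size should come out roughly twice the first:

    ALTER TABLE t SET (fillfactor = 100);
    CLUSTER t;
    SELECT pg_size_pretty(pg_relation_size('t'));

    ALTER TABLE t SET (fillfactor = 45);
    CLUSTER t;
    SELECT pg_size_pretty(pg_relation_size('t'));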
{
"msg_contents": "Scott Carey <[email protected]> writes:\n> Still off topic:\n\n> Will CLUSTER/VF respect FILLFACTOR in 9.0?\n\n> As far as I can tell in 8.4, it does not.\n\nWorks for me, in both branches.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Apr 2010 16:42:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance impact "
},
{
"msg_contents": "\nOn Apr 1, 2010, at 1:42 PM, Tom Lane wrote:\n\n> Scott Carey <[email protected]> writes:\n>> Still off topic:\n> \n>> Will CLUSTER/VF respect FILLFACTOR in 9.0?\n> \n>> As far as I can tell in 8.4, it does not.\n> \n> Works for me, in both branches.\n> \n\nI stand corrected. I must have done something wrong in my test. On a different system I tried FILLFACTOR=45 and FILLFACTOR=90 and the resulting size was nearly a factor of two different.\n\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 1 Apr 2010 14:00:24 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database size growing over time and leads to performance impact "
}
] |
[
{
"msg_contents": "You may want to consider performing more frequent vacuums a week or really considering leveraging autovacuum if it makes sense to your transactions volume. \n\nRegards,\n Husam \n\n-----Original Message-----\nFrom: Gnanakumar <[email protected]>\nSent: Saturday, March 27, 2010 6:06 AM\nTo: [email protected] <[email protected]>; [email protected] <[email protected]>\nSubject: [ADMIN] Database size growing over time and leads to performance impact\n\nHi,\n\n \n\nWe're using PostgreSQL 8.2. Recently, in our production database, there was\na severe performance impact.. Even though, we're regularly doing both:\n\n1. VACUUM FULL ANALYZE once in a week during low-usage time and\n\n2. ANALYZE everyday at low-usage time\n\n \n\nAlso, we noticed that the physical database size has grown upto 30 GB. But,\nif I dump the database in the form of SQL and import it locally in my\nmachine, it was only 3.2 GB. Then while searching in Google to optimize\ndatabase size, I found the following useful link:\n\n \n\nhttp://www.linuxinsight.com/optimize_postgresql_database_size.html\n\n \n\nIt says that even vacuumdb or reindexdb doesn't really compact database\nsize, only dump/restore does because of MVCC architecture feature in\nPostgreSQL and this has been proven here.\n\n \n\nSo, finally we decided to took our production database offline and performed\ndump/restore. After this, the physical database size has also reduced from\n30 GB to 3.5 GB and the performance was also very good than it was before.\n\n \n\nPhysical database size was found using the following command:\n\ndu -sh /usr/local/pgsql/data/base/<database-oid>\n\n \n\nI also cross-checked this size using\n\"pg_size_pretty(pg_database_size(datname))\".\n\n \n\nQuestions\n\n1. Is there any version/update of PostgreSQL addressing this issue?\n\n2. How in real time, this issues are handled by other PostgreSQL users\nwithout taking to downtime?\n\n3. Any ideas or links whether this is addressed in upcoming PostgreSQL\nversion 9.0 release?\n\n \n\n****************************************************************************************** \nThis message may contain confidential or proprietary information intended only for the use of the \naddressee(s) named above or may contain information that is legally privileged. If you are \nnot the intended addressee, or the person responsible for delivering it to the intended addressee, \nyou are hereby notified that reading, disseminating, distributing or copying this message is strictly \nprohibited. If you have received this message by mistake, please immediately notify us by \nreplying to the message and delete the original message and any copies immediately thereafter. \n\nThank you. \n****************************************************************************************** \nFACLD\n\n",
"msg_date": "Sat, 27 Mar 2010 06:47:53 -0700",
"msg_from": "\"Tomeh, Husam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database size growing over time and leads to performance impact"
}
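On 8.2 specifically, autovacuum is not on by default and it depends on row-level statistics collection; a quick, read-only way to see how the server is currently set up (a sketch; the thresholds themselves are adjusted in postgresql.conf afterwards if needed):

    SELECT name, setting
    FROM pg_settings
    WHERE name LIKE 'autovacuum%'
       OR name IN ('stats_start_collector', 'stats_row_level');

For autovacuum to do anything on 8.2, autovacuum, stats_start_collector and stats_row_level all need to be on.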
] |
[
{
"msg_contents": "Hi,\n\nI am using pgbench for running tests on PostgreSQL.\n\nI have a few questions;\n1) For calculating time to get the TPS, is pgbench using the wall \nclock time or cpu time?\n2)How is TPS calculated?\n\nThanks in advance,\nReydan\n",
"msg_date": "Sat, 27 Mar 2010 22:05:59 +0200",
"msg_from": "Reydan Cankur <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pgbench TPS Calculation"
},
{
"msg_contents": "Reydan Cankur wrote:\n> 1) For calculating time to get the TPS, is pgbench using the wall \n> clock time or cpu time?\n> 2) How is TPS calculated?\n\nWall clock time.\n\nTPS=transactions processed / (end time - start time)\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 29 Mar 2010 00:22:18 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pgbench TPS Calculation"
}
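As a worked example with made-up numbers: if a run processes 90,000 transactions and the wall-clock interval between the start and the end of the run is 60 seconds, the reported figure is 90000 / 60 = 1500 tps; CPU time consumed by the client or the server never enters the calculation. pgbench prints the number twice, once including and once excluding the time spent establishing connections.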
] |
[
{
"msg_contents": "Hi All,\n\nExample on optimizer\n===============\npostgres=# create table test(id int);\nCREATE TABLE\npostgres=# insert into test VALUES (1);\nINSERT 0 1\npostgres=# select * from test;\n id\n----\n 1\n(1 row)\npostgres=# explain select * from test;\n QUERY PLAN\n--------------------------------------------------------\n Seq Scan on test (cost=0.00..34.00 *rows=2400* width=4)\n(1 row)\nIn the above, example the optimizer is retreiving those many rows where\nthere is only one row in that table. If i analyze am geting one row.\n\npostgres=# ANALYZE test;\nANALYZE\npostgres=# explain select * from test;\n QUERY PLAN\n----------------------------------------------------\n Seq Scan on test (cost=0.00..1.01 *rows=1* width=4)\n(1 row)\n\nMy question here is, what it retreiving as rows when there is no such. One\nmore thing, if i wont do analyze and run the explain plan for three or more\ntimes, then catalogs getting updated automatically and resulting the correct\nrow as 1.\n\nQ2. Does explain , will update the catalogs automatically.\n\nRegards\nRaghavendra\n\nHi All,\n \nExample on optimizer\n===============\npostgres=# create table test(id int);CREATE TABLEpostgres=# insert into test VALUES (1);INSERT 0 1postgres=# select * from test; id---- 1(1 row)\npostgres=# explain select * from test; QUERY PLAN-------------------------------------------------------- Seq Scan on test (cost=0.00..34.00 rows=2400 width=4)\n(1 row)\nIn the above, example the optimizer is retreiving those many rows where there is only one row in that table. If i analyze am geting one row.\n \npostgres=# ANALYZE test;ANALYZEpostgres=# explain select * from test; QUERY PLAN---------------------------------------------------- Seq Scan on test (cost=0.00..1.01 rows=1 width=4)\n(1 row)\n \nMy question here is, what it retreiving as rows when there is no such. One more thing, if i wont do analyze and run the explain plan for three or more times, then catalogs getting updated automatically and resulting the correct row as 1. \n \nQ2. Does explain , will update the catalogs automatically.\n \nRegards\nRaghavendra",
"msg_date": "Sun, 28 Mar 2010 12:21:04 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer showing wrong rows in plan"
},
{
"msg_contents": "2010/3/28 Tadipathri Raghu <[email protected]>\n\n> Hi All,\n>\n> Example on optimizer\n> ===============\n> postgres=# create table test(id int);\n> CREATE TABLE\n> postgres=# insert into test VALUES (1);\n> INSERT 0 1\n> postgres=# select * from test;\n> id\n> ----\n> 1\n> (1 row)\n> postgres=# explain select * from test;\n> QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on test (cost=0.00..34.00 *rows=2400* width=4)\n> (1 row)\n> In the above, example the optimizer is retreiving those many rows where\n> there is only one row in that table. If i analyze am geting one row.\n>\n\nNo, the optimizer is not retrieving anything, it just assumes that there are\n2400 rows because that is the number of rows that exists in the statictics\nfor this table. The optimizer just tries to find the best plan and to\noptimize the query plan for execution taking into consideration all\ninformation that can be found for this table (it also looks in the\nstatistics information about rows from this table).\n\n\n>\n> postgres=# ANALYZE test;\n> ANALYZE\n> postgres=# explain select * from test;\n> QUERY PLAN\n> ----------------------------------------------------\n> Seq Scan on test (cost=0.00..1.01 *rows=1* width=4)\n> (1 row)\n>\n> My question here is, what it retreiving as rows when there is no such. One\n> more thing, if i wont do analyze and run the explain plan for three or more\n> times, then catalogs getting updated automatically and resulting the correct\n> row as 1.\n>\n>\n\nNow ANALYZE changed the statistics for this table and now the planner knows\nthat there is just one row. In the background there can work autovacuum so\nit changes rows automatically (the autovacuum work characteristic depends on\nthe settings for the database).\n\n\n> Q2. Does explain , will update the catalogs automatically.\n>\n>\n\nNo, explain doesn't update table's statistics.\n\n\nregards\nSzymon Guz\n\n2010/3/28 Tadipathri Raghu <[email protected]>\nHi All,\n \nExample on optimizer\n===============\npostgres=# create table test(id int);CREATE TABLEpostgres=# insert into test VALUES (1);INSERT 0 1postgres=# select * from test; id---- 1(1 row)\npostgres=# explain select * from test; QUERY PLAN-------------------------------------------------------- Seq Scan on test (cost=0.00..34.00 rows=2400 width=4)\n\n(1 row)\nIn the above, example the optimizer is retreiving those many rows where there is only one row in that table. If i analyze am geting one row.No, the optimizer is not retrieving anything, it just assumes that there are 2400 rows because that is the number of rows that exists in the statictics for this table. The optimizer just tries to find the best plan and to optimize the query plan for execution taking into consideration all information that can be found for this table (it also looks in the statistics information about rows from this table).\n \n \npostgres=# ANALYZE test;ANALYZEpostgres=# explain select * from test; QUERY PLAN---------------------------------------------------- Seq Scan on test (cost=0.00..1.01 rows=1 width=4)\n\n(1 row)\n \nMy question here is, what it retreiving as rows when there is no such. One more thing, if i wont do analyze and run the explain plan for three or more times, then catalogs getting updated automatically and resulting the correct row as 1. \n Now ANALYZE changed the statistics for this table and now the planner knows that there is just one row. 
In the background there can work autovacuum so it changes rows automatically (the autovacuum work characteristic depends on the settings for the database).\n \nQ2. Does explain , will update the catalogs automatically.\n No, explain doesn't update table's statistics.regardsSzymon Guz",
"msg_date": "Sun, 28 Mar 2010 09:02:14 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "Hi Guz,\n\nThank you for the prompt reply.\n\n\n> No, the optimizer is not retrieving anything, it just assumes that there\n> are 2400 rows because that is the number of rows that exists in the\n> statictics for this table. The optimizer just tries to find the best plan\n> and to optimize the query plan for execution taking into consideration all\n> information that can be found for this table (it also looks in the\n> statistics information about rows from this table).\n\n\nSo, whats it assuming here as rows(2400). Could you explain this.\n\nRegards\nRaghavendra\nOn Sun, Mar 28, 2010 at 12:32 PM, Szymon Guz <[email protected]> wrote:\n\n> 2010/3/28 Tadipathri Raghu <[email protected]>\n>\n> Hi All,\n>>\n>> Example on optimizer\n>> ===============\n>> postgres=# create table test(id int);\n>> CREATE TABLE\n>> postgres=# insert into test VALUES (1);\n>> INSERT 0 1\n>> postgres=# select * from test;\n>> id\n>> ----\n>> 1\n>> (1 row)\n>> postgres=# explain select * from test;\n>> QUERY PLAN\n>> --------------------------------------------------------\n>> Seq Scan on test (cost=0.00..34.00 *rows=2400* width=4)\n>> (1 row)\n>> In the above, example the optimizer is retreiving those many rows where\n>> there is only one row in that table. If i analyze am geting one row.\n>>\n>\n> No, the optimizer is not retrieving anything, it just assumes that there\n> are 2400 rows because that is the number of rows that exists in the\n> statictics for this table. The optimizer just tries to find the best plan\n> and to optimize the query plan for execution taking into consideration all\n> information that can be found for this table (it also looks in the\n> statistics information about rows from this table).\n>\n>\n>>\n>> postgres=# ANALYZE test;\n>> ANALYZE\n>> postgres=# explain select * from test;\n>> QUERY PLAN\n>> ----------------------------------------------------\n>> Seq Scan on test (cost=0.00..1.01 *rows=1* width=4)\n>> (1 row)\n>>\n>> My question here is, what it retreiving as rows when there is no such. One\n>> more thing, if i wont do analyze and run the explain plan for three or more\n>> times, then catalogs getting updated automatically and resulting the correct\n>> row as 1.\n>>\n>>\n>\n> Now ANALYZE changed the statistics for this table and now the planner knows\n> that there is just one row. In the background there can work autovacuum so\n> it changes rows automatically (the autovacuum work characteristic depends on\n> the settings for the database).\n>\n>\n>> Q2. Does explain , will update the catalogs automatically.\n>>\n>>\n>\n> No, explain doesn't update table's statistics.\n>\n>\n> regards\n> Szymon Guz\n>\n\nHi Guz,\n \nThank you for the prompt reply.\n \nNo, the optimizer is not retrieving anything, it just assumes that there are 2400 rows because that is the number of rows that exists in the statictics for this table. The optimizer just tries to find the best plan and to optimize the query plan for execution taking into consideration all information that can be found for this table (it also looks in the statistics information about rows from this table).\n \nSo, whats it assuming here as rows(2400). 
Could you explain this.\n \nRegards\nRaghavendra\nOn Sun, Mar 28, 2010 at 12:32 PM, Szymon Guz <[email protected]> wrote:\n\n2010/3/28 Tadipathri Raghu <[email protected]>\n\n\nHi All,\n \nExample on optimizer\n===============\npostgres=# create table test(id int);CREATE TABLEpostgres=# insert into test VALUES (1);INSERT 0 1postgres=# select * from test; id---- 1(1 row)\npostgres=# explain select * from test; QUERY PLAN-------------------------------------------------------- Seq Scan on test (cost=0.00..34.00 rows=2400 width=4)\n(1 row)\nIn the above, example the optimizer is retreiving those many rows where there is only one row in that table. If i analyze am geting one row.\n\nNo, the optimizer is not retrieving anything, it just assumes that there are 2400 rows because that is the number of rows that exists in the statictics for this table. The optimizer just tries to find the best plan and to optimize the query plan for execution taking into consideration all information that can be found for this table (it also looks in the statistics information about rows from this table).\n\n \n\n \npostgres=# ANALYZE test;ANALYZEpostgres=# explain select * from test; QUERY PLAN---------------------------------------------------- Seq Scan on test (cost=0.00..1.01 rows=1 width=4)\n(1 row)\n \nMy question here is, what it retreiving as rows when there is no such. One more thing, if i wont do analyze and run the explain plan for three or more times, then catalogs getting updated automatically and resulting the correct row as 1. \n \n\nNow ANALYZE changed the statistics for this table and now the planner knows that there is just one row. In the background there can work autovacuum so it changes rows automatically (the autovacuum work characteristic depends on the settings for the database).\n\n \n\nQ2. Does explain , will update the catalogs automatically.\n \n\nNo, explain doesn't update table's statistics.\n\n\nregards\nSzymon Guz",
"msg_date": "Sun, 28 Mar 2010 12:41:07 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "2010/3/28 Tadipathri Raghu <[email protected]>\n\n> Hi Guz,\n>\n> Thank you for the prompt reply.\n>\n>\n>> No, the optimizer is not retrieving anything, it just assumes that there\n>> are 2400 rows because that is the number of rows that exists in the\n>> statictics for this table. The optimizer just tries to find the best plan\n>> and to optimize the query plan for execution taking into consideration all\n>> information that can be found for this table (it also looks in the\n>> statistics information about rows from this table).\n>\n>\n> So, whats it assuming here as rows(2400). Could you explain this.\n>\n>\n>\nIt is assuming that there are 2400 rows in this table. Probably you've\ndeleted some rows from the table leaving just one.\n\nregards\nSzymon Guz\n\n2010/3/28 Tadipathri Raghu <[email protected]>\nHi Guz,\n \nThank you for the prompt reply.\n \nNo, the optimizer is not retrieving anything, it just assumes that there are 2400 rows because that is the number of rows that exists in the statictics for this table. The optimizer just tries to find the best plan and to optimize the query plan for execution taking into consideration all information that can be found for this table (it also looks in the statistics information about rows from this table).\n \nSo, whats it assuming here as rows(2400). Could you explain this.\n \nIt is assuming that there are 2400 rows in this table. Probably you've deleted some rows from the table leaving just one.regardsSzymon Guz",
"msg_date": "Sun, 28 Mar 2010 09:29:46 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "Hi Guz,\n\n\n> It is assuming that there are 2400 rows in this table. Probably you've\n> deleted some rows from the table leaving just one.\n\n\nFrankly speaking its a newly created table without any operation on it as\nyou have seen the example. Then how come it showing those many rows where we\nhave only one in it.\nThanks if we have proper explination on this..\n\nRegards\nRaghavendra\n\n\nOn Sun, Mar 28, 2010 at 12:59 PM, Szymon Guz <[email protected]> wrote:\n\n>\n>\n> 2010/3/28 Tadipathri Raghu <[email protected]>\n>\n>> Hi Guz,\n>>\n>> Thank you for the prompt reply.\n>>\n>>\n>>> No, the optimizer is not retrieving anything, it just assumes that there\n>>> are 2400 rows because that is the number of rows that exists in the\n>>> statictics for this table. The optimizer just tries to find the best plan\n>>> and to optimize the query plan for execution taking into consideration all\n>>> information that can be found for this table (it also looks in the\n>>> statistics information about rows from this table).\n>>\n>>\n>> So, whats it assuming here as rows(2400). Could you explain this.\n>>\n>>\n>>\n> It is assuming that there are 2400 rows in this table. Probably you've\n> deleted some rows from the table leaving just one.\n>\n> regards\n> Szymon Guz\n>\n>\n\nHi Guz,\n \nIt is assuming that there are 2400 rows in this table. Probably you've deleted some rows from the table leaving just one.\n \nFrankly speaking its a newly created table without any operation on it as you have seen the example. Then how come it showing those many rows where we have only one in it.\nThanks if we have proper explination on this..\n \nRegards\nRaghavendra\n \nOn Sun, Mar 28, 2010 at 12:59 PM, Szymon Guz <[email protected]> wrote:\n\n\n2010/3/28 Tadipathri Raghu <[email protected]>\n\nHi Guz,\n\n \nThank you for the prompt reply.\n\n \nNo, the optimizer is not retrieving anything, it just assumes that there are 2400 rows because that is the number of rows that exists in the statictics for this table. The optimizer just tries to find the best plan and to optimize the query plan for execution taking into consideration all information that can be found for this table (it also looks in the statistics information about rows from this table).\n \nSo, whats it assuming here as rows(2400). Could you explain this.\n \n\n\nIt is assuming that there are 2400 rows in this table. Probably you've deleted some rows from the table leaving just one.\n\nregardsSzymon Guz",
"msg_date": "Sun, 28 Mar 2010 13:03:48 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "Hi All,\n\nI want to give some more light on this by analysing more like this\n\n1. In my example I have created a table with one column as INT( which\noccupies 4 bytes)\n2. Initially it occupies one page of space on the file that is (8kb).\n\nSo, here is it assuming these many rows may fit in this page. Clarify me on\nthis Please.\n\nRegards\nRaghavendra\n\n\nOn Sun, Mar 28, 2010 at 2:06 PM, Gary Doades <[email protected]> wrote:\n\n> On 28/03/2010 8:33 AM, Tadipathri Raghu wrote:\n>\n> Hi Guz,\n>\n>\n>> It is assuming that there are 2400 rows in this table. Probably you've\n>> deleted some rows from the table leaving just one.\n>\n>\n> Frankly speaking its a newly created table without any operation on it as\n> you have seen the example. Then how come it showing those many rows where we\n> have only one in it.\n> Thanks if we have proper explination on this..\n>\n> It's not *showing* any rows at all, it's *guessing* 2400 rows because\n> you've never analyzed the table. Without any statistics at all, postgres\n> will use some form of in-built guess for a table that produces reasonable\n> plans under average conditions. As you've already seen, once you analyze the\n> table, the guess get's much better and therefore would give you a more\n> appropriate plan.\n>\n> Regards,\n> Gary.\n>\n>\n\nHi All,\n \nI want to give some more light on this by analysing more like this\n \n1. In my example I have created a table with one column as INT( which occupies 4 bytes)\n2. Initially it occupies one page of space on the file that is (8kb).\n \nSo, here is it assuming these many rows may fit in this page. Clarify me on this Please.\n \nRegards\nRaghavendra\n \nOn Sun, Mar 28, 2010 at 2:06 PM, Gary Doades <[email protected]> wrote:\n\n\nOn 28/03/2010 8:33 AM, Tadipathri Raghu wrote: \n\nHi Guz,\n \nIt is assuming that there are 2400 rows in this table. Probably you've deleted some rows from the table leaving just one.\n \nFrankly speaking its a newly created table without any operation on it as you have seen the example. Then how come it showing those many rows where we have only one in it.\nThanks if we have proper explination on this..It's not *showing* any rows at all, it's *guessing* 2400 rows because you've never analyzed the table. Without any statistics at all, postgres will use some form of in-built guess for a table that produces reasonable plans under average conditions. As you've already seen, once you analyze the table, the guess get's much better and therefore would give you a more appropriate plan.\nRegards,Gary.",
"msg_date": "Sun, 28 Mar 2010 14:37:13 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "Op 28 mrt 2010, om 11:07 heeft Tadipathri Raghu het volgende geschreven:\n\n> Hi All,\n>\n> I want to give some more light on this by analysing more like this\n>\n> 1. In my example I have created a table with one column as \n> INT( which occupies 4 bytes)\n> 2. Initially it occupies one page of space on the file that is (8kb).\n>\n> So, here is it assuming these many rows may fit in this page. \n> Clarify me on this Please.\n\nSee these chapters in the manual: http://www.postgresql.org/docs/8.4/interactive/storage.html\n\nThe minimum size of a file depends on the block size, by default 8kb: http://www.postgresql.org/docs/8.4/interactive/install-procedure.html\n\nRegards,\nFrank\n\n\n>\n> Regards\n> Raghavendra\n>\n>\n> On Sun, Mar 28, 2010 at 2:06 PM, Gary Doades <[email protected]> wrote:\n> On 28/03/2010 8:33 AM, Tadipathri Raghu wrote:\n>>\n>> Hi Guz,\n>>\n>> It is assuming that there are 2400 rows in this table. Probably \n>> you've deleted some rows from the table leaving just one.\n>>\n>> Frankly speaking its a newly created table without any operation on \n>> it as you have seen the example. Then how come it showing those \n>> many rows where we have only one in it.\n>> Thanks if we have proper explination on this..\n> It's not *showing* any rows at all, it's *guessing* 2400 rows \n> because you've never analyzed the table. Without any statistics at \n> all, postgres will use some form of in-built guess for a table that \n> produces reasonable plans under average conditions. As you've \n> already seen, once you analyze the table, the guess get's much \n> better and therefore would give you a more appropriate plan.\n>\n> Regards,\n> Gary.\n>\n\n\n\n\n\n\nOp 28 mrt 2010, om 11:07 heeft Tadipathri Raghu het volgende geschreven:Hi All, I want to give some more light on this by analysing more like this 1. In my example I have created a table with one column as INT( which occupies 4 bytes) 2. Initially it occupies one page of space on the file that is (8kb). So, here is it assuming these many rows may fit in this page. Clarify me on this Please. See these chapters in the manual: http://www.postgresql.org/docs/8.4/interactive/storage.htmlThe minimum size of a file depends on the block size, by default 8kb: http://www.postgresql.org/docs/8.4/interactive/install-procedure.htmlRegards,Frank Regards Raghavendra On Sun, Mar 28, 2010 at 2:06 PM, Gary Doades <[email protected]> wrote: On 28/03/2010 8:33 AM, Tadipathri Raghu wrote: Hi Guz, It is assuming that there are 2400 rows in this table. Probably you've deleted some rows from the table leaving just one. Frankly speaking its a newly created table without any operation on it as you have seen the example. Then how come it showing those many rows where we have only one in it. Thanks if we have proper explination on this..It's not *showing* any rows at all, it's *guessing* 2400 rows because you've never analyzed the table. Without any statistics at all, postgres will use some form of in-built guess for a table that produces reasonable plans under average conditions. As you've already seen, once you analyze the table, the guess get's much better and therefore would give you a more appropriate plan. Regards,Gary.",
"msg_date": "Sun, 28 Mar 2010 11:18:27 +0200",
"msg_from": "Frank Heikens <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "On 28/03/2010 10:07 AM, Tadipathri Raghu wrote:\n> Hi All,\n> I want to give some more light on this by analysing more like this\n> 1. In my example I have created a table with one column as INT( which \n> occupies 4 bytes)\n> 2. Initially it occupies one page of space on the file that is (8kb).\n> So, here is it assuming these many rows may fit in this page. Clarify \n> me on this Please.\n\nLike I said, it's just a guess. With no statistics all postgres can do \nis guess, or in this case use the in-built default for a newly created \ntable. It could guess 1 or it could guess 10,000,000. What it does is \nproduce a reasonable guess in the absence of any other information.\n\nYou should read the postgres documentation for further information about \nstatistics and how the optimizer uses them.\n\n\nRegards,\nGary.\n\n\n",
"msg_date": "Sun, 28 Mar 2010 10:27:22 +0100",
"msg_from": "Gary Doades <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "Tadipathri Raghu <[email protected]> writes:\n> Frankly speaking its a newly created table without any operation on it as\n> you have seen the example. Then how come it showing those many rows where we\n> have only one in it.\n\nYes. This is intentional: the size estimates for a never-yet-analyzed\ntable are *not* zero. This is because people frequently create and load\nup a table and then immediately query it without an explicit ANALYZE.\nThe quality of the plans you'd get at that point (especially for joins)\nwould be spectacularly bad if the default assumption were that the table\nwas very small.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Mar 2010 12:27:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan "
},
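For what it's worth, the 2400 in the earlier plan is consistent with how that default is built (this is from memory of the planner code, so treat the details as approximate): a table whose relpages is still zero is assumed to occupy about 10 pages, and the row count is then derived from the estimated tuple width; for a single int4 column that density comes out to roughly 240 rows per 8 kB page, hence 10 * 240 = 2400. The quoted cost fits the same numbers: 34.00 = 10 pages * seq_page_cost (1.0) + 2400 rows * cpu_tuple_cost (0.01).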
{
"msg_contents": "On 03/28/2010 05:27 PM, Tom Lane wrote:\n> This is intentional: the size estimates for a never-yet-analyzed\n> table are *not* zero. This is because people frequently create and load\n> up a table and then immediately query it without an explicit ANALYZE.\n\nDoes the creation of an index also populate statistics?\n\nThanks,\n Jeremy\n",
"msg_date": "Sun, 28 Mar 2010 18:08:45 +0100",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "Jeremy Harris <[email protected]> writes:\n> On 03/28/2010 05:27 PM, Tom Lane wrote:\n>> This is intentional: the size estimates for a never-yet-analyzed\n>> table are *not* zero. This is because people frequently create and load\n>> up a table and then immediately query it without an explicit ANALYZE.\n\n> Does the creation of an index also populate statistics?\n\nIIRC, it will set the relpages/reltuples counts (though not any\nmore-complex statistics); but only if the table is found to not be\ncompletely empty. Again, this is a behavior designed with common\nusage patterns in mind, to not set relpages/reltuples to zero on a\ntable that's likely to get populated shortly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Mar 2010 13:37:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan "
},
{
"msg_contents": "Hi Tom,\n\nThank for the update.\n\n\n> IIRC, it will set the relpages/reltuples counts (though not any\n> more-complex statistics); but only if the table is found to not be\n> completely empty. Again, this is a behavior designed with common\n> usage patterns in mind, to not set relpages/reltuples to zero on a\n> table that's likely to get populated shortly.\n>\nAs Harris, asked about creation of index will update the statistics. Yes\nindexes are updating the statistics, so indexes will analyze the table on\nthe backend and update the statistics too, before it creating the index or\nafter creating the index.\n\nExample\n======\npostgres=# create table test(id int);\nCREATE TABLE\npostgres=# insert into test VALUES (1);\nINSERT 0 1\npostgres=# select relname,reltuples,relpages from pg_class where\nrelname='test';\n relname | reltuples | relpages\n---------+-----------+----------\n test | 0 | 0\n(1 row)\npostgres=# create INDEX itest on test (id);\nCREATE INDEX\npostgres=# select relname,reltuples,relpages from pg_class where\nrelname='test';\n relname | reltuples | relpages\n---------+-----------+----------\n test | 1 | 1\n(1 row)\n\nAdding one more thing to this thread\n==========================\nAs per the documentation, one page is 8kb, when i create a table with int as\none column its 4 bytes. If i insert 2000 rows, it should be in one page only\nas its 8kb, but its extending vastly as expected. Example shown below,\ntaking the previous example table test with one column.\n\npostgres=# insert into test VALUES (generate_series(2,2000));\nINSERT 0 1999\npostgres=# \\dt+\n List of relations\n Schema | Name | Type | Owner | Size | Description\n----------+------+-------+----------+-------+-------------\n edbstore | test | table | postgres | 64 kB |\n(1 row)\npostgres=# select count(*) from test ;\n count\n-------\n 2000\n(1 row)\n\nWhy the its extending so many pages, where it can fit in one page. Is there\nany particular reason in behaving this type of paging.\n\nThanks for all in advance\n\nRegards\nRaghavendra\n\n\n\nOn Sun, Mar 28, 2010 at 11:07 PM, Tom Lane <[email protected]> wrote:\n\n> Jeremy Harris <[email protected]> writes:\n> > On 03/28/2010 05:27 PM, Tom Lane wrote:\n> >> This is intentional: the size estimates for a never-yet-analyzed\n> >> table are *not* zero. This is because people frequently create and load\n> >> up a table and then immediately query it without an explicit ANALYZE.\n>\n> > Does the creation of an index also populate statistics?\n>\n> IIRC, it will set the relpages/reltuples counts (though not any\n> more-complex statistics); but only if the table is found to not be\n> completely empty. Again, this is a behavior designed with common\n> usage patterns in mind, to not set relpages/reltuples to zero on a\n> table that's likely to get populated shortly.\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Tom,\n \nThank for the update.\n \nIIRC, it will set the relpages/reltuples counts (though not anymore-complex statistics); but only if the table is found to not be\ncompletely empty. Again, this is a behavior designed with commonusage patterns in mind, to not set relpages/reltuples to zero on atable that's likely to get populated shortly.\nAs Harris, asked about creation of index will update the statistics. 
Yes indexes are updating the statistics, so indexes will analyze the table on the backend and update the statistics too, before it creating the index or after creating the index.\n \nExample\n======\npostgres=# create table test(id int);CREATE TABLEpostgres=# insert into test VALUES (1);INSERT 0 1postgres=# select relname,reltuples,relpages from pg_class where relname='test'; relname | reltuples | relpages\n---------+-----------+---------- test | 0 | 0(1 row)\npostgres=# create INDEX itest on test (id);CREATE INDEXpostgres=# select relname,reltuples,relpages from pg_class where relname='test'; relname | reltuples | relpages---------+-----------+----------\n test | 1 | 1(1 row)\n \nAdding one more thing to this thread\n==========================\nAs per the documentation, one page is 8kb, when i create a table with int as one column its 4 bytes. If i insert 2000 rows, it should be in one page only as its 8kb, but its extending vastly as expected. Example shown below, taking the previous example table test with one column.\n \npostgres=# insert into test VALUES (generate_series(2,2000));INSERT 0 1999postgres=# \\dt+ List of relations Schema | Name | Type | Owner | Size | Description----------+------+-------+----------+-------+-------------\n edbstore | test | table | postgres | 64 kB |(1 row)\npostgres=# select count(*) from test ; count------- 2000(1 row)\n \nWhy the its extending so many pages, where it can fit in one page. Is there any particular reason in behaving this type of paging.\n \nThanks for all in advance\n \nRegards\nRaghavendra\n \n \n \nOn Sun, Mar 28, 2010 at 11:07 PM, Tom Lane <[email protected]> wrote:\n\nJeremy Harris <[email protected]> writes:> On 03/28/2010 05:27 PM, Tom Lane wrote:>> This is intentional: the size estimates for a never-yet-analyzed\n>> table are *not* zero. This is because people frequently create and load>> up a table and then immediately query it without an explicit ANALYZE.> Does the creation of an index also populate statistics?\nIIRC, it will set the relpages/reltuples counts (though not anymore-complex statistics); but only if the table is found to not becompletely empty. Again, this is a behavior designed with commonusage patterns in mind, to not set relpages/reltuples to zero on a\ntable that's likely to get populated shortly. regards, tom lane\n\n\n--Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 29 Mar 2010 10:26:33 +0530",
"msg_from": "Tadipathri Raghu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "On Mon, 29 Mar 2010, Tadipathri Raghu wrote:\n> As per the documentation, one page is 8kb, when i create a table with int as\n> one column its 4 bytes. If i insert 2000 rows, it should be in one page only\n> as its 8kb, but its extending vastly as expected. Example shown below,\n> taking the previous example table test with one column.\n\nThere is more to a row than just the single int column. The space used by \na column will include a column start marker (data length), transaction \nids, hint bits, an oid, a description of the types of the columns, and \nfinally your data columns. That takes a bit more space.\n\nMatthew\n\n-- \n If you let your happiness depend upon how somebody else feels about you,\n now you have to control how somebody else feels about you. -- Abraham Hicks\n",
"msg_date": "Mon, 29 Mar 2010 12:17:50 +0100 (BST)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "Hi Mattew,\n\nThank you for the information.\n\nOnce again, I like to thank each and everyone in this thread for there\nultimate support.\n\nRegards\nRaghavendra\n\nOn Mon, Mar 29, 2010 at 4:47 PM, Matthew Wakeling <[email protected]>wrote:\n\n> On Mon, 29 Mar 2010, Tadipathri Raghu wrote:\n>\n>> As per the documentation, one page is 8kb, when i create a table with int\n>> as\n>> one column its 4 bytes. If i insert 2000 rows, it should be in one page\n>> only\n>> as its 8kb, but its extending vastly as expected. Example shown below,\n>> taking the previous example table test with one column.\n>>\n>\n> There is more to a row than just the single int column. The space used by a\n> column will include a column start marker (data length), transaction ids,\n> hint bits, an oid, a description of the types of the columns, and finally\n> your data columns. That takes a bit more space.\n>\n> Matthew\n>\n> --\n> If you let your happiness depend upon how somebody else feels about you,\n> now you have to control how somebody else feels about you. -- Abraham Hicks\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Mattew,\n \nThank you for the information. \n \nOnce again, I like to thank each and everyone in this thread for there ultimate support.\n \nRegards\nRaghavendra\nOn Mon, Mar 29, 2010 at 4:47 PM, Matthew Wakeling <[email protected]> wrote:\nOn Mon, 29 Mar 2010, Tadipathri Raghu wrote:\nAs per the documentation, one page is 8kb, when i create a table with int asone column its 4 bytes. If i insert 2000 rows, it should be in one page only\nas its 8kb, but its extending vastly as expected. Example shown below,taking the previous example table test with one column.There is more to a row than just the single int column. The space used by a column will include a column start marker (data length), transaction ids, hint bits, an oid, a description of the types of the columns, and finally your data columns. That takes a bit more space.\nMatthew-- If you let your happiness depend upon how somebody else feels about you,now you have to control how somebody else feels about you. -- Abraham Hicks-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 29 Mar 2010 17:54:43 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
},
{
"msg_contents": "See http://www.postgresql.org/docs/current/static/storage-page-layout.html for\nall of what is taking up the space. Short version:\n Per block overhead is > 24 bytes\n Per row overhead is 23 bytes + some alignment loss + the null bitmap if you\nhave nullable columns\n\nOn Mon, Mar 29, 2010 at 8:24 AM, raghavendra t <[email protected]>wrote:\n\n> Hi Mattew,\n>\n> Thank you for the information.\n>\n> Once again, I like to thank each and everyone in this thread for there\n> ultimate support.\n>\n> Regards\n> Raghavendra\n>\n> On Mon, Mar 29, 2010 at 4:47 PM, Matthew Wakeling <[email protected]>wrote:\n>\n>> On Mon, 29 Mar 2010, Tadipathri Raghu wrote:\n>>\n>>> As per the documentation, one page is 8kb, when i create a table with int\n>>> as\n>>> one column its 4 bytes. If i insert 2000 rows, it should be in one page\n>>> only\n>>> as its 8kb, but its extending vastly as expected. Example shown below,\n>>> taking the previous example table test with one column.\n>>>\n>>\n>> There is more to a row than just the single int column. The space used by\n>> a column will include a column start marker (data length), transaction ids,\n>> hint bits, an oid, a description of the types of the columns, and finally\n>> your data columns. That takes a bit more space.\n>>\n>> Matthew\n>>\n>> --\n>> If you let your happiness depend upon how somebody else feels about you,\n>> now you have to control how somebody else feels about you. -- Abraham\n>> Hicks\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\nSee http://www.postgresql.org/docs/current/static/storage-page-layout.html for all of what is taking up the space. Short version: \n\n Per block overhead is > 24 bytes Per row overhead is 23 bytes + some alignment loss + the null bitmap if you have nullable columnsOn Mon, Mar 29, 2010 at 8:24 AM, raghavendra t <[email protected]> wrote:\nHi Mattew,\n \nThank you for the information. \n \nOnce again, I like to thank each and everyone in this thread for there ultimate support.\n \nRegards\nRaghavendra\nOn Mon, Mar 29, 2010 at 4:47 PM, Matthew Wakeling <[email protected]> wrote:\nOn Mon, 29 Mar 2010, Tadipathri Raghu wrote:\nAs per the documentation, one page is 8kb, when i create a table with int asone column its 4 bytes. If i insert 2000 rows, it should be in one page only\n\n\nas its 8kb, but its extending vastly as expected. Example shown below,taking the previous example table test with one column.There is more to a row than just the single int column. The space used by a column will include a column start marker (data length), transaction ids, hint bits, an oid, a description of the types of the columns, and finally your data columns. That takes a bit more space.\nMatthew-- If you let your happiness depend upon how somebody else feels about you,now you have to control how somebody else feels about you. -- Abraham Hicks-- Sent via pgsql-performance mailing list ([email protected])\n\n\nTo make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 29 Mar 2010 09:43:18 -0400",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer showing wrong rows in plan"
}
] |
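A quick back-of-the-envelope check of the per-row overhead discussed in the thread above. This is only a sketch: the 36-byte figure assumes a 64-bit build (23-byte tuple header rounded up by MAXALIGN, one 4-byte int4 column padded to 8 bytes, plus a 4-byte line pointer), the 24 bytes is the page header, and the table name "test" is the one from the example; actual packing can differ slightly by version and platform.

-- theoretical packing for a one-int-column table on an 8 kB page
SELECT current_setting('block_size')::int AS page_size,
       (current_setting('block_size')::int - 24) / 36 AS approx_rows_per_page;

-- what the statistics actually report for the example table
SELECT relpages, reltuples,
       round((reltuples / nullif(relpages, 0))::numeric, 1) AS avg_rows_per_page
FROM pg_class
WHERE relname = 'test';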
[
{
"msg_contents": "We have a postgres database which accessed by clients app via PL/PGSQL \nstored procedures.\n\nFor some reasons we use about 25 temp tables \"on commit delete rows\". It \nwidely used by our SP. I can see a stramge delay at any “begin” and \n“commit”:\n\n2010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 20.809 ms \nstatement: BEGIN\n2010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 0.809 ms \nstatement: SELECT empl.BL_CustomerFreeCLGet('384154676925391', '8189', \nNULL)\n010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 0.283 ms \nstatement: FETCH ALL IN \"<unnamed portal 165>\"; -- \n+++empl.BL_CustomerFreeCLGet+++<<21360>>\n2010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 19.895 ms \nstatement: COMMIT\n\nThe more system load and more temp table used in session, then more \n“begin” and “commit” times.\nThis occure only with temp table \"on commit delete rows\".\n\nTest example below:\n\ncreate database test;\ncreate language plpgsql;\nCREATE OR REPLACE FUNCTION test_connectionprepare(in_create \nbool,in_IsTemp bool,in_DelOnCommit bool,in_TableCount int)\n RETURNS boolean AS $$\n\ndeclare\n m_count int := 50;\n m_isTemp bool;\n\nbegin\n\nm_count := coalesce(in_TableCount,m_count);\n\nFOR i IN 0..m_count LOOP\n\nif in_create then\n execute 'create ' || case when in_IsTemp then ' temp ' else ' ' end \n||' table tmp_table_'\n || i::text || '(id int,pid int,name text) '\n || case when in_DelOnCommit then ' on commit delete rows ' \nelse ' ' end || ';';\nelse\n execute 'drop table if exists tmp_table_' || i::text ||';';\nend if;\n\nEND LOOP;\n\n return in_create;\nend;\n$$ LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;\n------------------------------------------------------------------------------\n\nNow run pgScript:\nDECLARE @I;\nSET @I = 1;\nWHILE @I <= 100\nBEGIN\n\nselect now();\n\n SET @I = @I + 1;\nEND\n\nIt spent about 2200-2300 ms on my server.\n\nLet's create 50 temp tables: select \ntest_connectionprepare(true,true,true,100);\nand run script againe. We can see 2-3 times slowing!\n\ntemp tables number - test run time:\n\n0 - 2157-2187\n10 - 2500-2704\n50 - 5900-6000\n100 - 7900-8000\n500 - 43000+\n\n------------------------------------------------------------------------------\n\nSorry for my english.\n\nMy server info:\n\"PostgreSQL 8.4.1 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real \n(GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64-bit\"\nLinux u16 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 \nGNU/Linux\n4xOpteron 16 processor cores.\n\n\n",
"msg_date": "Mon, 29 Mar 2010 17:59:02 +0400",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "transaction overhead at \"on commit delete rows\";"
}
] |
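The slowdown described in this post can be reproduced interactively. A minimal psql sketch, assuming the test_connectionprepare() function defined above has already been created in the current database:

\timing
BEGIN; SELECT now(); COMMIT;                            -- baseline BEGIN/COMMIT timing
SELECT test_connectionprepare(true, true, true, 100);   -- create 100 temp tables ON COMMIT DELETE ROWS
BEGIN; SELECT now(); COMMIT;                            -- BEGIN and COMMIT now take noticeably longer
SELECT test_connectionprepare(false, true, true, 100);  -- drop the temp tables again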
[
{
"msg_contents": "PostgreSQL 8.4.3\n\nOS: Linux Red Hat 4.x\n\n \n\nI changed my strategy with PostgreSQL recently to use a large segment of\nmemory for shared buffers with the idea of caching disk blocks. How can\nI see how much memory PostgreSQL is using for this?\n\n \n\nI tried:\n\n \n\nps aux | grep post | sort -k4\n\n \n\nThis lists the processes using memory at the bottom. Are they sharing\nmemory or using individual their own blocks of memory?\n\n \n\nWhen I do top I see that VIRT is the value of my shared buffers plus a\ntiny bit. I see %MEM is only 2.4%, 2.6%, 1.0%,1.5%, and 1.1% for all of\nthe running processes. Do I add these percentages up to see what amount\nof VIRT I am really using? \n\n \n\nOr is there some way to ask PostgreSQL how much memory are you using to\ncache disk blocks currently?\n\n \n\nWhen you do a PG_DUMP does PostgreSQL put the disk blocks into shared\nbuffers as it runs? \n\n \n\nThanks,\n\n \n\nLance Campbell\n\nSoftware Architect/DBA/Project Manager\n\nWeb Services at Public Affairs\n\n217-333-0382\n\n \n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL 8.4.3\nOS: Linux Red Hat 4.x\n \nI changed my strategy with PostgreSQL recently to use a\nlarge segment of memory for shared buffers with the idea of caching disk blocks. \nHow can I see how much memory PostgreSQL is using for this?\n \nI tried:\n \nps aux | grep post | sort –k4\n \nThis lists the processes using memory at the bottom. \nAre they sharing memory or using individual their own blocks of memory?\n \nWhen I do top I see that VIRT is the value of my shared\nbuffers plus a tiny bit. I see %MEM is only 2.4%, 2.6%, 1.0%,1.5%, and\n1.1% for all of the running processes. Do I add these percentages up to\nsee what amount of VIRT I am really using? \n \nOr is there some way to ask PostgreSQL how much memory are\nyou using to cache disk blocks currently?\n \nWhen you do a PG_DUMP does PostgreSQL put the disk blocks\ninto shared buffers as it runs? \n \nThanks,\n \nLance Campbell\nSoftware Architect/DBA/Project Manager\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Mon, 29 Mar 2010 09:59:31 -0500",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How much memory is PostgreSQL using"
},
{
"msg_contents": "Campbell, Lance wrote:\n>\n> Or is there some way to ask PostgreSQL how much memory are you using \n> to cache disk blocks currently?\n>\n\nYou can install contrib/pg_buffercache into each database and count how \nmany used blocks are there. Note that running queries using that \ndiagnostic tool is really intensive due to the locks it takes, so be \ncareful not to do that often on a production system.\n\n\n> When you do a PG_DUMP does PostgreSQL put the disk blocks into shared \n> buffers as it runs?\n>\n\nTo some extent. Most pg_dump activity involves sequential scans that \nare reading an entire table. Those are no different from any other \nprocess that will put disk blocks into shared_buffers. However, that \nusage pattern makes pg_dump particularly likely to run into an \noptimization in 8.3 and later that limits how much of shared_buffers is \nused when sequentially scanning a large table. See P10 of \nhttp://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf \nfor the exact implementation. Basically, anything bigger than \nshared_buffers / 4 uses a 256K ring to limit its cache use, but it's a \nlittle more complicated than that.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 29 Mar 2010 12:53:33 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How much memory is PostgreSQL using"
},
{
"msg_contents": "Greg,\nThanks for your help.\n\n1) How does the number of buffers provided by pg_buffercache compare to\nmemory (buffers * X = Y meg)? \n2) Is there a way to tell how many total buffers I have available/max?\n\nThanks,\n\nLance Campbell\nSoftware Architect/DBA/Project Manager\nWeb Services at Public Affairs\n217-333-0382\n\n\n-----Original Message-----\nFrom: Greg Smith [mailto:[email protected]] \nSent: Monday, March 29, 2010 11:54 AM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] How much memory is PostgreSQL using\n\nCampbell, Lance wrote:\n>\n> Or is there some way to ask PostgreSQL how much memory are you using \n> to cache disk blocks currently?\n>\n\nYou can install contrib/pg_buffercache into each database and count how \nmany used blocks are there. Note that running queries using that \ndiagnostic tool is really intensive due to the locks it takes, so be \ncareful not to do that often on a production system.\n\n\n> When you do a PG_DUMP does PostgreSQL put the disk blocks into shared \n> buffers as it runs?\n>\n\nTo some extent. Most pg_dump activity involves sequential scans that \nare reading an entire table. Those are no different from any other \nprocess that will put disk blocks into shared_buffers. However, that \nusage pattern makes pg_dump particularly likely to run into an \noptimization in 8.3 and later that limits how much of shared_buffers is \nused when sequentially scanning a large table. See P10 of \nhttp://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf \nfor the exact implementation. Basically, anything bigger than \nshared_buffers / 4 uses a 256K ring to limit its cache use, but it's a \nlittle more complicated than that.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 2 Apr 2010 15:10:05 -0500",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How much memory is PostgreSQL using"
},
{
"msg_contents": "Le 02/04/2010 22:10, Campbell, Lance a �crit :\n> Greg,\n> Thanks for your help.\n> \n> 1) How does the number of buffers provided by pg_buffercache compare to\n> memory (buffers * X = Y meg)? \n\n1 buffer is 8 KB.\n\n> 2) Is there a way to tell how many total buffers I have available/max?\n\nWith pg_buffercache, yes.\n\nSELECT count(*)\nFROM pg_buffercache\nWHERE relfilenode IS NOT NULL;\n\nshould give you the number of non-empty buffers.\n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n",
"msg_date": "Sat, 03 Apr 2010 13:58:35 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How much memory is PostgreSQL using"
}
] |
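To go one step further than the simple buffer count above, the example query from the pg_buffercache documentation (lightly adapted here) breaks shared_buffers usage down per relation. It requires the contrib module to be installed in the database being inspected, and it takes the buffer locks Greg mentions, so run it sparingly on a busy server:

SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = c.relfilenode
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 10;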
[
{
"msg_contents": "Hi,\n\nI am querying a Postgresql 8.3 database table that has approximately 22 million records. The (explain analyze) query is listed below:\n\ngdr_gbrowse_live=> explain analyze SELECT f.id,f.object,f.typeid,f.seqid,f.start,f.end,f.strand FROM feature as f, name as n WHERE (n.id=f.id AND lower(n.name) LIKE 'Scaffold:scaffold_163:1000..1199%' AND n.display_name>0);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.01..5899.93 rows=734 width=884) (actual time=0.033..0.033 rows=0 loops=1)\n -> Index Scan using name_name_lower_pattern_ops_idx on name n (cost=0.01..9.53 rows=734 width=4) (actual time=0.032..0.032 rows=0 loops=1)\n Index Cond: ((lower((name)::text) ~>=~ 'Scaffold:scaffold'::text) AND (lower((name)::text) ~<~ 'Scaffold:scaffole'::text))\n Filter: ((display_name > 0) AND (lower((name)::text) ~~ 'Scaffold:scaffold_163:1000..1199%'::text))\n -> Index Scan using feature_pkey on feature f (cost=0.00..8.01 rows=1 width=884) (never executed)\n Index Cond: (f.id = n.id)\n Total runtime: 0.119 ms\n(7 rows)\n\nI can see I am hitting an index using an index that I created using the varchar_pattern_ops setting. This is very fast and performs like I would expect. However, when my application, GBrowse, access the database, I see in my slow query log this:\n\n2010-03-29 09:34:38.083 PDT,\"gdr_gbrowse_live\",\"gdr_gbrowse_live\",11649,\"10.0.0.235:59043\",4bb0399d.2d81,8,\"SELECT\",2010-03-28 22:24:45 PDT,4/118607,0,LOG,00000,\"duration: 21467.467 ms execute dbdpg_p25965_9: SELECT f.id,f.object,f.typeid,f.seqid,f.start,f.end,f.strand\n FROM feature as f, name as n\n WHERE (n.id=f.id AND lower(n.name) LIKE $1)\n \n\",\"parameters: $1 = 'Scaffold:scaffold\\_163:1000..1199%'\",,,,,,,\n\nGBrowse is a perl based application. Looking at the duration for this query is around 21 seconds. That is a bit long. Does anyone have any ideas why the query duration is so different?\n\nRandall Svancara\nSystems Administrator/DBA/Developer\nMain Bioinformatics Laboratory\n\n\n",
"msg_date": "Mon, 29 Mar 2010 08:42:48 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Performance regarding LIKE searches"
},
{
"msg_contents": "[email protected] writes:\n> I can see I am hitting an index using an index that I created using the varchar_pattern_ops setting. This is very fast and performs like I would expect. However, when my application, GBrowse, access the database, I see in my slow query log this:\n\n> 2010-03-29 09:34:38.083 PDT,\"gdr_gbrowse_live\",\"gdr_gbrowse_live\",11649,\"10.0.0.235:59043\",4bb0399d.2d81,8,\"SELECT\",2010-03-28 22:24:45 PDT,4/118607,0,LOG,00000,\"duration: 21467.467 ms execute dbdpg_p25965_9: SELECT f.id,f.object,f.typeid,f.seqid,f.start,f.end,f.strand\n> FROM feature as f, name as n\n> WHERE (n.id=f.id AND lower(n.name) LIKE $1)\n \n> \",\"parameters: $1 = 'Scaffold:scaffold\\_163:1000..1199%'\",,,,,,,\n\n> GBrowse is a perl based application. Looking at the duration for this query is around 21 seconds. That is a bit long. Does anyone have any ideas why the query duration is so different?\n\nYou're not going to get an index optimization when the LIKE pattern\nisn't a constant (and left-anchored, but this is).\n\nIt is possible to get the planner to treat a query parameter as a\nconstant (implying a re-plan on each execution instead of having a\ncached plan). I believe what you have to do at the moment is use\nunnamed rather than named prepared statements. The practicality of\nthis would depend a lot on your client-side software stack, which\nyou didn't mention.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Mar 2010 13:00:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regarding LIKE searches "
},
{
"msg_contents": "Tom,\n\nWe are using perl 5.10 with postgresql DBD. Can you point me in the right direction in terms of unamed and named prepared statements?\n\nThanks,\n\nRandall Svancara\nSystems Administrator/DBA/Developer\nMain Bioinformatics Laboratory\n\n\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: [email protected]\nCc: [email protected]\nSent: Monday, March 29, 2010 10:00:03 AM\nSubject: Re: [PERFORM] Performance regarding LIKE searches \n\[email protected] writes:\n> I can see I am hitting an index using an index that I created using the varchar_pattern_ops setting. This is very fast and performs like I would expect. However, when my application, GBrowse, access the database, I see in my slow query log this:\n\n> 2010-03-29 09:34:38.083 PDT,\"gdr_gbrowse_live\",\"gdr_gbrowse_live\",11649,\"10.0.0.235:59043\",4bb0399d.2d81,8,\"SELECT\",2010-03-28 22:24:45 PDT,4/118607,0,LOG,00000,\"duration: 21467.467 ms execute dbdpg_p25965_9: SELECT f.id,f.object,f.typeid,f.seqid,f.start,f.end,f.strand\n> FROM feature as f, name as n\n> WHERE (n.id=f.id AND lower(n.name) LIKE $1)\n \n> \",\"parameters: $1 = 'Scaffold:scaffold\\_163:1000..1199%'\",,,,,,,\n\n> GBrowse is a perl based application. Looking at the duration for this query is around 21 seconds. That is a bit long. Does anyone have any ideas why the query duration is so different?\n\nYou're not going to get an index optimization when the LIKE pattern\nisn't a constant (and left-anchored, but this is).\n\nIt is possible to get the planner to treat a query parameter as a\nconstant (implying a re-plan on each execution instead of having a\ncached plan). I believe what you have to do at the moment is use\nunnamed rather than named prepared statements. The practicality of\nthis would depend a lot on your client-side software stack, which\nyou didn't mention.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Mar 2010 09:23:33 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Performance regarding LIKE searches"
},
{
"msg_contents": "On 3/29/2010 12:23 PM, [email protected] wrote:\n> Tom,\n>\n> We are using perl 5.10 with postgresql DBD. Can you point me in the right direction in terms of unamed and named prepared statements?\n>\n> Thanks,\n>\n> Randall Svancara\n> Systems Administrator/DBA/Developer\n> Main Bioinformatics Laboratory\n>\n>\n>\n> ----- Original Message -----\n> From: \"Tom Lane\"<[email protected]>\n> To: [email protected]\n> Cc: [email protected]\n> Sent: Monday, March 29, 2010 10:00:03 AM\n> Subject: Re: [PERFORM] Performance regarding LIKE searches\n>\n> [email protected] writes:\n>> I can see I am hitting an index using an index that I created using the varchar_pattern_ops setting. This is very fast and performs like I would expect. However, when my application, GBrowse, access the database, I see in my slow query log this:\n>\n>> 2010-03-29 09:34:38.083 PDT,\"gdr_gbrowse_live\",\"gdr_gbrowse_live\",11649,\"10.0.0.235:59043\",4bb0399d.2d81,8,\"SELECT\",2010-03-28 22:24:45 PDT,4/118607,0,LOG,00000,\"duration: 21467.467 ms execute dbdpg_p25965_9: SELECT f.id,f.object,f.typeid,f.seqid,f.start,f.end,f.strand\n>> FROM feature as f, name as n\n>> WHERE (n.id=f.id AND lower(n.name) LIKE $1)\n>\n>> \",\"parameters: $1 = 'Scaffold:scaffold\\_163:1000..1199%'\",,,,,,,\n>\n>> GBrowse is a perl based application. Looking at the duration for this query is around 21 seconds. That is a bit long. Does anyone have any ideas why the query duration is so different?\n>\n> You're not going to get an index optimization when the LIKE pattern\n> isn't a constant (and left-anchored, but this is).\n>\n> It is possible to get the planner to treat a query parameter as a\n> constant (implying a re-plan on each execution instead of having a\n> cached plan). I believe what you have to do at the moment is use\n> unnamed rather than named prepared statements. The practicality of\n> this would depend a lot on your client-side software stack, which\n> you didn't mention.\n>\n> \t\t\tregards, tom lane\n>\n\nI'm just going to guess, but DBD::Pg can do \"real prepare\" or \"fake \nprepare\".\n\nIt does \"real\" by default. Try setting:\n$dbh->{pg_server_prepare} = 0;\n\nbefore you prepare/run that statement and see if it makes a difference.\n\nhttp://search.cpan.org/dist/DBD-Pg/Pg.pm#prepare\n\n\n-Andy\n\n",
"msg_date": "Mon, 29 Mar 2010 12:36:04 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regarding LIKE searches"
},
{
"msg_contents": "On Mon, 29 Mar 2010, [email protected] wrote:\n> WHERE ... lower(n.name) LIKE 'Scaffold:scaffold_163:1000..1199%' ...\n\nI'm sure you noticed that this is never going to return any rows?\n\nMatthew\n\n-- \n Me... a skeptic? I trust you have proof?\n",
"msg_date": "Tue, 30 Mar 2010 10:34:55 +0100 (BST)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance regarding LIKE searches"
}
] |
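A sketch of the effect Tom describes, using the table and pattern from the post (it assumes the lower(name) varchar_pattern_ops index exists, and shows the behavior of the 8.3-era planner discussed in this thread):

-- literal pattern: the planner can prove it is left-anchored and use the index
EXPLAIN SELECT n.id FROM name n
WHERE lower(n.name) LIKE 'Scaffold:scaffold_163:1000..1199%';

-- named prepared statement: planned once for an unknown $1,
-- so the left-anchor cannot be assumed and no index optimization is possible
PREPARE q(text) AS
  SELECT n.id FROM name n WHERE lower(n.name) LIKE $1;
EXPLAIN EXECUTE q('Scaffold:scaffold_163:1000..1199%');
DEALLOCATE q;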
[
{
"msg_contents": "Hi,\n\n \n\nWe're using PostgreSQL 8.2.\n\n \n\nI have a question in connection to this question posted by me earlier:\n\nhttp://archives.postgresql.org/pgsql-performance/2010-03/msg00343.php\n\n \n\nIn our application, DML operations (INSERT/UPDATE/DELETE) are heavily\nperformed in a day.\n\n \n\nI also read about pg_autovacuum & REINDEX at:\n\nhttp://www.postgresql.org/docs/8.2/interactive/routine-vacuuming.html\n\nhttp://www.postgresql.org/docs/8.2/static/sql-reindex.html\n\n \n\nI do not want to run pg_autovacuum daemon on a busy hour.\n\n \n\nIn case, if I can afford to take my database offline at low-usage time and\nperform REINDEX database-wide manually/linux cron, to boost up index\nperformance, what is the community answer/suggestion on the following:\n\n1. Is it a good idea to perform this on a daily basis?\n\n2. Any implications of doing this on a daily basis?\n\n3. Is there a way to find out bloated indexes?\n\n4. Any other maintenance command, like ANALYZE, that has to be executed\nbefore/after REINDEX?\n\n5. Is there a way to find out when REINDEX was last run on an\nINDEX/TABLE/DATABASE?\n\n \n\nNOTE: I've also seen from my past experience that REINDEX database-wide\ngreatly improves performance of the application.\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nWe're using PostgreSQL 8.2.\n \nI have a question in connection to this question posted by\nme earlier:\nhttp://archives.postgresql.org/pgsql-performance/2010-03/msg00343.php\n \nIn our application, DML operations (INSERT/UPDATE/DELETE)\nare heavily performed in a day.\n \nI also read about pg_autovacuum & REINDEX at:\nhttp://www.postgresql.org/docs/8.2/interactive/routine-vacuuming.html\nhttp://www.postgresql.org/docs/8.2/static/sql-reindex.html\n \nI do not want to run pg_autovacuum daemon on a busy hour.\n \nIn case, if I can afford to take my database offline at\nlow-usage time and perform REINDEX database-wide manually/linux cron, to boost\nup index performance, what is the community answer/suggestion on the\nfollowing:\n1. Is\nit a good idea to perform this on a daily basis?\n2. Any\nimplications of doing this on a daily basis?\n3. Is\nthere a way to find out bloated indexes?\n4. Any\nother maintenance command, like ANALYZE, that has to be executed before/after\nREINDEX?\n5. Is\nthere a way to find out when REINDEX was last run on an INDEX/TABLE/DATABASE?\n \nNOTE: I've also seen from my past experience that REINDEX\ndatabase-wide greatly improves performance of the application.",
"msg_date": "Tue, 30 Mar 2010 15:02:23 +0530",
"msg_from": "\"Gnanakumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "REINDEXing database-wide daily"
},
{
"msg_contents": "On 3/30/2010 4:32 AM, Gnanakumar wrote:\n> Hi,\n>\n> We're using PostgreSQL 8.2.\n>\n> I have a question in connection to this question posted by me earlier:\n>\n> http://archives.postgresql.org/pgsql-performance/2010-03/msg00343.php\n>\n> In our application, DML operations (INSERT/UPDATE/DELETE) are heavily\n> performed in a day.\n>\n> I also read about pg_autovacuum & REINDEX at:\n>\n> http://www.postgresql.org/docs/8.2/interactive/routine-vacuuming.html\n>\n> http://www.postgresql.org/docs/8.2/static/sql-reindex.html\n>\n> I do not want to run pg_autovacuum daemon on a busy hour.\n>\n> In case, if I can afford to take my database offline at low-usage time\n> and perform REINDEX database-wide manually/linux cron, to boost up index\n> performance, what is the community answer/suggestion on the following:\n>\n> 1. Is it a good idea to perform this on a daily basis?\n>\n> 2. Any implications of doing this on a daily basis?\n>\n> 3. Is there a way to find out bloated indexes?\n>\n> 4. Any other maintenance command, like ANALYZE, that has to be executed\n> before/after REINDEX?\n>\n> 5. Is there a way to find out when REINDEX was last run on an\n> INDEX/TABLE/DATABASE?\n>\n> NOTE: I've also seen from my past experience that REINDEX database-wide\n> greatly improves performance of the application.\n>\n\n\nI could be way off base here, so I hope others will confirm/deny this: \nI think the more often you run vacuum, the less you notice it. If you \nwait for too long then vacuum will have to work harder and you'll notice \na speed decrease. But many small vacuums which dont have as much work \nto do, you wont notice.\n\nIt could be, and I'm guessing again, because your database grew from 3 \nto 30 gig (if I recall the numbers right), REINDEX had lots of affect. \nBut if vacuum can keep up with space reuse, REINDEX may not be needed. \n(maybe a few weeks or once a month).\n\n-Andy\n\n\n",
"msg_date": "Tue, 30 Mar 2010 09:05:21 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEXing database-wide daily"
},
{
"msg_contents": "\"Gnanakumar\" <[email protected]> wrote:\n \n> We're using PostgreSQL 8.2.\n \nNewer versions have much improved the VACUUM and CLUSTER features. \nYou might want to consider upgrading to a later major version.\n \n> I have a question in connection to this question posted by me\n> earlier:\n> \n>\nhttp://archives.postgresql.org/pgsql-performance/2010-03/msg00343.php\n \nI hope that you have stopped using VACUUM FULL on a regular basis,\nbased on the responses to that post. The FULL option is only\nintended as a means to recover from extreme heap bloat when there is\nnot room for a CLUSTER. Any other use is going to cause problems. \nIf you continue to use it for other purposes, you may not get a lot\nof sympathy when you inevitably experience those problems.\n \n> I do not want to run pg_autovacuum daemon on a busy hour.\n \nYou would probably be surprised to see how much of a performance\nboost you can get during your busy times by having a properly\nconfigured autovacuum running. My initial reaction to seeing\nperformance degradation during autovacuum was to make it less\naggressive, which lead to increasing bloat between autovacuum runs,\nwhich degraded performance between runs and made things that much\nworse when autovacuum finally kicked in. It was only by using\naggressive maintenance to clean up the bloat and then configuring\nautovacuum to be much more aggressive that I saw performance during\npeak periods improve; although on some systems I had to introduce a\n10 ms vacuum cost delay.\n \nThis is one of those areas where your initial intuitions can be\ntotally counter-productive.\n \n> In case, if I can afford to take my database offline at low-usage\n> time and perform REINDEX database-wide manually/linux cron, to\n> boost up index performance, what is the community\n> answer/suggestion on the following:\n> \n> 1. Is it a good idea to perform this on a daily basis?\n \nNo. It is generally not something to run on a routine basis, and if\nyou're not careful you could make performance worse, by making the\nindexes so \"tight\" that most of your inserts or updates will cause\nindex page splits.\n \n> 2. Any implications of doing this on a daily basis?\n \nWe haven't found it necessary or useful, but if you have an\nappropriate fill factor, I suppose it might not actually do any\ndamage. There is some chance, based on your usage pattern, that a\ndaily CLUSTER of some tables might boost performance by reducing\nrandom access, but daily REINDEX is unlikely to be a win.\n \n> 3. Is there a way to find out bloated indexes?\n \nI don't have anything offhand, but you might poke around pg_class\nlooking at reltuples and relpages.\n \n> 4. Any other maintenance command, like ANALYZE, that has to be\n> executed before/after REINDEX?\n \nNot generally, but I seem to remember that there can be exceptions. \nIndexes on expressions? GIN?\n \n> 5. 
Is there a way to find out when REINDEX was last run on an\n> INDEX/TABLE/DATABASE?\n \nI don't think so.\n \n> NOTE: I've also seen from my past experience that REINDEX\n> database-wide greatly improves performance of the application.\n \nI don't doubt that; if you've been shooting yourself in the foot by\nrunning VACUUM FULL, then REINDEX would be a good bandage to\nalleviate the pain.\n \nMy suggestion is to clean up your existing bloat by running CLUSTER\non all tables, configure autovacuum to aggressive values similar to\nwhat you see in 8.3 or 8.4 and turn it on, run a nightly VACUUM\nANALYZE VERBOSE of the database and review the output to make sure\nyour fsm settings are adequate and to monitor bloat, and eliminate\nall use of VACUUM FULL or REINDEX unless you've somehow slipped up\nand allowed extreme bloat. This will allow tables and indexes to\n\"settle in\" to an efficient size where they are not constantly\ngiving up disk space to the OS at night and then having to reacquire\nit from the OS when under heavy load during the day.\n \n-Kevin\n",
"msg_date": "Tue, 30 Mar 2010 10:05:23 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REINDEXing database-wide daily"
}
] |
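On question 3 (finding bloated indexes), a rough starting point along the lines Kevin suggests is simply to compare index size against table size in pg_class. This is only a heuristic sketch (it flags unusually large indexes, not bloat as such), but it runs on 8.2:

SELECT c.relname AS table_name,
       i.relname AS index_name,
       pg_size_pretty(pg_relation_size(i.oid)) AS index_size,
       pg_size_pretty(pg_relation_size(c.oid)) AS table_size
FROM pg_index x
JOIN pg_class c ON c.oid = x.indrelid
JOIN pg_class i ON i.oid = x.indexrelid
WHERE c.relkind = 'r'
ORDER BY pg_relation_size(i.oid) DESC
LIMIT 20;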
[
{
"msg_contents": "Hi!\nWe have a postgres database which accessed by clients app via PL/PGSQL\nstored procedures ( 8.4.1 on x86_64 ubuntu 8.04 server).\n\nFor some reasons we use about 25 temp tables \"on commit delete rows\".\nIt widely used by our SP.\n\nWhen temp tables with \"on commit delete rows\" exists, I can see a\nstrange delay at any “begin” and “commit”.\n\n2010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 20.809 ms\nstatement: BEGIN\n2010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 0.809 ms\nstatement: SELECT empl.BL_CustomerFreeCLGet('384154676925391',\n'8189', NULL)\n010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 0.283 ms\nstatement: FETCH ALL IN \"<unnamed portal 165>\"; --\n+++empl.BL_CustomerFreeCLGet+++<<21360>>\n2010-03-09 15:14:01 MSK logrus 32102 amber LOG: duration: 19.895 ms\nstatement: COMMIT\n\nThe more system load and more temp table quantity in session, then\nmore “begin” and “commit” delays.\n\n\nTest example below:\n\ncreate database test;\ncreate language plpgsql;\nCREATE OR REPLACE FUNCTION test_connectionprepare(in_create\nbool,in_IsTemp bool,in_DelOnCommit bool,in_TableCount int)\n RETURNS boolean AS $$\n\ndeclare\n m_count int := 50;\n m_isTemp bool;\n\nbegin\n\nm_count := coalesce(in_TableCount,m_count);\n\nFOR i IN 0..m_count LOOP\n\nif in_create then\n execute 'create ' || case when in_IsTemp then ' temp ' else ' ' end\n||' table tmp_table_'\n || i::text || '(id int,pid int,name text) '\n || case when in_DelOnCommit then ' on commit delete rows\n' else ' ' end || ';';\nelse\n execute 'drop table if exists tmp_table_' || i::text ||';';\nend if;\n\nEND LOOP;\n\n return in_create;\nend;\n$$ LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;\n------------------------------------------------------------------------------\n\nNow run pgScript:\nDECLARE @I;\nSET @I = 1;\nWHILE @I <= 100\nBEGIN\n\nselect now();\n\n SET @I = @I + 1;\nEND\n\nIt spent about 2200-2300 ms on my server.\n\nLet's create 50 temp tables: select test_connectionprepare(true,true,true,100);\n\nand run script again.\n\nWe can see 2-3 times slowing!\n\nHere temp tables quantity vs test run time:\n\n0 - 2157-2187\n10 - 2500-2704\n50 - 5900-6000\n100 - 7900-8000\n500 - 43000+\n\nI can to suppose, that all tables are truncated before and after every\ntransactions. Very strange method for read only transactions!\n------------------------------------------------------------------------------\n\nSorry for my English.\n\nMy server info:\n\"PostgreSQL 8.4.1 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.2.real\n(GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu4), 64-bit\"\nLinux u16 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux\n4xOpteron 16 processor cores.\n",
"msg_date": "Tue, 30 Mar 2010 15:46:26 +0400",
"msg_from": "Artiom Makarov <[email protected]>",
"msg_from_op": true,
"msg_subject": "temp table \"on commit delete rows\": transaction overhead"
},
{
"msg_contents": "Artiom Makarov <[email protected]> writes:\n> When temp tables with \"on commit delete rows\" exists, I can see a\n> strange delay at any �begin� and �commit�.\n\nA delay at commit is hardly surprising, because each such temp table\nrequires filesystem operations at commit (basically an \"ftruncate\").\nI don't recall any operations at transaction start for such tables,\nbut there may be some.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Mar 2010 13:50:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temp table \"on commit delete rows\": transaction overhead "
},
{
"msg_contents": "2010/3/30 Tom Lane <[email protected]>:\n\n> I don't recall any operations at transaction start for such tables,\n> but there may be some.\n>\nBoth in СommitTransaction(void) and PrepareTransaction(void) we can\nsee PreCommit_on_commit_actions() call;\n\nHere PreCommit_on_commit_actions()\n<CUT>\n\t\t\tcase ONCOMMIT_DELETE_ROWS:\n\t\t\t\toids_to_truncate = lappend_oid(oids_to_truncate, oc->relid);\n\t\t\t\tbreak;\n<CUT>\n\nAs a my opinion, the same action taked place at transaction start and commit.\n\nTo truncate rows of any temp tables, both changed and unchanged(!)\nafter transaction looks as little reinsurance. Well.\nBut why do the same action _before_ any transaction?\n",
"msg_date": "Wed, 31 Mar 2010 11:42:29 +0400",
"msg_from": "Artiom Makarov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temp table \"on commit delete rows\": transaction\n\toverhead"
},
{
"msg_contents": "Tom Lane wrote:\n> Artiom Makarov <[email protected]> writes:\n> > When temp tables with \"on commit delete rows\" exists, I can see a\n> > strange delay at any �begin� and �commit�.\n> \n> A delay at commit is hardly surprising, because each such temp table\n> requires filesystem operations at commit (basically an \"ftruncate\").\n> I don't recall any operations at transaction start for such tables,\n> but there may be some.\n\nI think one of the problems is that we do the truncate even if the table\nhas not be touched by the query, which is poor behavior.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n",
"msg_date": "Wed, 31 Mar 2010 20:44:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temp table \"on commit delete rows\": transaction\n overhead"
},
{
"msg_contents": "2010/4/1 Bruce Momjian <[email protected]>:\n\n> I think one of the problems is that we do the truncate even if the table\n> has not be touched by the query, which is poor behavior.\n\nThank you for the support.\nWill be this problem registered?\n\nPS\nI see a workaround: switch off \"on commit delete rows\" on temp tables\nand use txid_current() to control transaction visibility.\n",
"msg_date": "Mon, 5 Apr 2010 11:56:54 +0400",
"msg_from": "Artiom Makarov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temp table \"on commit delete rows\": transaction\n\toverhead"
},
{
"msg_contents": "Artiom Makarov wrote:\n> 2010/4/1 Bruce Momjian <[email protected]>:\n> \n> > I think one of the problems is that we do the truncate even if the table\n> > has not be touched by the query, which is poor behavior.\n> \n> Thank you for the support.\n> Will be this problem registered?\n\nI have it on my personal TODO and will try to get it on the official\nTODO soon.\n\n> I see a workaround: switch off \"on commit delete rows\" on temp tables\n> and use txid_current() to control transaction visibility.\n\nOK.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n",
"msg_date": "Mon, 5 Apr 2010 08:09:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temp table \"on commit delete rows\": transaction\n overhead"
}
] |
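One possible reading of the workaround Artiom mentions at the end of this thread (a sketch only, not the author's actual code): keep the temp table with the default ON COMMIT PRESERVE ROWS behavior, tag rows with txid_current(), and filter and purge by transaction id instead of relying on the per-commit truncate:

CREATE TEMP TABLE tmp_work (
    txid bigint DEFAULT txid_current(),
    id   int,
    name text
);

-- within a transaction, only look at rows written by that transaction
SELECT id, name FROM tmp_work WHERE txid = txid_current();

-- leftover rows from earlier transactions can be purged lazily
DELETE FROM tmp_work WHERE txid <> txid_current();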
[
{
"msg_contents": "Hello,\n\nI am waiting for an ordered machine dedicated to PostgresSQL. It was \nexpected to have 3ware 9650SE 16 port controller. However, the vendor \nwants to replace this controller with MegaRAID SAS 84016E, because, as \nthey say, they have it on stock, while 3ware would be available in a few \nweeks.\n\nIs this a good replace, generally?\nWill it run on FreeBSD, specifically?\n\nThanks\nIrek.\n",
"msg_date": "Tue, 30 Mar 2010 15:20:47 +0200",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": true,
"msg_subject": "3ware vs. MegaRAID"
},
{
"msg_contents": "Hi,\n\n> I am waiting for an ordered machine dedicated to PostgresSQL. It was\n> expected to have 3ware 9650SE 16 port controller. However, the vendor\n> wants to replace this controller with MegaRAID SAS 84016E, because, as\n> they say, they have it on stock, while 3ware would be available in a few\n> weeks.\n>\n> Is this a good replace, generally?\n> Will it run on FreeBSD, specifically?\n\nNot sure about that specific controller, but I do have a Fujitsu \nrebranded \"RAID Ctrl SAS onboard 256MB iTBBU LSI\" that works pretty good \non my FreeBSD 6.2 box with the mfi driver.\n\nGetting the megacli tool took some effort as it involves having Linux \nemulation running but it's now working fine. I wouldn't dare to use it \nfor write operations as I remember it freezing the box just after \nupgrading to amd64 (it was working good on i386).\n\n\nCheers\n-- \nMatteo Beccati\n\nDevelopment & Consulting - http://www.beccati.com/\n",
"msg_date": "Tue, 30 Mar 2010 18:30:39 +0200",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "Ireneusz Pluta wrote:\n> I am waiting for an ordered machine dedicated to PostgresSQL. It was \n> expected to have 3ware 9650SE 16 port controller. However, the vendor \n> wants to replace this controller with MegaRAID SAS 84016E, because, as \n> they say, they have it on stock, while 3ware would be available in a \n> few weeks.\n>\n> Is this a good replace, generally?\n> Will it run on FreeBSD, specifically?\n\nThe MFI driver needed to support that MegaRAID card has been around \nsince FreeBSD 6.1: http://oldschoolpunx.net/phpMan.php/man/mfi/4\n\nThe MegaRAID SAS 84* cards have worked extremely well for me in terms of \nperformance and features for all the systems I've seen them installed \nin. I'd consider it a modest upgrade from that 3ware card, speed wise. \nThe main issue with the MegaRAID cards is that you will have to write a \nlot of your own custom scripts to monitor for failures using their \npainful MegaCLI utility, and under FreeBSD that also requires using \ntheir Linux utility via emulation: \nhttp://www.freebsdsoftware.org/sysutils/linux-megacli.html\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Tue, 30 Mar 2010 13:18:12 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "On 30/03/2010 19:18, Greg Smith wrote:\n> The MegaRAID SAS 84* cards have worked extremely well for me in terms of\n> performance and features for all the systems I've seen them installed\n> in. I'd consider it a modest upgrade from that 3ware card, speed wise.\n> The main issue with the MegaRAID cards is that you will have to write a\n> lot of your own custom scripts to monitor for failures using their\n> painful MegaCLI utility, and under FreeBSD that also requires using\n> their Linux utility via emulation:\n> http://www.freebsdsoftware.org/sysutils/linux-megacli.html\n\nGetting MegaCLI to work was a slight PITA, but once it was running it's \nbeen just a matter of adding:\n\ndaily_status_mfi_raid_enable=\"YES\"\n\nto /etc/periodic.conf to get the following data in the daily reports:\n\nAdpater: 0\n------------------------------------------------------------------------\nPhysical Drive Information:\nENC SLO DEV SEQ MEC OEC PFC LPF STATE\n1 0 0 2 0 0 0 0 Online\n1 1 1 2 0 0 0 0 Online\n1 2 2 2 0 0 0 0 Online\n1 3 3 2 0 0 0 0 Online\n1 4 4 2 0 0 0 0 Online\n1 5 5 2 0 0 0 0 Online\n1 255 248 0 0 0 0 0 Unconfigured(good)\n\nVirtual Drive Information:\nVD DRV RLP RLS RLQ STS SIZE STATE NAME\n0 2 1 0 0 64kB 69472MB Optimal\n1 2 1 3 0 64kB 138944MB Optimal\n\nBBU Information:\nTYPE TEMP OK RSOC ASOC RC CC ME\niTBBU 29 C -1 94 93 816 109 2\n\nController Logs:\n\n\n+++ /var/log/mfi_raid_0.today\tSun Mar 28 03:07:36 2010\n@@ -37797,3 +37797,25 @@\n Event Description: Patrol Read complete\n Event Data:\n \tNone\n+\n+========================================================================\n+seqNum: 0x000036f6\n+Time: Sat Mar 27 03:00:00 2010\n+\n+Code: 0x00000027\n+Class: 0\n+Locale: 0x20\n+Event Description: Patrol Read started\n+Event Data:\n+\tNone\n\netc...\n\n\nCheers\n-- \nMatteo Beccati\n\nDevelopment & Consulting - http://www.beccati.com/\n",
"msg_date": "Wed, 31 Mar 2010 14:43:18 +0200",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "Ireneusz Pluta writes:\n\n> I am waiting for an ordered machine dedicated to PostgresSQL. It was \n> expected to have 3ware 9650SE 16 port controller. However, the vendor \n> wants to replace this controller with MegaRAID SAS 84016E, because, as \n\nI have had better luck getting 3ware management tools to work on both \nFreeBSD and Linux than the Megaraid cards.\n\nI also like the 3ware cards can be configured to send out an email in case \nof problems once you have the monitoring program running.\n",
"msg_date": "Wed, 31 Mar 2010 12:59:52 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "Greg Smith pisze:\n>\n> The MegaRAID SAS 84* cards have worked extremely well for me in terms \n> of performance and features for all the systems I've seen them \n> installed in. I'd consider it a modest upgrade from that 3ware card, \n> speed wise. \nOK, sounds promising.\n> The main issue with the MegaRAID cards is that you will have to write \n> a lot of your own custom scripts to monitor for failures using their \n> painful MegaCLI utility, and under FreeBSD that also requires using \n> their Linux utility via emulation: \n> http://www.freebsdsoftware.org/sysutils/linux-megacli.html\n>\nAnd this is what worries me, as I prefer not to play with utilities too \nmuch, but put the hardware into production, instead. So I'd like to find \nmore precisely if expected speed boost would pay enough for that pain. \nLet me ask the following way then, if such a question makes much sense \nwith the data I provide. I already have another box with 3ware \n9650SE-16ML. With the array configured as follows:\nRAID-10, 14 x 500GB Seagate ST3500320NS, stripe size 256K, 16GB RAM, \nXeon X5355, write caching enabled, BBU, FreeBSD 7.2, ufs,\nwhen testing with bonnie++ on idle machine, I got sequential block \nread/write around 320MB/290MB and random seeks around 660.\n\nWould that result be substantially better with LSI MegaRAID?\n\n",
"msg_date": "Tue, 06 Apr 2010 18:49:48 +0200",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "\nOn Apr 6, 2010, at 9:49 AM, Ireneusz Pluta wrote:\n\n> Greg Smith pisze:\n>> \n>> The MegaRAID SAS 84* cards have worked extremely well for me in terms \n>> of performance and features for all the systems I've seen them \n>> installed in. I'd consider it a modest upgrade from that 3ware card, \n>> speed wise. \n> OK, sounds promising.\n>> The main issue with the MegaRAID cards is that you will have to write \n>> a lot of your own custom scripts to monitor for failures using their \n>> painful MegaCLI utility, and under FreeBSD that also requires using \n>> their Linux utility via emulation: \n>> http://www.freebsdsoftware.org/sysutils/linux-megacli.html\n>> \n> And this is what worries me, as I prefer not to play with utilities too \n> much, but put the hardware into production, instead. So I'd like to find \n> more precisely if expected speed boost would pay enough for that pain. \n> Let me ask the following way then, if such a question makes much sense \n> with the data I provide. I already have another box with 3ware \n> 9650SE-16ML. With the array configured as follows:\n> RAID-10, 14 x 500GB Seagate ST3500320NS, stripe size 256K, 16GB RAM, \n> Xeon X5355, write caching enabled, BBU, FreeBSD 7.2, ufs,\n> when testing with bonnie++ on idle machine, I got sequential block \n> read/write around 320MB/290MB and random seeks around 660.\n> \n> Would that result be substantially better with LSI MegaRAID?\n> \n\nMy experiences with the 3ware 9650 on linux are similar -- horribly slow for some reason with raid 10 on larger arrays.\n\nOthers have claimed this card performs well on FreeBSD, but the above looks just as bad as Linux.\n660 iops is slow for 14 spindles of any type, although the raid 10 on might limit it to an effective 7 spindles on reading in which case its OK -- but should still top 100 iops per effective disk on 7200rpm drives unless the effective concurrency of the benchmark is low. My experience with the 9650 was that iops was OK, but sequential performance for raid 10 was very poor.\n\nOn linux, I was able to get better sequential read performance like this:\n\n* set it up as 3 raid 10 blocks, each 4 drives (2 others spare or for xlog or something). Software RAID-0 these RAID 10 chunks together in the OS.\n* Change the linux 'readahead' block device parameter to at least 4MB (8192, see blockdev --setra) -- I don't know if there is a FreeBSD equivalent.\n\nA better raid card you should hit at minimum 800, if not 1000, MB/sec + depending on\nwhether you bottleneck on your PCIe or SATA ports or not. I switched to two adaptec 5xx5 series cards (each with half the disks, software raid-0 between them) to get about 1200MB/sec max throughput and 2000iops from two sets of 10 Seagate STxxxxxxxNS 1TB drives. That is still not as good as it should be, but much better. FWIW, one set of 8 drives in raid 10 on the adaptec did about 750MB/sec sequential and ~950 iops read. It required XFS to do this, ext3 was 20% slower in throughput.\nA PERC 6 card (LSI MegaRaid clone) performed somewhere between the two.\n\n\nI don't like bonnie++ much, its OK at single drive tests but not as good at larger arrays. If you have time try fio, and create some custom profiles.\nLastly, for these sorts of tests partition your array in smaller chunks so that you can reliably test the front or back of the drive. Sequential speed at the front of a typical 3.5\" drive is about 2x as fast as at the end of the drive. 
\n\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 7 Apr 2010 20:29:53 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "For a card level RAID controller, I am a big fan of the LSI 8888, which is\navailable in a PCIe riser form factor for blade / 1U servers, and comes with\n0.5GB of battery backed cache. Full Linux support including mainline kernel\ndrivers and command line config tools. Was using these with SAS expanders\nand 48x 1TB SATA-300 spindles per card, and it was pretty (adjective) quick\nfor a card-based system ... comparable with a small FC-AL EMC Clariion CX3\nseries in fact, just without the redundancy.\n\nMy only gripe is that as of 18 months ago, it did not support triples\n(RAID-10 with 3 drives per set instead of 2) ... I had a \"little knowledge\nis a dangerous thing\" client who was stars-in-the-eyes sold on RAID-6 and so\nwanted double drive failure protection for everything (and didn't get my\nexplanation about how archive logs on other LUNs make this OK, or why\nRAID-5/6 sucks for a database, or really listen to anything I said :-) ...\nIt would do RAID-10 quads however (weird...).\n\nAlso decent in the Dell OEM'ed version (don't know the Dell PERC model\nnumber) though they tend to be a bit behind on firmware.\n\nMegaCLI isn't the slickest tool, but you can find Nagios scripts for it\nonline ... what's the problem? The Clariion will send you (and EMC support)\nan email if it loses a drive, but I'm not sure that's worth the 1500% price\ndifference ;-)\n\nCheers\nDave\n\nOn Wed, Apr 7, 2010 at 10:29 PM, Scott Carey <[email protected]>wrote:\n\n>\n> On Apr 6, 2010, at 9:49 AM, Ireneusz Pluta wrote:\n>\n> > Greg Smith pisze:\n> >>\n> >> The MegaRAID SAS 84* cards have worked extremely well for me in terms\n> >> of performance and features for all the systems I've seen them\n> >> installed in. I'd consider it a modest upgrade from that 3ware card,\n> >> speed wise.\n> > OK, sounds promising.\n> >> The main issue with the MegaRAID cards is that you will have to write\n> >> a lot of your own custom scripts to monitor for failures using their\n> >> painful MegaCLI utility, and under FreeBSD that also requires using\n> >> their Linux utility via emulation:\n> >> http://www.freebsdsoftware.org/sysutils/linux-megacli.html\n> >>\n> > And this is what worries me, as I prefer not to play with utilities too\n> > much, but put the hardware into production, instead. So I'd like to find\n> > more precisely if expected speed boost would pay enough for that pain.\n> > Let me ask the following way then, if such a question makes much sense\n> > with the data I provide. I already have another box with 3ware\n> > 9650SE-16ML. With the array configured as follows:\n> > RAID-10, 14 x 500GB Seagate ST3500320NS, stripe size 256K, 16GB RAM,\n> > Xeon X5355, write caching enabled, BBU, FreeBSD 7.2, ufs,\n> > when testing with bonnie++ on idle machine, I got sequential block\n> > read/write around 320MB/290MB and random seeks around 660.\n> >\n> > Would that result be substantially better with LSI MegaRAID?\n> >\n>\n> My experiences with the 3ware 9650 on linux are similar -- horribly slow\n> for some reason with raid 10 on larger arrays.\n>\n> Others have claimed this card performs well on FreeBSD, but the above looks\n> just as bad as Linux.\n> 660 iops is slow for 14 spindles of any type, although the raid 10 on might\n> limit it to an effective 7 spindles on reading in which case its OK -- but\n> should still top 100 iops per effective disk on 7200rpm drives unless the\n> effective concurrency of the benchmark is low. 
My experience with the 9650\n> was that iops was OK, but sequential performance for raid 10 was very poor.\n>\n> On linux, I was able to get better sequential read performance like this:\n>\n> * set it up as 3 raid 10 blocks, each 4 drives (2 others spare or for xlog\n> or something). Software RAID-0 these RAID 10 chunks together in the OS.\n> * Change the linux 'readahead' block device parameter to at least 4MB\n> (8192, see blockdev --setra) -- I don't know if there is a FreeBSD\n> equivalent.\n>\n> A better raid card you should hit at minimum 800, if not 1000, MB/sec +\n> depending on\n> whether you bottleneck on your PCIe or SATA ports or not. I switched to\n> two adaptec 5xx5 series cards (each with half the disks, software raid-0\n> between them) to get about 1200MB/sec max throughput and 2000iops from two\n> sets of 10 Seagate STxxxxxxxNS 1TB drives. That is still not as good as it\n> should be, but much better. FWIW, one set of 8 drives in raid 10 on the\n> adaptec did about 750MB/sec sequential and ~950 iops read. It required XFS\n> to do this, ext3 was 20% slower in throughput.\n> A PERC 6 card (LSI MegaRaid clone) performed somewhere between the two.\n>\n>\n> I don't like bonnie++ much, its OK at single drive tests but not as good at\n> larger arrays. If you have time try fio, and create some custom profiles.\n> Lastly, for these sorts of tests partition your array in smaller chunks so\n> that you can reliably test the front or back of the drive. Sequential speed\n> at the front of a typical 3.5\" drive is about 2x as fast as at the end of\n> the drive.\n>\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nFor a card level RAID controller, I am a big fan of the LSI 8888, which is available in a PCIe riser form factor for blade / 1U servers, and comes with 0.5GB of battery backed cache. Full Linux support including mainline kernel drivers and command line config tools. Was using these with SAS expanders and 48x 1TB SATA-300 spindles per card, and it was pretty (adjective) quick for a card-based system ... comparable with a small FC-AL EMC Clariion CX3 series in fact, just without the redundancy.\nMy only gripe is that as of 18 months ago, it did not support triples (RAID-10 with 3 drives per set instead of 2) ... I had a \"little knowledge is a dangerous thing\" client who was stars-in-the-eyes sold on RAID-6 and so wanted double drive failure protection for everything (and didn't get my explanation about how archive logs on other LUNs make this OK, or why RAID-5/6 sucks for a database, or really listen to anything I said :-) ... It would do RAID-10 quads however (weird...).\nAlso decent in the Dell OEM'ed version (don't know the Dell PERC model number) though they tend to be a bit behind on firmware.MegaCLI isn't the slickest tool, but you can find Nagios scripts for it online ... what's the problem? 
The Clariion will send you (and EMC support) an email if it loses a drive, but I'm not sure that's worth the 1500% price difference ;-) \nCheersDaveOn Wed, Apr 7, 2010 at 10:29 PM, Scott Carey <[email protected]> wrote:\n\nOn Apr 6, 2010, at 9:49 AM, Ireneusz Pluta wrote:\n\n> Greg Smith pisze:\n>>\n>> The MegaRAID SAS 84* cards have worked extremely well for me in terms\n>> of performance and features for all the systems I've seen them\n>> installed in. I'd consider it a modest upgrade from that 3ware card,\n>> speed wise.\n> OK, sounds promising.\n>> The main issue with the MegaRAID cards is that you will have to write\n>> a lot of your own custom scripts to monitor for failures using their\n>> painful MegaCLI utility, and under FreeBSD that also requires using\n>> their Linux utility via emulation:\n>> http://www.freebsdsoftware.org/sysutils/linux-megacli.html\n>>\n> And this is what worries me, as I prefer not to play with utilities too\n> much, but put the hardware into production, instead. So I'd like to find\n> more precisely if expected speed boost would pay enough for that pain.\n> Let me ask the following way then, if such a question makes much sense\n> with the data I provide. I already have another box with 3ware\n> 9650SE-16ML. With the array configured as follows:\n> RAID-10, 14 x 500GB Seagate ST3500320NS, stripe size 256K, 16GB RAM,\n> Xeon X5355, write caching enabled, BBU, FreeBSD 7.2, ufs,\n> when testing with bonnie++ on idle machine, I got sequential block\n> read/write around 320MB/290MB and random seeks around 660.\n>\n> Would that result be substantially better with LSI MegaRAID?\n>\n\nMy experiences with the 3ware 9650 on linux are similar -- horribly slow for some reason with raid 10 on larger arrays.\n\nOthers have claimed this card performs well on FreeBSD, but the above looks just as bad as Linux.\n660 iops is slow for 14 spindles of any type, although the raid 10 on might limit it to an effective 7 spindles on reading in which case its OK -- but should still top 100 iops per effective disk on 7200rpm drives unless the effective concurrency of the benchmark is low. My experience with the 9650 was that iops was OK, but sequential performance for raid 10 was very poor.\n\nOn linux, I was able to get better sequential read performance like this:\n\n* set it up as 3 raid 10 blocks, each 4 drives (2 others spare or for xlog or something). Software RAID-0 these RAID 10 chunks together in the OS.\n* Change the linux 'readahead' block device parameter to at least 4MB (8192, see blockdev --setra) -- I don't know if there is a FreeBSD equivalent.\n\nA better raid card you should hit at minimum 800, if not 1000, MB/sec + depending on\nwhether you bottleneck on your PCIe or SATA ports or not. I switched to two adaptec 5xx5 series cards (each with half the disks, software raid-0 between them) to get about 1200MB/sec max throughput and 2000iops from two sets of 10 Seagate STxxxxxxxNS 1TB drives. That is still not as good as it should be, but much better. FWIW, one set of 8 drives in raid 10 on the adaptec did about 750MB/sec sequential and ~950 iops read. It required XFS to do this, ext3 was 20% slower in throughput.\n\nA PERC 6 card (LSI MegaRaid clone) performed somewhere between the two.\n\n\nI don't like bonnie++ much, its OK at single drive tests but not as good at larger arrays. If you have time try fio, and create some custom profiles.\nLastly, for these sorts of tests partition your array in smaller chunks so that you can reliably test the front or back of the drive. 
Sequential speed at the front of a typical 3.5\" drive is about 2x as fast as at the end of the drive.\n\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 7 Apr 2010 22:44:09 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "Scott Carey wrote:\n> * Change the linux 'readahead' block device parameter to at least 4MB (8192, see blockdev --setra) -- I don't know if there is a FreeBSD equivalent.\n> \nI haven't tested them, but 3ware gives suggestions at \nhttp://www.3ware.com/kb/Article.aspx?id=14852 for tuning their cards \nproperly under FreeBSD. You cannot get good sequential read performance \nfrom 3ware's cards without doing something about this at the OS level; \nthe read-ahead on the card itself is minimal and certainly a bottleneck.\n\nAs for your comments about drives being faster at the front than the \nend, the zcav tool that comes with bonnie++ is a good way to plot that \nout, rather than having to split partitions up and do a bunch of manual \ntesting.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 08 Apr 2010 02:13:32 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "\nOn Apr 7, 2010, at 11:13 PM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> * Change the linux 'readahead' block device parameter to at least 4MB (8192, see blockdev --setra) -- I don't know if there is a FreeBSD equivalent.\n>> \n> I haven't tested them, but 3ware gives suggestions at \n> http://www.3ware.com/kb/Article.aspx?id=14852 for tuning their cards \n> properly under FreeBSD. You cannot get good sequential read performance \n> from 3ware's cards without doing something about this at the OS level; \n> the read-ahead on the card itself is minimal and certainly a bottleneck.\n> \n> As for your comments about drives being faster at the front than the \n> end, the zcav tool that comes with bonnie++ is a good way to plot that \n> out, rather than having to split partitions up and do a bunch of manual \n> testing.\n\nThere's an FIO script that does something similar.\nWhat I'm suggesting is that if you want to test a file system (or compare it to others), and you want to get consistent results then run those tests on a smaller slice of the drive. To tune a RAID card, there is not much point other than trying out the fast part of the drive, if it can keep up on the fast part, it should be able to keep up on the slow part. I'm would not suggest splitting the drive up into chunks and doing many manual tests.\n\n3.5\" drives are a bit more than 50% the sequential throughput at the end than the start. 2.5\" drives are a bit less than 65% the sequential throughput at the end than the start. I haven't seen any significant variation of that rule on any benchmark I've run, or I've seen online for 'standard' drives. Occasionally there is a drive that doesn't use all its space and is a bit faster at the end.\n\nMy typical practice is to use the first 70% to 80% of a large volume for the main data, and use the slowest last chunk for archives and backups.\n\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n",
"msg_date": "Thu, 8 Apr 2010 20:28:27 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "On 2010-04-08 05:44, Dave Crooke wrote:\n> For a card level RAID controller, I am a big fan of the LSI 8888, which is\n> available in a PCIe riser form factor for blade / 1U servers, and comes with\n> 0.5GB of battery backed cache. Full Linux support including mainline kernel\n> drivers and command line config tools. Was using these with SAS expanders\n> and 48x 1TB SATA-300 spindles per card, and it was pretty (adjective) quick\n> for a card-based system ... comparable with a small FC-AL EMC Clariion CX3\n> series in fact, just without the redundancy.\n> \n\nCan someone shed \"simple\" light on an extremely simple question.\nHow do you physicallly get 48 drives attached to an LSI that claims to\nonly have 2 internal and 2 external ports?\n(the controller claims to support up to 240 drives).\n\nI'm currently looking at getting a server with space for 8 x 512GB SSDs\nrunning raid5 (or 6) and are looking for an well performing controller\nwith BBWC for the setup. So I was looking for something like the LSI888ELP.\n\n-- \nJesper\n",
"msg_date": "Fri, 09 Apr 2010 08:32:38 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "Jesper Krogh wrote:\n> Can someone shed \"simple\" light on an extremely simple question.\n> How do you physicallly get 48 drives attached to an LSI that claims to\n> only have 2 internal and 2 external ports?\n> (the controller claims to support up to 240 drives).\n\nThere are these magic boxes that add \"SAS expansion\", which basically \nsplits a single port so you can connect more drives to it. An example \nfrom a vendor some of the regulars on this list like is \nhttp://www.aberdeeninc.com/abcatg/kitjbod-1003.htm\n\nYou normally can't buy these except as part of an integrated drive \nchassis subsystem. If you get one that has an additional pass-through \nport, that's how you can stack these into multiple layers and hit really \nlarge numbers of disks.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 09 Apr 2010 11:27:54 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "On 2010-04-09 17:27, Greg Smith wrote:\n> Jesper Krogh wrote:\n>> Can someone shed \"simple\" light on an extremely simple question.\n>> How do you physicallly get 48 drives attached to an LSI that claims to\n>> only have 2 internal and 2 external ports?\n>> (the controller claims to support up to 240 drives).\n>\n> There are these magic boxes that add \"SAS expansion\", which basically \n> splits a single port so you can connect more drives to it. An example \n> from a vendor some of the regulars on this list like is \n> http://www.aberdeeninc.com/abcatg/kitjbod-1003.htm\n>\n> You normally can't buy these except as part of an integrated drive \n> chassis subsystem. If you get one that has an additional pass-through \n> port, that's how you can stack these into multiple layers and hit \n> really large numbers of disks.\n\nI've spent quite some hours googling today. Am I totally wrong if the:\nHP MSA-20/30/70 and Sun Oracle J4200's:\nhttps://shop.sun.com/store/product/53a01251-2fce-11dc-9482-080020a9ed93\nare of the same type just from \"major\" vendors.\n\nThat would enable me to reuse the existing server and moving to something\nlike Intel's X25-M 160GB disks with just a higher amount (25) in a MSA-70.\n\n-- \nJesper .. that's beginning to look like a decent plan.\n",
"msg_date": "Fri, 09 Apr 2010 18:11:34 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "Jesper Krogh wrote:\n> I've spent quite some hours googling today. Am I totally wrong if the:\n> HP MSA-20/30/70 and Sun Oracle J4200's:\n> https://shop.sun.com/store/product/53a01251-2fce-11dc-9482-080020a9ed93\n> are of the same type just from \"major\" vendors.\n\nYes, those are the same type of implementation. Every vendor has their \nown preferred way to handle port expansion, and most are somewhat scared \nabout discussing the whole thing now because EMC has a ridiculous patent \non the whole idea[1]. They all work the same from the user perspective, \nalbeit sometimes with their own particular daisy chaining rules.\n\n> That would enable me to reuse the existing server and moving to something\n> like Intel's X25-M 160GB disks with just a higher amount (25) in a \n> MSA-70.\n\nI guess, but note that several of us here consider Intel's SSDs \nunsuitable for critical database use. There are some rare but not \nimpossible to encounter problems with its write caching implementation \nthat leave you exposed to database corruption if there's a nasty power \ninterruption. Can't get rid of the problem without destroying both \nperformance and longevity of the drive[2][3]. If you're going to deploy \nsomething using those drives, please make sure you're using an \naggressive real-time backup scheme such as log shipping in order to \nminimize your chance of catastrophic data loss.\n\n[1] http://www.freepatentsonline.com/7624206.html\n[2] \nhttp://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/\n[3] \nhttp://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 09 Apr 2010 14:22:20 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "On 2010-04-09 20:22, Greg Smith wrote:\n> Jesper Krogh wrote:\n>> I've spent quite some hours googling today. Am I totally wrong if the:\n>> HP MSA-20/30/70 and Sun Oracle J4200's:\n>> https://shop.sun.com/store/product/53a01251-2fce-11dc-9482-080020a9ed93\n>> are of the same type just from \"major\" vendors.\n>\n> Yes, those are the same type of implementation. Every vendor has \n> their own preferred way to handle port expansion, and most are \n> somewhat scared about discussing the whole thing now because EMC has a \n> ridiculous patent on the whole idea[1]. They all work the same from \n> the user perspective, albeit sometimes with their own particular daisy \n> chaining rules.\n>\n>> That would enable me to reuse the existing server and moving to \n>> something\n>> like Intel's X25-M 160GB disks with just a higher amount (25) in a \n>> MSA-70.\n>\n> I guess, but note that several of us here consider Intel's SSDs \n> unsuitable for critical database use. There are some rare but not \n> impossible to encounter problems with its write caching implementation \n> that leave you exposed to database corruption if there's a nasty power \n> interruption. Can't get rid of the problem without destroying both \n> performance and longevity of the drive[2][3]. If you're going to \n> deploy something using those drives, please make sure you're using an \n> aggressive real-time backup scheme such as log shipping in order to \n> minimize your chance of catastrophic data loss.\n>\n> [1] http://www.freepatentsonline.com/7624206.html\n> [2] \n> http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/ \n>\n> [3] \n> http://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html \n>\n\nThere are some things in my scenario... that cannot be said to be\ngeneral in all database situations.\n\nHaving to go a week back (backup) is \"not really a problem\", so as\nlong as i have a reliable backup and the problems doesnt occour except from\nunexpected poweroffs then I think I can handle it.\nAnother thing is that the overall usage is far dominated by random-reads,\nwhich is the performance I dont ruin by disabling write-caching.\n\nAnd by adding a 512/1024MB BBWC on the controller I bet I can \"re-gain\"\nenough write performance to easily make the system funcition. Currently\nthe average writeout is way less than 10MB/s but the reading processes\nall spends most of their time in iowait.\n\nSince my application is dominated by by random reads I \"think\" that\nI still should have a huge gain over regular SAS drives on that side\nof the equation, but most likely not on the write-side. But all of this is\nso far only speculations, since the vendors doesnt seem eager on\nlending out stuff these day, so everything is only on paper so far.\n\nThere seem to be consensus that on the write-side, SAS-disks can\nfairly easy outperform SSDs. I have not seen anything showing that\nthey dont still have huge benefits on the read-side.\n\nIt would be nice if there was an easy way to test and confirm that it\nactually was robust to power-outtake..\n\n\n\n\n.. just having a disk-array with build-in-battery for the SSDs would\nsolve the problem.\n\n-- \nJesper\n",
"msg_date": "Fri, 09 Apr 2010 21:02:54 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
},
{
"msg_contents": "On Fri, Apr 9, 2010 at 1:02 PM, Jesper Krogh <[email protected]> wrote:\n> It would be nice if there was an easy way to test and confirm that it\n> actually was robust to power-outtake..\n\nSadly, the only real test is pulling the power plug. And it can't\nprove the setup is good, only that it's bad or most likely good.\n\n> .. just having a disk-array with build-in-battery for the SSDs would\n> solve the problem.\n\nEven a giant Cap to initiate writeout on poweroff would likely be\nplenty since they only pull 150mW or so.\n",
"msg_date": "Fri, 9 Apr 2010 13:12:13 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware vs. MegaRAID"
}
] |
[
{
"msg_contents": "postgres 8.3.5 on RHEL4 update 6\n\nThis query starts executing at 18:41:\n\ncemdb=> select query_start,current_query from pg_stat_activity where \nprocpid=10022; query_start | \n \n current_query\n-------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n 2010-03-30 18:41:11.685261-07 | select b.ts_id from \nts_stats_tranunit_user_daily b, ts_stats_tranunit_user_interval c where \nb.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nb.ts_tranunit_id = c.ts_tranunit_id and b.ts_user_incarnation_id = \nc.ts_user_incarnation_id and c.ts_interval_start_time >= $1 and \nc.ts_interval_start_time < $2 and b.ts_interval_start_time >= $3 and \nb.ts_interval_start_time < $4\n(1 row)\n\nabout 5 mins later, I, suspecting problems, do (the values are the same \nas for $1 et al above; EXPLAIN was done on purpose to keep stats \n[hopefully] the same as when pid 10022 started; there are 80,000 rows in \neach of the 2 tables at the time of this EXPLAIN and when 10022 started):\n\ncemdb=> explain select b.ts_id from ts_stats_tranunit_user_daily b, \nts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and \nb.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2010-3-29 01:00' and \nc.ts_interval_start_time < '2010-3-29 02:00' and \nb.ts_interval_start_time >= '2010-3-29' and b.ts_interval_start_time < \n'2010-3-30';\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=33574.89..34369.38 rows=25207 width=8)\n Merge Cond: ((b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id) AND (b.ts_tranunit_id = c.ts_tranunit_id) \nAND (b.ts_user_incarnation_id = c.ts_user_incarnation_id))\n -> Sort (cost=13756.68..13854.96 rows=78623 width=32)\n Sort Key: b.ts_transet_incarnation_id, b.ts_tranunit_id, \nb.ts_user_incarnation_id\n -> Index Scan using ts_stats_tranunit_user_daily_starttime on \nts_stats_tranunit_user_daily b (cost=0.00..10560.13 rows=78623 width=32)\n Index Cond: ((ts_interval_start_time >= '2010-03-29 \n00:00:00-07'::timestamp with time zone) AND (ts_interval_start_time < \n'2010-03-30 00:00:00-07'::timestamp with time zone))\n -> Sort (cost=19818.21..19959.72 rows=113207 width=24)\n Sort Key: c.ts_transet_incarnation_id, c.ts_tranunit_id, \nc.ts_user_incarnation_id\n -> Index Scan using ts_stats_tranunit_user_interval_starttime \non ts_stats_tranunit_user_interval c (cost=0.00..15066.74 rows=113207 \nwidth=24)\n Index Cond: ((ts_interval_start_time >= '2010-03-29 \n01:00:00-07'::timestamp with time zone) AND (ts_interval_start_time < \n'2010-03-29 02:00:00-07'::timestamp with time zone))\n(10 rows)\n\ncemdb=> \\q\n\nI then run the query manually:\n\n[root@rdl64xeoserv01 log]# time PGPASSWORD=quality psql -U admin -d \ncemdb -c \"select b.ts_id from ts_stats_tranunit_user_daily b, \nts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and 
\nb.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2010-3-29 01:00' and \nc.ts_interval_start_time < '2010-3-29 02:00' and \nb.ts_interval_start_time >= '2010-3-29' and b.ts_interval_start_time < \n'2010-3-30'\" > /tmp/select.txt 2>&1\n\nreal 0m0.813s\nuser 0m0.116s\nsys 0m0.013s\n\nI let process 10022 run for an hour. an strace shows lots of I/O:\n\n[root@rdl64xeoserv01 log]# strace -p 10022\nread(18, \"\\214\\2\\0\\0\\374<\\200#\\1\\0\\0\\0<\\0P\\3\\0 \\4 \\0\\0\\0\\0\\320\\234\"..., \n8192) = 8192\nsemop(73007122, 0xbfe0fc20, 1) = 0\n_llseek(18, 538451968, [538451968], SEEK_SET) = 0\nread(18, \"\\214\\2\\0\\0\\274\\347\\t#\\1\\0\\0\\0<\\0P\\3\\0 \\4 \\0\\0\\0\\0\\320\\234\"..., \n8192) = 8192\n_llseek(18, 535928832, [535928832], SEEK_SET) = 0\nread(18, \"\\214\\2\\0\\0\\310\\300\\226\\\"\\1\\0\\0\\0<\\0P\\3\\0 \\4 \\0\\0\\0\\0\\320\"..., \n8192) = 8192\n_llseek(18, 532398080, [532398080], SEEK_SET) = 0\n\n<many more similar lines>\n\nI then kill 10022 and the application retries the same query:\n\n[10022-cemdb-admin-2010-03-30 19:02:37.460 PDT]FATAL: terminating \nconnection due to administrator command\n[10022-cemdb-admin-2010-03-30 19:02:37.460 PDT]STATEMENT: select \nb.ts_id from ts_stats_tranunit_user_daily b, \nts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and \nb.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and \nb.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4\n\n[10820-cemdb-admin-2010-03-30 19:02:40.363 PDT]LOG: duration: 1096.598 \nms execute <unnamed>: select b.ts_id from ts_stats_tranunit_user_daily \nb, ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and \nb.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and \nb.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4\n\nIdeas on why the big difference in execution times of the same query and \nhow to avoid same are solicited. I tend to doubt, but don't know how to \nprove, that the query stats for these 2 tables were updated between the \nstart of pid 10022 and when I did the EXPLAIN.\n\nThanks,\nBrian\n",
"msg_date": "Tue, 30 Mar 2010 21:11:36 -0700",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "query has huge variance in execution times"
},
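Not raised in the original exchange, but worth adding as a hedged diagnostic sketch: when a normally sub-second query has been running for many minutes, it is cheap to rule out lock waits before digging into plans (these are the standard 8.3-era catalog views; 10022 is the backend pid from the post above):

SELECT procpid, waiting, query_start, current_query
FROM pg_stat_activity
WHERE procpid = 10022;

SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE pid = 10022 AND NOT granted;

Here the strace output already shows the backend busy with reads and seeks, so a lock wait is unlikely; the checks simply narrow the problem down to plan choice or I/O.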
{
"msg_contents": "On Wed, Mar 31, 2010 at 12:11 AM, Brian Cox <[email protected]> wrote:\n\n>\n> 2010-03-30 18:41:11.685261-07 | select b.ts_id from\n> ts_stats_tranunit_user_daily b, ts_stats_tranunit_user_interval c where\n> b.ts_transet_incarnation_id = c.ts_transet_incarnation_id and\n> b.ts_tranunit_id = c.ts_tranunit_id and b.ts_user_incarnation_id =\n> c.ts_user_incarnation_id and c.ts_interval_start_time >= $1 and\n> c.ts_interval_start_time < $2 and b.ts_interval_start_time >= $3 and\n> b.ts_interval_start_time < $4\n> (1 row)\n>\n> about 5 mins later, I, suspecting problems, do (the values are the same as\n> for $1 et al above; EXPLAIN was done on purpose to keep stats [hopefully]\n> the same as when pid 10022 started; there are 80,000 rows in each of the 2\n> tables at the time of this EXPLAIN and when 10022 started):\n>\n> cemdb=> explain select b.ts_id from ts_stats_tranunit_user_daily b,\n> ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id =\n> c.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and\n> b.ts_user_incarnation_id = c.ts_user_incarnation_id and\n> c.ts_interval_start_time >= '2010-3-29 01:00' and c.ts_interval_start_time <\n> '2010-3-29 02:00' and b.ts_interval_start_time >= '2010-3-29' and\n> b.ts_interval_start_time < '2010-3-30';\n>\n>\nThese won't necessarily get the same plan. If you want to see what plan the\nprepared query is getting, you'll need to prepare it (\"prepare foo as\n<query>\") and then explain *that* via \"explain execute foo\".\n\nThe prepared version likely has a much more generic plan, whereas the\nregular query gets optimized for the actual values provided.\n\n\n-- \n- David T. Wilson\[email protected]\n\nOn Wed, Mar 31, 2010 at 12:11 AM, Brian Cox <[email protected]> wrote:\n\n 2010-03-30 18:41:11.685261-07 | select b.ts_id from ts_stats_tranunit_user_daily b, ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = c.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and b.ts_user_incarnation_id = c.ts_user_incarnation_id and c.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and b.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4\n\n(1 row)\n\nabout 5 mins later, I, suspecting problems, do (the values are the same as for $1 et al above; EXPLAIN was done on purpose to keep stats [hopefully] the same as when pid 10022 started; there are 80,000 rows in each of the 2 tables at the time of this EXPLAIN and when 10022 started):\n\ncemdb=> explain select b.ts_id from ts_stats_tranunit_user_daily b, ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = c.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and b.ts_user_incarnation_id = c.ts_user_incarnation_id and c.ts_interval_start_time >= '2010-3-29 01:00' and c.ts_interval_start_time < '2010-3-29 02:00' and b.ts_interval_start_time >= '2010-3-29' and b.ts_interval_start_time < '2010-3-30';\nThese won't necessarily get the same plan. If you want to see what plan the prepared query is getting, you'll need to prepare it (\"prepare foo as <query>\") and then explain *that* via \"explain execute foo\". \nThe prepared version likely has a much more generic plan, whereas the regular query gets optimized for the actual values provided.-- - David T. [email protected]",
"msg_date": "Wed, 31 Mar 2010 00:37:40 -0400",
"msg_from": "David Wilson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query has huge variance in execution times"
}
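A minimal sketch of the check David describes, added here for completeness (the table and column names follow the thread above; the literal dates are only examples): prepare the statement, EXPLAIN the prepared form, then EXPLAIN the same query with the constants inlined and compare the two plans.

PREPARE sq (timestamptz, timestamptz) AS
SELECT ts_id
FROM ts_stats_tranunit_user_daily
WHERE ts_interval_start_time >= $1
  AND ts_interval_start_time < $2;

-- generic plan: the planner only sees opaque parameters
EXPLAIN EXECUTE sq('2010-03-29', '2010-03-30');

-- value-specific plan: the planner can use the actual constants and their statistics
EXPLAIN
SELECT ts_id
FROM ts_stats_tranunit_user_daily
WHERE ts_interval_start_time >= '2010-03-29'
  AND ts_interval_start_time < '2010-03-30';

DEALLOCATE sq;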
] |
[
{
"msg_contents": "Hi, stright to my \"problem\":\n\nexplain\nSELECT * FROM t_route\n\tWHERE t_route.route_type_fk = 1\n\tlimit 4;\n\n\"Limit (cost=0.00..0.88 rows=4 width=2640)\"\n\" -> Seq Scan on t_route (cost=0.00..118115.25 rows=538301 width=2640)\"\n\" Filter: (route_type_fk = 1)\"\n\n\n\nIf I try to select constant 1 from table with two rows, it will be something \nlike this:\n\nexplain\nSELECT * FROM t_route\n\tWHERE t_route.route_type_fk = (SELECT id FROM t_route_type WHERE type = 2)\n\tlimit 4;\n\n\"Limit (cost=1.02..1.91 rows=4 width=2640)\"\n\" InitPlan\"\n\" -> Seq Scan on t_route_type (cost=0.00..1.02 rows=1 width=8)\"\n\" Filter: (\"type\" = 2)\"\n\" -> Seq Scan on t_route (cost=0.00..118115.25 rows=535090 width=2640)\"\n\" Filter: (route_type_fk = $0)\"\n\n\n\nFirst query is done in about milicesonds. Second is longer than 60 seconds. \nt_route is bigger table (~10M rows).\n\nI think that it seq scans whole table. Is it bug? If no, how can I achieve \nthat second select will not take time to end of world...\n\nHave a nice day and thanks for any reply.\n\n-- \nOdborník na všetko je zlý odborník. Ja sa snažím byť výnimkou potvrdzujúcou \npravidlo.\n",
"msg_date": "Wed, 31 Mar 2010 17:46:38 +0200",
"msg_from": "=?utf-8?q?=C4=BDubom=C3=ADr_Varga?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some question"
},
{
"msg_contents": "2010/3/31 Ľubomír Varga <[email protected]>:\n> Hi, stright to my \"problem\":\n> If I try to select constant 1 from table with two rows, it will be something\n> like this:\n>\n> explain\n> SELECT * FROM t_route\n> WHERE t_route.route_type_fk = (SELECT id FROM t_route_type WHERE type = 2)\n> limit 4;\n>\n> \"Limit (cost=1.02..1.91 rows=4 width=2640)\"\n> \" InitPlan\"\n> \" -> Seq Scan on t_route_type (cost=0.00..1.02 rows=1 width=8)\"\n> \" Filter: (\"type\" = 2)\"\n> \" -> Seq Scan on t_route (cost=0.00..118115.25 rows=535090 width=2640)\"\n> \" Filter: (route_type_fk = $0)\"\n>\n\nLooking at this it looks like you're using prepared queries, which\ncan't make as good of a decision as regular queries because the values\nare opaque to the planner.\n\nCan you provide us with the output of explain analyze of that query?\n",
"msg_date": "Tue, 6 Apr 2010 12:45:59 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some question"
},
{
"msg_contents": "*ubomᅵr Varga<[email protected]> wrote:\n \n> Hi, stright to my \"problem\":\n \nPlease show the exact problem query and the results of running it\nwith EXPLAIN ANALYZE, along with the other information suggested\nhere:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Tue, 06 Apr 2010 17:11:48 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some question"
},
{
"msg_contents": "Scott Marlowe wrote:\n> 2010/3/31 Ľubomír Varga <[email protected]>:\n> \n>> Hi, stright to my \"problem\":\n>> If I try to select constant 1 from table with two rows, it will be something\n>> like this:\n>>\n>> explain\n>> SELECT * FROM t_route\n>> WHERE t_route.route_type_fk = (SELECT id FROM t_route_type WHERE type = 2)\n>> limit 4;\n>>\n>> \"Limit (cost=1.02..1.91 rows=4 width=2640)\"\n>> \" InitPlan\"\n>> \" -> Seq Scan on t_route_type (cost=0.00..1.02 rows=1 width=8)\"\n>> \" Filter: (\"type\" = 2)\"\n>> \" -> Seq Scan on t_route (cost=0.00..118115.25 rows=535090 width=2640)\"\n>> \" Filter: (route_type_fk = $0)\"\n>>\n>> \n>\n> Looking at this it looks like you're using prepared queries, which\n> can't make as good of a decision as regular queries because the values\n> are opaque to the planner.\n>\n> Can you provide us with the output of explain analyze of that query?\n> \nISTM that the initplan 'outputs' id as $0, so it is not a prepared \nquery. Maybe EXPLAIN VERBOSE ANALYZE of the query reveals that better. \nBut both plans show seqscans of the large table, so it is surprising \nthat the performance is different, if the filter expression uses the \nsame values. Are you sure the output SELECT id FROM t_route_type WHERE \ntype = 2 is equal to 1?\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Wed, 07 Apr 2010 09:18:52 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some question"
},
{
"msg_contents": "Hi, here are they:\n\n----------------------------------------------------\nselect * from t_route_type;\n\nID;description;type\n1;\"stojim\";0\n2;\"idem\";1\n\n\n\n----------------------------------------------------\nexplain analyze\nSELECT * FROM t_route\n\tWHERE t_route.route_type_fk = 1\n\tlimit 4;\n\n\"Limit (cost=0.00..0.88 rows=4 width=2640) (actual time=23.352..23.360 rows=4 \nloops=1)\"\n\" -> Seq Scan on t_route (cost=0.00..120497.00 rows=549155 width=2640) \n(actual time=23.350..23.354 rows=4 loops=1)\"\n\" Filter: (route_type_fk = 1)\"\n\"Total runtime: 23.404 ms\"\n\n\n\n----------------------------------------------------\nexplain analyze\nSELECT * FROM t_route\n\tWHERE t_route.route_type_fk = (SELECT id FROM t_route_type WHERE type = 2)\n\tlimit 4;\n\n\"Limit (cost=1.02..1.91 rows=4 width=2640) (actual \ntime=267243.019..267243.019 rows=0 loops=1)\"\n\" InitPlan\"\n\" -> Seq Scan on t_route_type (cost=0.00..1.02 rows=1 width=8) (actual \ntime=0.006..0.006 rows=0 loops=1)\"\n\" Filter: (\"type\" = 2)\"\n\" -> Seq Scan on t_route (cost=0.00..120498.12 rows=545885 width=2640) \n(actual time=267243.017..267243.017 rows=0 loops=1)\"\n\" Filter: (route_type_fk = $0)\"\n\"Total runtime: 267243.089 ms\"\n\n\n\n----------------------------------------------------\nexplain analyze\nSELECT * FROM t_route, t_route_type\n\tWHERE t_route.route_type_fk = t_route_type.id AND\n\t\ttype = 2\n\tlimit 4;\n\n\"Limit (cost=0.00..0.96 rows=4 width=2661) (actual time=0.013..0.013 rows=0 \nloops=1)\"\n\" -> Nested Loop (cost=0.00..131415.62 rows=545880 width=2661) (actual \ntime=0.012..0.012 rows=0 loops=1)\"\n\" Join Filter: (t_route.route_type_fk = t_route_type.id)\"\n\" -> Seq Scan on t_route_type (cost=0.00..1.02 rows=1 width=21) \n(actual time=0.011..0.011 rows=0 loops=1)\"\n\" Filter: (\"type\" = 2)\"\n\" -> Seq Scan on t_route (cost=0.00..117767.60 rows=1091760 \nwidth=2640) (never executed)\"\n\"Total runtime: 0.054 ms\"\n\n\n\nSo I found solution. It is third select, where is used join instead of inner \nselect to get ID for some constant from t_route_type.\n\nt_route is table with routes taken by some car. It have same strings as \ncolumns and one geometry column with line of travelled path. Type of route is \nin t_route_type and it could be \"travelling\" and \"standing\" type. In my \nselect I want to select some routes which are type \"travelling\" (type = 1, id \n= 2). It is only sample select.\n\nPlease explain me why second query had taken so long to finish.\n\nHave a nice day.\n\n\nOn Wednesday 07 April 2010 00:11:48 Kevin Grittner wrote:\n> *ubomír Varga<[email protected]> wrote:\n> > Hi, stright to my \"problem\":\n>\n> Please show the exact problem query and the results of running it\n> with EXPLAIN ANALYZE, along with the other information suggested\n> here:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> -Kevin\n\n-- \nOdborník na všetko je zlý odborník. Ja sa snažím byť výnimkou potvrdzujúcou \npravidlo.\n",
"msg_date": "Sat, 10 Apr 2010 23:42:21 +0200",
"msg_from": "=?utf-8?q?=C4=BDubom=C3=ADr_Varga?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some question"
}
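A plausible reading of the three plans above, added as a note (it is not stated in the original thread): the inner SELECT matches no row at all, because in the data shown there is no row with type = 2 -- the "idem"/travelling row has id = 2 but type = 1. The outer filter therefore becomes route_type_fk = NULL, which is true for no row, so the sequential scan walks the whole ~10M-row table without ever finding the 4 rows the LIMIT is waiting for; that is the 267-second case. With the literal constant 1 the scan stops as soon as 4 matching rows are found, and in the join form the empty t_route_type result means the inner scan on t_route is never executed, which is why both finish in milliseconds. A quick way to confirm which value the subquery really produces:

SELECT id, description, type FROM t_route_type;   -- shows which type values exist
SELECT id FROM t_route_type WHERE type = 2;       -- zero rows here, so the outer filter compares against NULL
SELECT id FROM t_route_type WHERE type = 1;       -- id = 2, the value probably intended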
] |
[
{
"msg_contents": "On 03/31/2010 12:37 AM, David Wilson [[email protected]] wrote:\n> These won't necessarily get the same plan. If you want to see what plan\n> the prepared query is getting, you'll need to prepare it (\"prepare foo\n> as <query>\") and then explain *that* via \"explain execute foo\".\n>\n> The prepared version likely has a much more generic plan, whereas the\n> regular query gets optimized for the actual values provided.\n\nI didn't know this. Thanks. The plans are indeed different:\n\ncemdb=# prepare sq as select b.ts_id from ts_stats_tranunit_user_daily \nb, ts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and \nb.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and \nb.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4;\ncemdb=# explain execute sq('2010-3-29 01:00', '2010-3-29 02:00', \n'2010-3-29', '2010-3-30'); \n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=7885.37..8085.91 rows=30 width=8)\n Merge Cond: ((b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id) AND (b.ts_tranunit_id = c.ts_tranunit_id) \nAND (b.ts_user_incarnation_id = c.ts_user_incarnation_id))\n -> Sort (cost=1711.82..1716.81 rows=3994 width=32)\n Sort Key: b.ts_transet_incarnation_id, b.ts_tranunit_id, \nb.ts_user_incarnation_id\n -> Index Scan using ts_stats_tranunit_user_daily_starttime on \nts_stats_tranunit_user_daily b (cost=0.00..1592.36 rows=3994 width=32)\n Index Cond: ((ts_interval_start_time >= $3) AND \n(ts_interval_start_time < $4))\n\ncemdb=# explain select b.ts_id from ts_stats_tranunit_user_daily b, \nts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and \nb.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2010-3-29 01:00' and \nc.ts_interval_start_time < '2010-3-29 02:00' and \nb.ts_interval_start_time >= '2010-3-29' and b.ts_interval_start_time < \n'2010-3-30';\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=291965.90..335021.46 rows=13146 width=8)\n Hash Cond: ((c.ts_transet_incarnation_id = \nb.ts_transet_incarnation_id) AND (c.ts_tranunit_id = b.ts_tranunit_id) \nAND (c.ts_user_incarnation_id = b.ts_user_incarnation_id))\n -> Index Scan using ts_stats_tranunit_user_interval_starttime on \nts_stats_tranunit_user_interval c (cost=0.00..11783.36 rows=88529 width=24)\n Index Cond: ((ts_interval_start_time >= '2010-03-29 \n01:00:00-07'::timestamp with time zone) AND (ts_interval_start_time < \n'2010-03-29 02:00:00-07'::timestamp with time zone))\n -> Hash (cost=285681.32..285681.32 rows=718238 width=32)\n -> Index Scan using ts_stats_tranunit_user_daily_starttime on \nts_stats_tranunit_user_daily b (cost=0.00..285681.32 rows=718238 width=32)\n Index Cond: ((ts_interval_start_time >= '2010-03-29 \n00:00:00-07'::timestamp with time zone) AND (ts_interval_start_time < \n'2010-03-30 00:00:00-07'::timestamp with time zone))\n(7 rows)\n\n -> Sort (cost=6173.55..6218.65 rows=36085 width=24)\n Sort Key: c.ts_transet_incarnation_id, c.ts_tranunit_id, 
\nc.ts_user_incarnation_id\n -> Index Scan using ts_stats_tranunit_user_interval_starttime \non ts_stats_tranunit_user_interval c (cost=0.00..4807.81 rows=36085 \nwidth=24)\n Index Cond: ((ts_interval_start_time >= $1) AND \n(ts_interval_start_time < $2))\n(10 rows)\n\nI notice that the row estimates are substantially different; this is due \nto the lack of actual values?\n\nBut, this prepared query runs in ~4 secs:\n\n[root@rdl64xeoserv01 log]# cat /tmp/select.sql\nprepare sq as select b.ts_id from ts_stats_tranunit_user_daily b, \nts_stats_tranunit_user_interval c where b.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_tranunit_id = c.ts_tranunit_id and \nb.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and \nb.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4;\nexecute sq('2010-3-29 01:00', '2010-3-29 02:00', '2010-3-29', '2010-3-30\n\n[root@rdl64xeoserv01 log]# time PGPASSWORD=quality psql -U postgres -d \ncemdb -f /tmp/select.sql > /tmp/select1.txt 2>&1\nreal 0m4.131s\nuser 0m0.119s\nsys 0m0.007s\n\nso the question still remains: why did it take > 20 mins? To see if it \nwas due to autovacuum running ANALYZE, I turned off autovacuum, created \na table using SELECT * INTO temp FROM ts_stats_tranunit_user_daily, \nadded the index on ts_interval_start_time and ran the prepared query \nwith temp, but the query completed in a few secs.\n\nBrian\n\n",
"msg_date": "Wed, 31 Mar 2010 11:10:53 -0700",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query has huge variance in execution times"
},
{
"msg_contents": "On Wed, Mar 31, 2010 at 2:10 PM, Brian Cox <[email protected]> wrote:\n\n>\n>\n> so the question still remains: why did it take > 20 mins? To see if it was\n> due to autovacuum running ANALYZE, I turned off autovacuum, created a table\n> using SELECT * INTO temp FROM ts_stats_tranunit_user_daily, added the index\n> on ts_interval_start_time and ran the prepared query with temp, but the\n> query completed in a few secs.\n>\n> It's possible that statistics were updated between the >20 minute run and\nyour most recent prepared query test. In fact, comparing the plans between\nyour two emails, it's quite likely, as even the non-prepared versions are\nnot producing the same plan or the same estimates; it's therefore possible\nthat your problem has already corrected itself if you're unable to duplicate\nthe 20 minute behaviour at this point.\n\nTaking a look at the statistics accuracy with an explain analyze might still\nbe informative, however.\n\n-- \n- David T. Wilson\[email protected]\n\nOn Wed, Mar 31, 2010 at 2:10 PM, Brian Cox <[email protected]> wrote:\n\n\nso the question still remains: why did it take > 20 mins? To see if it was due to autovacuum running ANALYZE, I turned off autovacuum, created a table using SELECT * INTO temp FROM ts_stats_tranunit_user_daily, added the index on ts_interval_start_time and ran the prepared query with temp, but the query completed in a few secs.\n\nIt's possible that statistics were updated between the >20 minute run and your most recent prepared query test. In fact, comparing the plans between your two emails, it's quite likely, as even the non-prepared versions are not producing the same plan or the same estimates; it's therefore possible that your problem has already corrected itself if you're unable to duplicate the 20 minute behaviour at this point.\nTaking a look at the statistics accuracy with an explain analyze might still be informative, however.-- - David T. [email protected]",
"msg_date": "Wed, 31 Mar 2010 14:44:44 -0400",
"msg_from": "David Wilson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query has huge variance in execution times"
}
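One way to test that hypothesis, sketched here as an addition to the thread (pg_stat_user_tables and its last_* columns are standard in 8.3/8.4): check whether autovacuum analyzed either table between the slow run and the fast one.

SELECT relname, last_analyze, last_autoanalyze, last_autovacuum
FROM pg_stat_user_tables
WHERE relname IN ('ts_stats_tranunit_user_daily',
                  'ts_stats_tranunit_user_interval');

If the timestamps fall between the two runs, the 20-minute execution was most likely planned against stale statistics, and the later tests simply benefited from the refreshed ones.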
] |
[
{
"msg_contents": "Hi All,\n\nI have a table with 40GB size, it has few indexes on it. When i try to\nREINDEX on the table, its take a long time. I tried increasing the\nmaintenance_work_mem, but still i havnt find a satisfying result.\n\nQuestions\n=======\n1. What are the parameters will effect, when issuing the REINDEX command\n2. Best possible way to increase the spead of the REINDEX\n\nThanks in Advance\n\nRegards\nRaghavendra\n\nHi All,\n \nI have a table with 40GB size, it has few indexes on it. When i try to REINDEX on the table, its take a long time. I tried increasing the maintenance_work_mem, but still i havnt find a satisfying result. \n \nQuestions\n=======\n1. What are the parameters will effect, when issuing the REINDEX command\n2. Best possible way to increase the spead of the REINDEX\n \nThanks in Advance\n \nRegards\nRaghavendra",
"msg_date": "Thu, 1 Apr 2010 02:21:26 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to fast the REINDEX"
},
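A hedged sketch of the usual first step when a single REINDEX is slow -- raise maintenance_work_mem for that session only, so that each index sort can happen in memory (the 1GB value and the table name are illustrative, not taken from the thread):

SET maintenance_work_mem = '1GB';   -- session-local, does not change postgresql.conf
REINDEX TABLE my_40gb_table;        -- placeholder name
RESET maintenance_work_mem;

Whether this helps at all depends on the answers the thread goes on to ask for (index definitions, hardware, and why the REINDEX is needed in the first place).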
{
"msg_contents": "raghavendra t <[email protected]> wrote:\n \n> I have a table with 40GB size, it has few indexes on it.\n \nWhat does the table look like? What indexes are there?\n \n> When i try to REINDEX on the table,\n \nWhy are you doing that?\n \n> its take a long time.\n \nHow long?\n \n> I tried increasing the maintenance_work_mem, but still i havnt\n> find a satisfying result.\n \nWhat run time are you expecting?\n \n> Questions\n> =======\n> 1. What are the parameters will effect, when issuing the REINDEX\n> command\n> 2. Best possible way to increase the spead of the REINDEX\n \nIt's hard to answer that without more information, like PostgreSQL\nversion and configuration, for starters. See:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nMy best guess is that you can make them instantaneous by not running\nthem. A good VACUUM policy should make such runs unnecessary in\nmost cases -- at least on recent PostgreSQL versions.\n \n-Kevin\n",
"msg_date": "Wed, 31 Mar 2010 16:02:55 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "Hi Kevin,\n\nThank you for the update,\n\n>>What does the table look like? What indexes are there?\nTable has a combination of byteas. Indexes are b-tree and Partial\n\n>>Why are you doing that?\nOur table face lot of updates and deletes in a day, so we prefer reindex to\nupdate the indexes as well overcome with a corrupted index.\n\n>> How long?\nMore than 4 hrs..\n\n>>What run time are you expecting?\nLess than what it is taking at present.\n\n>>It's hard to answer that without more information, like PostgreSQL\n>>version and configuration, for starters. See:\n version\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n20080704 (Red Hat 4.1.2-44), 32-bit\n(1 row)\n\n>>http://wiki.postgresql.org/wiki/SlowQueryQuestions\nExpected the performance question..\n\nRegards\nRaghavendra\n\nOn Thu, Apr 1, 2010 at 2:32 AM, Kevin Grittner\n<[email protected]>wrote:\n\n> raghavendra t <[email protected]> wrote:\n>\n> > I have a table with 40GB size, it has few indexes on it.\n>\n> What does the table look like? What indexes are there?\n>\n> > When i try to REINDEX on the table,\n>\n> Why are you doing that?\n>\n> > its take a long time.\n>\n> How long?\n>\n> > I tried increasing the maintenance_work_mem, but still i havnt\n> > find a satisfying result.\n>\n> What run time are you expecting?\n>\n> > Questions\n> > =======\n> > 1. What are the parameters will effect, when issuing the REINDEX\n> > command\n> > 2. Best possible way to increase the spead of the REINDEX\n>\n> It's hard to answer that without more information, like PostgreSQL\n> version and configuration, for starters. See:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> My best guess is that you can make them instantaneous by not running\n> them. A good VACUUM policy should make such runs unnecessary in\n> most cases -- at least on recent PostgreSQL versions.\n>\n> -Kevin\n>\n\nHi Kevin,\n \nThank you for the update,\n \n>>What does the table look like? What indexes are there?\nTable has a combination of byteas. Indexes are b-tree and Partial\n \n>>Why are you doing that?\nOur table face lot of updates and deletes in a day, so we prefer reindex to update the indexes as well overcome with a corrupted index.\n \n>> How long?\nMore than 4 hrs..\n \n>>What run time are you expecting?\nLess than what it is taking at present.\n \n>>It's hard to answer that without more information, like PostgreSQL>>version and configuration, for starters. See:\n version------------------------------------------------------------------------------------------------------------ PostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-44), 32-bit\n(1 row)\n>>http://wiki.postgresql.org/wiki/SlowQueryQuestions\nExpected the performance question..\n \nRegards\nRaghavendra\nOn Thu, Apr 1, 2010 at 2:32 AM, Kevin Grittner <[email protected]> wrote:\n\nraghavendra t <[email protected]> wrote:> I have a table with 40GB size, it has few indexes on it.What does the table look like? What indexes are there?\n> When i try to REINDEX on the table,Why are you doing that?\n> its take a long time.How long?\n> I tried increasing the maintenance_work_mem, but still i havnt> find a satisfying result.What run time are you expecting?\n> Questions> =======> 1. What are the parameters will effect, when issuing the REINDEX> command> 2. 
Best possible way to increase the spead of the REINDEX\nIt's hard to answer that without more information, like PostgreSQLversion and configuration, for starters. See:http://wiki.postgresql.org/wiki/SlowQueryQuestions\nMy best guess is that you can make them instantaneous by not runningthem. A good VACUUM policy should make such runs unnecessary inmost cases -- at least on recent PostgreSQL versions.\n-Kevin",
"msg_date": "Thu, 1 Apr 2010 03:03:48 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "raghavendra t <[email protected]> wrote:\n \n> overcome with a corrupted index.\n \nIf this is a one-time fix for a corrupted index, did you look at\nCREATE INDEX CONCURRENTLY? You could avoid any down time while you\nfix things up.\n \nhttp://www.postgresql.org/docs/8.4/interactive/sql-createindex.html\n \n-Kevin\n",
"msg_date": "Wed, 31 Mar 2010 16:40:04 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": ">\n> If this is a one-time fix for a corrupted index, did you look at\n> CREATE INDEX CONCURRENTLY? You could avoid any down time while you\n> fix things up.\n>\nUsing CREATE INDEX CONCURRENTLY will avoid the exclusive locks on the table,\nbut my question is, how to get a performance on the existing indexes. You\nmean to say , drop the existing indexes and create the index with\nCONCURRENTLY. Does this give the performance back.\n\nRegards\nRaghavendra\n\n\nOn Thu, Apr 1, 2010 at 3:10 AM, Kevin Grittner\n<[email protected]>wrote:\n\n> raghavendra t <[email protected]> wrote:\n>\n> > overcome with a corrupted index.\n>\n> If this is a one-time fix for a corrupted index, did you look at\n> CREATE INDEX CONCURRENTLY? You could avoid any down time while you\n> fix things up.\n>\n> http://www.postgresql.org/docs/8.4/interactive/sql-createindex.html\n>\n> -Kevin\n>\n\nIf this is a one-time fix for a corrupted index, did you look atCREATE INDEX CONCURRENTLY? You could avoid any down time while you\nfix things up.\nUsing CREATE INDEX CONCURRENTLY will avoid the exclusive locks on the table, but my question is, how to get a performance on the existing indexes. You mean to say , drop the existing indexes and create the index with CONCURRENTLY. Does this give the performance back.\n \nRegards\nRaghavendra\n \n \nOn Thu, Apr 1, 2010 at 3:10 AM, Kevin Grittner <[email protected]> wrote:\n\nraghavendra t <[email protected]> wrote:\n> overcome with a corrupted index.If this is a one-time fix for a corrupted index, did you look atCREATE INDEX CONCURRENTLY? You could avoid any down time while youfix things up.\nhttp://www.postgresql.org/docs/8.4/interactive/sql-createindex.html-Kevin",
"msg_date": "Thu, 1 Apr 2010 03:17:22 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "raghavendra t <[email protected]> wrote:\n \n> my question is, how to get a performance on the existing indexes.\n> You mean to say , drop the existing indexes and create the index\n> with CONCURRENTLY. Does this give the performance back.\n \nYou would normally want to create first and then drop the old ones,\nunless the old ones are hopelessly corrupted. Since you still\nhaven't given me any information to suggest you need to reindex\nexcept for the mention of corruption, or any information to help\nidentify where the performance bottleneck is, I can't see any other\nimprovements to suggest at this point.\n \n-Kevin\n",
"msg_date": "Wed, 31 Mar 2010 16:51:55 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
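The create-first-then-drop sequence Kevin describes, written out as a minimal sketch (table, column and index names are placeholders, not from the thread):

-- does not block reads or writes on the table; cannot run inside a transaction block
CREATE INDEX CONCURRENTLY bigtable_col_idx_new ON bigtable (col);

-- check pg_index.indisvalid first: a failed concurrent build is left behind marked INVALID
DROP INDEX bigtable_col_idx;                          -- the old, bloated or corrupted index
ALTER INDEX bigtable_col_idx_new RENAME TO bigtable_col_idx;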
{
"msg_contents": "Thank you for the suggestion.\n\nOn Thu, Apr 1, 2010 at 3:21 AM, Kevin Grittner\n<[email protected]>wrote:\n\n> raghavendra t <[email protected]> wrote:\n>\n> > my question is, how to get a performance on the existing indexes.\n> > You mean to say , drop the existing indexes and create the index\n> > with CONCURRENTLY. Does this give the performance back.\n>\n> You would normally want to create first and then drop the old ones,\n> unless the old ones are hopelessly corrupted. Since you still\n> haven't given me any information to suggest you need to reindex\n> except for the mention of corruption, or any information to help\n> identify where the performance bottleneck is, I can't see any other\n> improvements to suggest at this point.\n>\n> -Kevin\n>\n\nThank you for the suggestion.\nOn Thu, Apr 1, 2010 at 3:21 AM, Kevin Grittner <[email protected]> wrote:\n\nraghavendra t <[email protected]> wrote:\n> my question is, how to get a performance on the existing indexes.> You mean to say , drop the existing indexes and create the index> with CONCURRENTLY. Does this give the performance back.\nYou would normally want to create first and then drop the old ones,unless the old ones are hopelessly corrupted. Since you stillhaven't given me any information to suggest you need to reindexexcept for the mention of corruption, or any information to help\nidentify where the performance bottleneck is, I can't see any otherimprovements to suggest at this point.-Kevin",
"msg_date": "Thu, 1 Apr 2010 03:30:52 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "raghavendra t <[email protected]> wrote:\n> Thank you for the suggestion.\n \nI'm sorry I couldn't come up with more, but what you've provided so\nfar is roughly equivalent to me telling you that it takes over four\nhours to travel to see my Uncle Jim, and then asking you how I can\nfind out how he's doing in less time than that. There's just not\nmuch to go on. :-(\n \nIf you proceed with the course suggested in the URL I referenced,\npeople on the list have a chance to be more helpful to you.\n \n-Kevin\n",
"msg_date": "Wed, 31 Mar 2010 17:10:52 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t\n<[email protected]> wrote:\n>\n>>>Why are you doing that?\n> Our table face lot of updates and deletes in a day, so we prefer reindex to\n> update the indexes as well overcome with a corrupted index.\n>\n\ndo you have a corrupted index? if not, there is nothing to do...\nREINDEX is not a mantenance task on postgres\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n",
"msg_date": "Wed, 31 Mar 2010 18:25:01 -0400",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": ">\n> I'm sorry I couldn't come up with more, but what you've provided so\n> far is roughly equivalent to me telling you that it takes over four\n> hours to travel to see my Uncle Jim, and then asking you how I can\n> find out how he's doing in less time than that. There's just not\n> much to go on. :-(\n>\n> If you proceed with the course suggested in the URL I referenced,\n> people on the list have a chance to be more helpful to you.\n>\nInstead of looking into the priority of the question or where it has to be\nposted, it would be appreciated to keep a discussion to the point\nmentioned. Truely this question belong to some other place as you have\nmentioned in the URL. But answer for Q1 might be expected alteast. Hope i\ncould get the information from the other Thread in other catagory.\n\nThank you\n\nRegards\nRaghavendra\n\n\nOn Thu, Apr 1, 2010 at 3:40 AM, Kevin Grittner\n<[email protected]>wrote:\n\n> raghavendra t <[email protected]> wrote:\n> > Thank you for the suggestion.\n>\n> I'm sorry I couldn't come up with more, but what you've provided so\n> far is roughly equivalent to me telling you that it takes over four\n> hours to travel to see my Uncle Jim, and then asking you how I can\n> find out how he's doing in less time than that. There's just not\n> much to go on. :-(\n>\n> If you proceed with the course suggested in the URL I referenced,\n> people on the list have a chance to be more helpful to you.\n>\n> -Kevin\n>\n\nI'm sorry I couldn't come up with more, but what you've provided sofar is roughly equivalent to me telling you that it takes over four\nhours to travel to see my Uncle Jim, and then asking you how I canfind out how he's doing in less time than that. There's just notmuch to go on. :-(If you proceed with the course suggested in the URL I referenced,\npeople on the list have a chance to be more helpful to you.\nInstead of looking into the priority of the question or where it has to be posted, it would be appreciated to keep a discussion to the point mentioned. Truely this question belong to some other place as you have mentioned in the URL. But answer for Q1 might be expected alteast. Hope i could get the information from the other Thread in other catagory.\n \nThank you\n \nRegards\nRaghavendra\n \n \nOn Thu, Apr 1, 2010 at 3:40 AM, Kevin Grittner <[email protected]> wrote:\n\nraghavendra t <[email protected]> wrote:\n> Thank you for the suggestion.I'm sorry I couldn't come up with more, but what you've provided sofar is roughly equivalent to me telling you that it takes over fourhours to travel to see my Uncle Jim, and then asking you how I can\nfind out how he's doing in less time than that. There's just notmuch to go on. :-(If you proceed with the course suggested in the URL I referenced,people on the list have a chance to be more helpful to you.\n-Kevin",
"msg_date": "Thu, 1 Apr 2010 04:27:13 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "Jaime Casanova wrote:\n> On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t\n> <[email protected]> wrote:\n>>>> Why are you doing that?\n>> Our table face lot of updates and deletes in a day, so we prefer reindex to\n>> update the indexes as well overcome with a corrupted index.\n>>\n> \n> do you have a corrupted index? if not, there is nothing to do...\n> REINDEX is not a mantenance task on postgres\n\nActually, if your free_space_map (pre 8.4) isn't up to keeping track of\nbloat, or autovac isn't running enough, you're likely to get bloat of\nindexes as well as tables that may need VACUUM FULL + REINDEX to\nproperly clean up.\n\nIt's probably better to fix your fsm/autovac settings then CLUSTER the\ntable so it doesn't happen again, though.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 01 Apr 2010 11:11:36 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
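A minimal sketch of the one-time cleanup Craig describes, with entirely hypothetical table and index names (nothing here comes from the original poster's schema). CLUSTER rewrites the table in index order and rebuilds all of its indexes, which clears out existing bloat in a single pass:

    -- hypothetical names; the USING form of CLUSTER is available from 8.3 on
    CLUSTER bloated_table USING bloated_table_pkey;
    ANALYZE bloated_table;

CLUSTER holds an exclusive lock and needs roughly enough free disk for a second copy of the table, so it is normally run in a maintenance window.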
{
"msg_contents": "On Thu, 2010-04-01 at 04:27 +0530, raghavendra t wrote:\n> I'm sorry I couldn't come up with more, but what you've\n> provided so\n> far is roughly equivalent to me telling you that it takes over\n> four\n> hours to travel to see my Uncle Jim, and then asking you how I\n> can\n> find out how he's doing in less time than that. There's just\n> not\n> much to go on. :-(\n> \n> If you proceed with the course suggested in the URL I\n> referenced,\n> people on the list have a chance to be more helpful to you.\n> Instead of looking into the priority of the question or where it has\n> to be posted, it would be appreciated to keep a discussion to the\n> point mentioned. Truely this question belong to some other place as\n> you have mentioned in the URL. But answer for Q1 might be expected\n> alteast. \n\nOk, here is my answer to your Q1:\n\nQ1. What are the parameters will effect, when issuing the REINDEX\ncommand\n\nA: Assuming you meant what parameters affect performance of REINDEX\ncommand.\n\nMost parameters that affect general performance affect also REINDEX\ncommand.\n\nSome that affect more are:\n\n* amount of RAM in your server - the most important thing\n\n* speed of disk subsystem - next most important in case not all of\nactive data fits in memory \n\nTunables\n\n* maintenance_work_mem - affects how much of sorting can be done in\nmemory, if you can afford to have maintenance_work_mem > largest index\nsize then sorting for index creation can be done in RAM only and is\nsignificantly faster than when doing tape sort with intermediate files\non disks.\n\n* wal_buffers - the bigger the better here, but competes with how big\nyou can make maintenance_work_mem . If more of heap and created indexes\ncan be kept in shared memory, everything runs faster.\n\n* checkpoint_segments - affects how often whole wal_buffers is synced to\ndisk, if done too often then wastes lot of disk bandwidth for no good\nreason.\n\n* other chekpoint_* - tune to avoid excessive checkpointing.\n\n> Hope i could get the information from the other Thread in other\n> catagory.\n\nNah, actually [PERFORM] is the right place to ask. \n\nJust most people got the impression that you may be doing unnecessary\nREINDEXing, and the best way to speed up unneeded things is not to do\nthem ;)\n\n> Thank you\n> \n> Regards\n> Raghavendra\n\n\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n",
"msg_date": "Thu, 01 Apr 2010 13:54:33 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
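As a hedged illustration of the maintenance_work_mem point above (the value and the index name are assumptions, not taken from this thread), the setting can be raised for just the session that does the rebuild, so the index sort fits in RAM without reserving that much memory for every backend:

    SET maintenance_work_mem = '1GB';   -- size it toward the largest index being rebuilt, if RAM allows
    REINDEX INDEX some_big_index;
    RESET maintenance_work_mem;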
{
"msg_contents": "On 03/31/2010 11:11 PM, Craig Ringer wrote:\n> Jaime Casanova wrote:\n>> On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t\n>> <[email protected]> wrote:\n>>>>> Why are you doing that?\n>>> Our table face lot of updates and deletes in a day, so we prefer reindex to\n>>> update the indexes as well overcome with a corrupted index.\n>>>\n>>\n>> do you have a corrupted index? if not, there is nothing to do...\n>> REINDEX is not a mantenance task on postgres\n>\n> Actually, if your free_space_map (pre 8.4) isn't up to keeping track of\n> bloat, or autovac isn't running enough, you're likely to get bloat of\n> indexes as well as tables that may need VACUUM FULL + REINDEX to\n> properly clean up.\n>\n> It's probably better to fix your fsm/autovac settings then CLUSTER the\n> table so it doesn't happen again, though.\n>\n> --\n> Craig Ringer\n>\nSo am I to understand I don't need to do daily reindexing as a maintenance\nmeasure with 8.3.7 on FreeBSD.\n\n-- \nStephen Clark\nNetWolves\nSr. Software Engineer III\nPhone: 813-579-3200\nFax: 813-882-0209\nEmail: [email protected]\nwww.netwolves.com\n",
"msg_date": "Thu, 01 Apr 2010 07:10:13 -0400",
"msg_from": "Steve Clark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "raghavendra t wrote:\n> 1. What are the parameters will effect, when issuing the REINDEX command\n> 2. Best possible way to increase the spead of the REINDEX\n\nIf you haven't done the usual general tuning on your server, that might \nhelp. http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server is \nan introduction. If increasing maintainance_work_mem alone doesn't \nhelp, I'd try increases to checkpoint_segments and then shared_buffers \nnext. Those are the three parameters mostly likely to speed that up.\n\nThe things already suggested in this thread are still valid though. \nNeeding to REINDEX suggests there may be a problem with your database \nbetter addressed by running autovacuum more regularly.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 01 Apr 2010 07:36:54 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "\n> So am I to understand I don't need to do daily reindexing as a \n> maintenance measure with 8.3.7 on FreeBSD.\n\nSometimes it's better to have indexes with some space in them so every \ninsert doesn't hit a full index page and triggers a page split to make \nsome space.\nOf course if the index is 90% holes you got a problem ;)\n\n\n",
"msg_date": "Thu, 01 Apr 2010 14:25:53 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "Hi All,\n\nSystem Config\n---------------------\nCPU - Intel® Xenon® CPU\nCPU Speed - 3.16 GHz\nServer Model - Sun Fire X4150\nRAM-Size - 16GB\n\n> Steve:\n\nSo am I to understand I don't need to do daily reindexing as a maintenance\n> measure with 8.3.7 on FreeBSD.\n\n\nMy question is something like Steve's, why we should not do reindexing as\nour maintenance task. I was doing reindex only to get\na best fit and not fall short of 90% hole, bcoz my table has lot of updates\nand deletes. We also has the weekly maintance of VACUUM, but still reindex\ntakes lot of time.\n\nPresent Paramters settings\n----------------------------------------\nmaintainence_work_mem - 1GB\nCheckpoint_segment and Wal_buffers are default values.\n\n\nKevin, Pierre, Greg, Steve, Hannu, Jorge ----- Thank you for your\nwonderfull support and giving me the correct picture on REINDEX on this\nthread. I appoligies if i couldnt have shared the proper information in\nresolving my issue. Is the above information provided by me will help out in\ntuning better.\n\nRegards\nRaghavendra\n\n\n\n\nOn Thu, Apr 1, 2010 at 5:55 PM, Pierre C <[email protected]> wrote:\n\n>\n> So am I to understand I don't need to do daily reindexing as a maintenance\n>> measure with 8.3.7 on FreeBSD.\n>>\n>\n> Sometimes it's better to have indexes with some space in them so every\n> insert doesn't hit a full index page and triggers a page split to make some\n> space.\n> Of course if the index is 90% holes you got a problem ;)\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n>\n>\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi All,\n \nSystem Config\n---------------------CPU - Intel® Xenon® CPUCPU Speed - 3.16 GHzServer Model - Sun Fire X4150RAM-Size - 16GB\nSteve:\nSo am I to understand I don't need to do daily reindexing as a maintenance measure with 8.3.7 on FreeBSD.\n \nMy question is something like Steve's, why we should not do reindexing as our maintenance task. I was doing reindex only to get\na best fit and not fall short of 90% hole, bcoz my table has lot of updates and deletes. We also has the weekly maintance of VACUUM, but still reindex takes lot of time. \n \nPresent Paramters settings\n----------------------------------------\nmaintainence_work_mem - 1GB\nCheckpoint_segment and Wal_buffers are default values.\n \n \nKevin, Pierre, Greg, Steve, Hannu, Jorge ----- Thank you for your wonderfull support and giving me the correct picture on REINDEX on this thread. I appoligies if i couldnt have shared the proper information in resolving my issue. Is the above information provided by me will help out in tuning better.\n \nRegards\nRaghavendra\n \n \n \n \nOn Thu, Apr 1, 2010 at 5:55 PM, Pierre C <[email protected]> wrote:\n\n\nSo am I to understand I don't need to do daily reindexing as a maintenance measure with 8.3.7 on FreeBSD.\nSometimes it's better to have indexes with some space in them so every insert doesn't hit a full index page and triggers a page split to make some space.Of course if the index is 90% holes you got a problem ;)\n-- Sent via pgsql-performance mailing list ([email protected]) \nTo make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 1 Apr 2010 19:17:15 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "On Thu, 2010-04-01 at 19:17 +0530, raghavendra t wrote:\n> \n> Hi All,\n> \n> System Config\n> ---------------------\n> CPU - Intel® Xenon® CPU\n> CPU Speed - 3.16 GHz\n> Server Model - Sun Fire X4150\n> RAM-Size - 16GB\n> \n> Steve:\n> So am I to understand I don't need to do daily reindexing as a\n> maintenance measure with 8.3.7 on FreeBSD.\n> \n> My question is something like Steve's, why we should not do reindexing\n> as our maintenance task. I was doing reindex only to get\n> a best fit and not fall short of 90% hole, bcoz my table has lot of\n> updates and deletes. We also has the weekly maintance of VACUUM, but\n> still reindex takes lot of time. \n\nThis is your problem. You should enable autovaccuum, let the vacuums\nhappen more frequently, and this problem will go away. You will still\nhave to fix the underlying bloat a last time though.\n \n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Thu, 01 Apr 2010 09:52:55 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
{
"msg_contents": "raghavendra t <[email protected]> wrote:\n \n> System Config\n> ---------------------\n> CPU - Intel* Xenon* CPU\n> CPU Speed - 3.16 GHz\n> Server Model - Sun Fire X4150\n> RAM-Size - 16GB\n \nThe disk system matters a lot, too. How many drives do you have in\nwhat RAID configuration(s)?\n \n> My question is something like Steve's, why we should not do\n> reindexing as our maintenance task.\n \nIf your VACUUM policy is good, the REINDEX should not be necessary.\nA good VACUUM policy, in my experience, usually involves setting it\nto VACUUM any table in which 20% or more of the rows have changed\n(with autovacuum_vacuum_threshold set pretty low). Cut those about\nin half for the autovacuum ANALYZE trigger point. You may need to\nuse cost limits to avoid a hit on the production workload when\nautovacuum kicks in. If you do need that, I've found a 10ms naptime\nis adequate for us. Then, try running VACUUM ANALYZE VERBOSE\n*nightly* (again, with cost limits if needed to avoid impact on\nother processes). Capture the output, as it can be used to find\nwhere you have bloat problems. Monitor the last few lines to make\nsure your fsm (free space manager) settings are high enough -- it'll\ntell you if they're not.\n \nIf you do this, you should be able to stop running REINDEX without\nany performance hit. There will be some dead space in the indexes,\nbut this will likely help with performance of UPDATE and INSERT, as\npage splits will happen less frequently, and PostgreSQL won't have\nto constantly be asking the OS for more disk space.\n \n> I was doing reindex only to get a best fit and not fall short of\n> 90% hole, bcoz my table has lot of updates and deletes. We also\n> has the weekly maintance of VACUUM, but still reindex takes lot of\n> time.\n \nVACUUM won't help a lot with REINDEX time, since REINDEX has to read\nthe entire table once per index and build everything up from scratch\nevery time. If you VACUUM often enough, it is kept in good shape as\nyou go.\n \n> Present Paramters settings\n> ----------------------------------------\n> maintainence_work_mem - 1GB\n> Checkpoint_segment and Wal_buffers are default values.\n \nYou will probably benefit from increasing those last two. Is\neverything else at the default? There are a few others which almost\nalways need to be tuned to your run time environment. The defaults\nare designed to allow the server to start and run even on a very\nsmall desktop machine, so that someone's first \"test drive\" isn't\nmarred by problems. When you gear up for production use, you\nnormally need to tune it.\n \n> Kevin, Pierre, Greg, Steve, Hannu, Jorge ----- Thank you for\n> your wonderfull support and giving me the correct picture on\n> REINDEX on this thread. I appoligies if i couldnt have shared the\n> proper information in resolving my issue. Is the above information\n> provided by me will help out in tuning better.\n \nI'm starting to get a better picture of the environment. I really\nthink that if you modify your VACUUM policy you can drop the REINDEX\nand be in much better shape overall. If you provide information on\nyour disk subsystem, show us what your postgresql.conf file looks\nlike (with all comments stripped out), and give us a general idea of\nthe workload, we might be able to suggest some tuning that will help\nyou overall. 
And you might think about when and how to upgrade --\nautovacuum is something which has been getting better with major\nreleases, and performance in general has been improving markedly.\n \nI hope this helps.\n \n-Kevin\n",
"msg_date": "Thu, 01 Apr 2010 09:19:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
},
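A rough postgresql.conf sketch of the policy Kevin outlines. The numbers are illustrative assumptions, not settings taken from the thread, and need to be checked against the actual workload:

    autovacuum = on
    autovacuum_vacuum_scale_factor = 0.2     # vacuum a table once ~20% of its rows have changed
    autovacuum_analyze_scale_factor = 0.1    # analyze at roughly half that
    autovacuum_vacuum_threshold = 50         # keep the fixed row threshold low
    autovacuum_vacuum_cost_delay = 10ms      # soften the impact on foreground queries

On 8.3 the free space map also has to be sized generously (max_fsm_pages in particular); the last lines of a database-wide VACUUM VERBOSE report whether it is large enough.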
{
"msg_contents": "On Thu, Apr 1, 2010 at 9:47 AM, raghavendra t <[email protected]> wrote:\n> and deletes. We also has the weekly maintance of VACUUM, but still reindex\n> takes lot of time.\n\nIf you only VACUUM once a week, *everything* is going to take a lot of time.\n\n...Robert\n",
"msg_date": "Mon, 5 Apr 2010 12:54:29 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to fast the REINDEX"
}
] |
[
{
"msg_contents": "I came across a strange problem when writing a plpgsql function.\n\nWhy won't the query planner realize it would be a lot faster to use the\n\"index_transactions_accountid_currency\" index instead of using the\n\"transactions_pkey\" index in the queries below?\nThe LIMIT 1 part of the query slows it down from 0.07 ms to 1023 ms.\n\nIs this a bug? I'm using version 8.4.1.\n\ndb=# SELECT TransactionID FROM Transactions WHERE AccountID = 108 AND\nCurrency = 'SEK' ORDER BY TransactionID;\n transactionid\n---------------\n 2870130\n 2870164\n 3371529\n 3371545\n 3371565\n(5 rows)\n\ndb=# EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE AccountID\n= 108 AND Currency = 'SEK' ORDER BY TransactionID;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=27106.33..27134.69 rows=11345 width=4) (actual\ntime=0.048..0.049 rows=5 loops=1)\n Sort Key: transactionid\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on transactions (cost=213.39..26342.26 rows=11345\nwidth=4) (actual time=0.033..0.039 rows=5 loops=1)\n Recheck Cond: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n -> Bitmap Index Scan on index_transactions_accountid_currency\n (cost=0.00..210.56 rows=11345 width=0) (actual time=0.027..0.027 rows=5\nloops=1)\n Index Cond: ((accountid = 108) AND (currency =\n'SEK'::bpchar))\n Total runtime: 0.070 ms\n(8 rows)\n\ndb=# SELECT TransactionID FROM Transactions WHERE AccountID = 108 AND\nCurrency = 'SEK' ORDER BY TransactionID LIMIT 1;\n transactionid\n---------------\n 2870130\n(1 row)\n\ndb=# EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE AccountID\n= 108 AND Currency = 'SEK' ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..43.46 rows=1 width=4) (actual time=1023.213..1023.214\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..493029.74 rows=11345 width=4) (actual time=1023.212..1023.212\nrows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1023.244 ms\n(4 rows)\n\ndb=# \\d transactions\n Table \"public.transactions\"\n Column | Type |\n Modifiers\n-------------------------------+--------------------------+-------------------------------------------------------\n transactionid | integer | not null default\nnextval('seqtransactions'::regclass)\n eventid | integer | not null\n ruleid | integer | not null\n accountid | integer | not null\n amount | numeric | not null\n balance | numeric | not null\n currency | character(3) | not null\n recorddate | timestamp with time zone | not null default\nnow()\nIndexes:\n \"transactions_pkey\" PRIMARY KEY, btree (transactionid)\n \"index_transactions_accountid_currency\" btree (accountid, currency)\n \"index_transactions_eventid\" btree (eventid)\nForeign-key constraints:\n \"transactions_accountid_fkey\" FOREIGN KEY (accountid) REFERENCES\naccounts(accountid) DEFERRABLE\n \"transactions_eventid_fkey\" FOREIGN KEY (eventid) REFERENCES\nevents(eventid) DEFERRABLE\n \"transactions_ruleid_fkey\" FOREIGN KEY (ruleid) REFERENCES rules(ruleid)\nDEFERRABLE\n\n\n-- \nBest regards,\n\nJoel Jacobson\n\nI came across a strange problem when writing a plpgsql function.Why won't the query planner realize it would be a lot faster to use the 
\"index_transactions_accountid_currency\" index instead of using the \"transactions_pkey\" index in the queries below?\nThe LIMIT 1 part of the query slows it down from 0.07 ms to 1023 ms.Is this a bug? I'm using version 8.4.1.db=# SELECT TransactionID FROM Transactions WHERE AccountID = 108 AND Currency = 'SEK' ORDER BY TransactionID;\n transactionid --------------- 2870130\n 2870164 3371529 3371545\n 3371565(5 rows)\ndb=# EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE AccountID = 108 AND Currency = 'SEK' ORDER BY TransactionID;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=27106.33..27134.69 rows=11345 width=4) (actual time=0.048..0.049 rows=5 loops=1) Sort Key: transactionid\n Sort Method: quicksort Memory: 25kB -> Bitmap Heap Scan on transactions (cost=213.39..26342.26 rows=11345 width=4) (actual time=0.033..0.039 rows=5 loops=1)\n Recheck Cond: ((accountid = 108) AND (currency = 'SEK'::bpchar)) -> Bitmap Index Scan on index_transactions_accountid_currency (cost=0.00..210.56 rows=11345 width=0) (actual time=0.027..0.027 rows=5 loops=1)\n Index Cond: ((accountid = 108) AND (currency = 'SEK'::bpchar)) Total runtime: 0.070 ms\n(8 rows)db=# SELECT TransactionID FROM Transactions WHERE AccountID = 108 AND Currency = 'SEK' ORDER BY TransactionID LIMIT 1;\n transactionid --------------- 2870130\n(1 row)db=# EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE AccountID = 108 AND Currency = 'SEK' ORDER BY TransactionID LIMIT 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..43.46 rows=1 width=4) (actual time=1023.213..1023.214 rows=1 loops=1) -> Index Scan using transactions_pkey on transactions (cost=0.00..493029.74 rows=11345 width=4) (actual time=1023.212..1023.212 rows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar)) Total runtime: 1023.244 ms\n(4 rows)db=# \\d transactions\n Table \"public.transactions\" Column | Type | Modifiers \n-------------------------------+--------------------------+------------------------------------------------------- transactionid | integer | not null default nextval('seqtransactions'::regclass)\n eventid | integer | not null ruleid | integer | not null\n accountid | integer | not null amount | numeric | not null\n balance | numeric | not null currency | character(3) | not null\n recorddate | timestamp with time zone | not null default now()Indexes:\n \"transactions_pkey\" PRIMARY KEY, btree (transactionid) \"index_transactions_accountid_currency\" btree (accountid, currency)\n \"index_transactions_eventid\" btree (eventid)Foreign-key constraints:\n \"transactions_accountid_fkey\" FOREIGN KEY (accountid) REFERENCES accounts(accountid) DEFERRABLE \"transactions_eventid_fkey\" FOREIGN KEY (eventid) REFERENCES events(eventid) DEFERRABLE\n \"transactions_ruleid_fkey\" FOREIGN KEY (ruleid) REFERENCES rules(ruleid) DEFERRABLE\n-- Best regards,Joel Jacobson",
"msg_date": "Fri, 2 Apr 2010 20:19:00 +0200",
"msg_from": "Joel Jacobson <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIMIT causes planner to do Index Scan using a less optimal index"
},
{
"msg_contents": "On Fri, Apr 2, 2010 at 2:19 PM, Joel Jacobson <[email protected]> wrote:\n> Is this a bug? I'm using version 8.4.1.\n\nIt's not really a bug, but it's definitely not a feature either.\n\n> Limit (cost=0.00..43.46 rows=1 width=4) (actual time=1023.213..1023.214\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..493029.74 rows=11345 width=4) (actual time=1023.212..1023.212\n> rows=1 loops=1)\n> Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n> Total runtime: 1023.244 ms\n> (4 rows)\n\nThe planner's idea here is that rows matching the filter criteria will\nbe common enough that an index scan over transactions_pkey will find\none fairly quickly, at which point the executor can return that row\nand stop. But it turns out that those rows aren't as common as the\nplanner thinks, so the search takes a long time.\n\n...Robert\n",
"msg_date": "Tue, 6 Apr 2010 14:33:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less optimal index"
}
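Two workarounds that are commonly used for this situation, sketched against the poster's own table; the new index name is made up here and neither statement appears in the thread. Either give the planner an index that satisfies both the filter and the ORDER BY, or keep the existing indexes and stop the primary key from looking usable for the sort:

    -- option 1: composite index, so LIMIT 1 becomes a short index probe
    CREATE INDEX index_transactions_acct_cur_txid
        ON transactions (accountid, currency, transactionid);

    -- option 2: wrap the sort key in an expression the transactions_pkey scan cannot satisfy
    SELECT transactionid
    FROM transactions
    WHERE accountid = 108 AND currency = 'SEK'
    ORDER BY transactionid + 0
    LIMIT 1;

Raising the statistics target on accountid and currency, so the planner stops estimating ~11,000 matching rows where there are only 5, is the other usual fix.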
] |
[
{
"msg_contents": "Hi there,\r\n\r\n \r\n\r\nAbout a year ago we setup a machine with sixteen 15k disk spindles on Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris, we want to move away (we are more familiar with Linux anyway).\r\n\r\n \r\n\r\nSo the plan is to move to Linux and put the data on a SAN using iSCSI (two or four network interfaces). This however leaves us with with 16 very nice disks dooing nothing. Sound like a wast of time. If we were to use Solaris, ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem with those features (ZFS on fuse it not really an option).\r\n\r\n \r\n\r\nSo I was thinking: Why not make a big fat array using 14 disks (raid 1, 10 or 5), and make this a big and fast swap disk. Latency will be lower than the SAN can provide, and throughput will also be better, and it will relief the SAN from a lot of read iops.\r\n\r\n \r\n\r\nSo I could create a 1TB swap disk, and put it onto the OS next to the 64GB of memory. Then I can set Postgres to use more than the RAM size so it will start swapping. It would appear to postgres that the complete database will fit into memory. The question is: will this do any good? And if so: what will happen?\r\n\r\n \r\n\r\nKind regards,\r\n\r\n \r\n\r\nChristiaan\r\n\n\n\n\n\nUsing high speed swap to improve performance?\n\n\n\nHi there, About a year ago we setup a machine with sixteen 15k disk spindles on Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris, we want to move away (we are more familiar with Linux anyway). So the plan is to move to Linux and put the data on a SAN using iSCSI (two or four network interfaces). This however leaves us with with 16 very nice disks dooing nothing. Sound like a wast of time. If we were to use Solaris, ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem with those features (ZFS on fuse it not really an option). So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10 or 5), and make this a big and fast swap disk. Latency will be lower than the SAN can provide, and throughput will also be better, and it will relief the SAN from a lot of read iops. So I could create a 1TB swap disk, and put it onto the OS next to the 64GB of memory. Then I can set Postgres to use more than the RAM size so it will start swapping. It would appear to postgres that the complete database will fit into memory. The question is: will this do any good? And if so: what will happen? Kind regards, Christiaan",
"msg_date": "Fri, 2 Apr 2010 21:15:00 +0200",
"msg_from": "=?windows-1252?Q?Christiaan_Willemsen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using high speed swap to improve performance?"
},
{
"msg_contents": "What about FreeBSD with ZFS? I have no idea which features they support \nand which not, but it at least is a bit more free than Solaris and still \noffers that very nice file system.\n\nBest regards,\n\nArjen\n\nOn 2-4-2010 21:15 Christiaan Willemsen wrote:\n> Hi there,\n>\n> About a year ago we setup a machine with sixteen 15k disk spindles on\n> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up\n> Solaris, we want to move away (we are more familiar with Linux anyway).\n>\n> So the plan is to move to Linux and put the data on a SAN using iSCSI\n> (two or four network interfaces). This however leaves us with with 16\n> very nice disks dooing nothing. Sound like a wast of time. If we were to\n> use Solaris, ZFS would have a solution: use it as L2ARC. But there is no\n> Linux filesystem with those features (ZFS on fuse it not really an option).\n>\n> So I was thinking: Why not make a big fat array using 14 disks (raid 1,\n> 10 or 5), and make this a big and fast swap disk. Latency will be lower\n> than the SAN can provide, and throughput will also be better, and it\n> will relief the SAN from a lot of read iops.\n>\n> So I could create a 1TB swap disk, and put it onto the OS next to the\n> 64GB of memory. Then I can set Postgres to use more than the RAM size so\n> it will start swapping. It would appear to postgres that the complete\n> database will fit into memory. The question is: will this do any good?\n> And if so: what will happen?\n>\n> Kind regards,\n>\n> Christiaan\n>\n",
"msg_date": "Fri, 02 Apr 2010 21:53:08 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
},
{
"msg_contents": "On Fri, Apr 2, 2010 at 3:15 PM, Christiaan Willemsen\n<[email protected]> wrote:\n> About a year ago we setup a machine with sixteen 15k disk spindles on\n> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,\n> we want to move away (we are more familiar with Linux anyway).\n>\n> So the plan is to move to Linux and put the data on a SAN using iSCSI (two\n> or four network interfaces). This however leaves us with with 16 very nice\n> disks dooing nothing. Sound like a wast of time. If we were to use Solaris,\n> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem\n> with those features (ZFS on fuse it not really an option).\n>\n> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10\n> or 5), and make this a big and fast swap disk. Latency will be lower than\n> the SAN can provide, and throughput will also be better, and it will relief\n> the SAN from a lot of read iops.\n>\n> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB\n> of memory. Then I can set Postgres to use more than the RAM size so it will\n> start swapping. It would appear to postgres that the complete database will\n> fit into memory. The question is: will this do any good? And if so: what\n> will happen?\n\nI suspect it will result in lousy performance because neither PG nor\nthe OS will understand that some of that \"memory\" is actually disk.\nBut if you end up testing it post the results back here for\nposterity...\n\n...Robert\n",
"msg_date": "Sun, 4 Apr 2010 16:52:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
},
{
"msg_contents": "On Sun, Apr 4, 2010 at 4:52 PM, Robert Haas <[email protected]> wrote:\n> On Fri, Apr 2, 2010 at 3:15 PM, Christiaan Willemsen\n> <[email protected]> wrote:\n>> About a year ago we setup a machine with sixteen 15k disk spindles on\n>> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,\n>> we want to move away (we are more familiar with Linux anyway).\n>>\n>> So the plan is to move to Linux and put the data on a SAN using iSCSI (two\n>> or four network interfaces). This however leaves us with with 16 very nice\n>> disks dooing nothing. Sound like a wast of time. If we were to use Solaris,\n>> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem\n>> with those features (ZFS on fuse it not really an option).\n>>\n>> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10\n>> or 5), and make this a big and fast swap disk. Latency will be lower than\n>> the SAN can provide, and throughput will also be better, and it will relief\n>> the SAN from a lot of read iops.\n>>\n>> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB\n>> of memory. Then I can set Postgres to use more than the RAM size so it will\n>> start swapping. It would appear to postgres that the complete database will\n>> fit into memory. The question is: will this do any good? And if so: what\n>> will happen?\n>\n> I suspect it will result in lousy performance because neither PG nor\n> the OS will understand that some of that \"memory\" is actually disk.\n> But if you end up testing it post the results back here for\n> posterity...\n\nErr, the OS will understand it, but PG will not.\n\n...Robert\n",
"msg_date": "Sun, 4 Apr 2010 16:53:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
},
{
"msg_contents": "On Fri, Apr 2, 2010 at 1:15 PM, Christiaan Willemsen\n<[email protected]> wrote:\n> Hi there,\n>\n> About a year ago we setup a machine with sixteen 15k disk spindles on\n> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,\n> we want to move away (we are more familiar with Linux anyway).\n>\n> So the plan is to move to Linux and put the data on a SAN using iSCSI (two\n> or four network interfaces). This however leaves us with with 16 very nice\n> disks dooing nothing. Sound like a wast of time. If we were to use Solaris,\n> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem\n> with those features (ZFS on fuse it not really an option).\n>\n> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10\n> or 5), and make this a big and fast swap disk. Latency will be lower than\n> the SAN can provide, and throughput will also be better, and it will relief\n> the SAN from a lot of read iops.\n>\n> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB\n> of memory. Then I can set Postgres to use more than the RAM size so it will\n> start swapping. It would appear to postgres that the complete database will\n> fit into memory. The question is: will this do any good? And if so: what\n> will happen?\n\nI'd make a couple of RAID-10s out of it and use them for highly used\ntables and / or indexes etc...\n",
"msg_date": "Sun, 4 Apr 2010 15:07:54 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
},
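One hedged way to act on that suggestion; the tablespace name, mount point, and object names below are all hypothetical:

    CREATE TABLESPACE local_fast LOCATION '/local_raid10/pgdata';
    ALTER TABLE hot_table SET TABLESPACE local_fast;
    ALTER INDEX hot_table_pkey SET TABLESPACE local_fast;

Moving a table this way rewrites it under an exclusive lock, so it is best done during a quiet period.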
{
"msg_contents": "Christiaan Willemsen wrote:\n> About a year ago we setup a machine with sixteen 15k disk spindles on \n> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up \n> Solaris, we want to move away (we are more familiar with Linux anyway).\n\nWhat evidence do you have that Oracle is \"closing up\" Solaris?\n<http://www.oracle.com/us/products/servers-storage/solaris/index.html>\nand its links, particularly\n<http://www.sun.com/software/solaris/10/index.jsp>\nseem to indicate otherwise.\n\nIndustry analysis seems to support the continuance of Solaris, too:\n<http://jeremy.linuxquestions.org/2010/02/03/oracle-sun-merger-closes/>\n\"... it would certainly appear that Oracle is committed to the Solaris \nplatform indefinitely.\"\n\nMore recently, less than a week ago as I write this, there was the article\n<http://news.yahoo.com/s/nf/20100330/tc_nf/72477>\nwhich discusses that Oracle may move away from open-sourcing Solaris, but \nindicates that Oracle remains committed to Solaris as a for-pay product, and \nalso assesses a rosy future for Java.\n\n-- \nLew\n",
"msg_date": "Sun, 04 Apr 2010 17:17:26 -0400",
"msg_from": "Lew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
},
{
"msg_contents": "Christiaan Willemsen wrote:\n>\n> So I was thinking: Why not make a big fat array using 14 disks (raid \n> 1, 10 or 5), and make this a big and fast swap disk. Latency will be \n> lower than the SAN can provide, and throughput will also be better, \n> and it will relief the SAN from a lot of read iops.\n>\n\nPresuming that swap will give predictable performance as things go into \nand out of there doesn't sound like a great idea to me. Have you \nconsidered adding that space as a tablespace and setting \ntemp_tablespaces to point to it? That's the best thing I can think of \nto use a faster local disk with lower integrity guarantees for.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 05 Apr 2010 10:55:10 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
},
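A minimal sketch of that idea, assuming a hypothetical mount point on the local array:

    CREATE TABLESPACE scratch LOCATION '/local_array/pg_scratch';
    SET temp_tablespaces = 'scratch';   -- or set it globally in postgresql.conf

Sorts and temporary tables that spill to disk then land on the local spindles instead of the SAN, without trusting them with any durable data.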
{
"msg_contents": "On Sun, Apr 4, 2010 at 3:17 PM, Lew <[email protected]> wrote:\n> Christiaan Willemsen wrote:\n>>\n>> About a year ago we setup a machine with sixteen 15k disk spindles on\n>> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,\n>> we want to move away (we are more familiar with Linux anyway).\n>\n> What evidence do you have that Oracle is \"closing up\" Solaris?\n\nI don't think the other poster mean shutting down solaris, that would\nbe insane. I think he meant closing it, as in taking it closed\nsource, which there is ample evidence for.\n",
"msg_date": "Tue, 6 Apr 2010 12:48:32 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
},
{
"msg_contents": "Christiaan Willemsen wrote:\n>>> About a year ago we setup a machine with sixteen 15k disk spindles on\n>>> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,\n>>> we want to move away (we are more familiar with Linux anyway).\n\nLew <[email protected]> wrote:\n>> What evidence do you have that Oracle is \"closing up\" Solaris?\n\nScott Marlowe wrote:\n> I don't think the other poster mean shutting down solaris, that would\n> be insane. I think he meant closing it, as in taking it closed\n> source, which there is ample evidence for.\n\nOh, that makes sense. Yes, it does seem that they're doing that.\n\nSome press hints that Oracle might keep OpenSolaris going, forked from the \nfor-pay product. If that really is true, I speculate that Oracle might be \nemulating the strategy in such things as Apache Geronimo - turn the \nopen-source side loose on the world under a license that lets you dip into it \nfor code in the closed-source product. Innovation flows to the closed-source \nproduct rather than from it. This empowers products like WebSphere \nApplication Server, which includes a lot of reworked Apache code in the \npersistence layer, the web-services stack, the app-server engine and elsewhere.\n\nI don't know Oracle's plans, but that sure would be a good move for them.\n\nFor me, I am quite satisfied with Linux. I don't really know what the value \nproposition is for Solaris anyway.\n\n-- \nLew\n",
"msg_date": "Wed, 07 Apr 2010 18:09:36 -0400",
"msg_from": "Lew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using high speed swap to improve performance?"
}
] |
[
{
"msg_contents": "Does the psql executable have any ability to do a \"fetch many\", using a server-side named cursor, when returning results? It seems like it tries to retrieve the query entirely to local memory before printing to standard out.\n\nSpecifically, I've tried running the following command on my desktop, which returns about 70 million lines:\n echo \"select [thirteen columns, all floating point and boolean types] from go_prediction_view_mouse gpv order by combined_score desc nulls last\" | psql -U [username] -h [remote host] [database] > mouse_predictions.tab\n\nThe command increases in memory usage until about 12GB, when it stops and there is significant disk activity (assume paging to disk). Closing a few big programs immediately causes it to increase its memory usage accordingly. After about 50 minutes, I killed it.\n\nIf I instead run the Python program below, which simply performs the query using a fetchmany() call to retrieve a few hundred thousand tuples at a time through a named server-side cursor, the program remains under about 20 MB of memory usage throughout and finishes in about 35 minutes.\n\nI know that the query used here could have been a COPY statement, which I assume would be better-behaved, but I'm more concerned about the case in which the query is more complex.\n\nThe local (OSX 10.6.2) version of Postgres is 8.4.2, and the server's (Ubuntu 4.x) version of Postgres is 8.3.5.\n\n\n------ Python source to use a named server-side cursor to dump a large number of rows to a file ------\n\nimport psycopg2\nimport psycopg2.extensions\n\n#---Modify this section---#\nspecies = 'mouse'\nquery = 'select go_term_ref, gene_remote_id, function_verified_exactly, function_verified_with_parent_go, function_verified_with_child_go, combined_score, obozinski_score, lee_score, mostafavi_score, guan_score, kim_score, joshi_score, tasan_score, tasan_revised_score, qi_score, leone_score from go_prediction_view_' + species + ' gpv order by combined_score desc nulls last'\noutputFilePath = [*output file path*]\nconnectionString = [*connection string*]\nqueryBufferSize = 10000\n\ndef processRow(row):\n # not important\n#---End modify this section----#\n\n\n#---Everything below should be genetic---#\nconn = psycopg2.connect(connectionString);\ncur = conn.cursor('temp_dump_cursor')\ncur.arraysize = queryBufferSize\n\ndef resultIter(cursor):\n 'An iterator that uses fetchmany to keep memory usage down'\n done = False\n while not done:\n results = cursor.fetchmany(queryBufferSize)\n if not results or len(results) == 0:\n done = True\n for result in results:\n yield result\n\nwith open(outputFilePath, 'w') as file:\n print 'Starting ' + query + '...'\n cur.execute(query)\n i = 0\n for row in resultIter(cur):\n i += 1\n row = processRow(row)\n file.write('\\t'.join(row) + '\\n')\n \n if i % queryBufferSize == 0:\n print 'Wrote ' + str(i) + ' lines.'\n\nprint 'finished. Total of ' + str(i) + ' lines.'\ncur.close()\nconn.close()",
"msg_date": "Fri, 2 Apr 2010 16:28:30 -0400",
"msg_from": "\"Beaver, John E\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does the psql executable support a \"fetch many\" approach when\n\tdumping large queries to stdout?"
},
{
"msg_contents": "> Does the psql executable have any ability to do a \"fetch many\", using a \n> server-side named cursor, when returning results? It seems like it tries \n> to retrieve the query entirely to local memory before printing to \n> standard out.\n\nI think it grabs the whole result set to calculate the display column \nwidths. I think there is an option to tweak this but don't remember which, \nhave a look at the psql commands (\\?), formatting section.\n",
"msg_date": "Fri, 02 Apr 2010 22:43:16 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *** PROBABLY SPAM *** Does the psql executable support\n\ta \"fetch many\" approach when dumping large queries to stdout?"
},
{
"msg_contents": "On Fri, 2010-04-02 at 16:28 -0400, Beaver, John E wrote:\n...\n\n> I know that the query used here could have been a COPY statement, which I assume would\n> be better-behaved, but I'm more concerned about the case in which the query is more complex.\n\nCOPY can copy out results of a SELECT query as well.\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n",
"msg_date": "Sat, 03 Apr 2010 01:18:08 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does the psql executable support a \"fetch many\"\n\tapproach when dumping large queries to stdout?"
},
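For example (the output path is arbitrary and the column list is trimmed from the original query; COPY's default text format is tab-delimited, matching the .tab file):

    COPY (
        SELECT go_term_ref, gene_remote_id, combined_score
        FROM go_prediction_view_mouse
        ORDER BY combined_score DESC NULLS LAST
    ) TO '/tmp/mouse_predictions.tab';

Server-side COPY TO a file needs superuser rights; psql's client-side \copy form of the same statement writes the file on the client instead.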
{
"msg_contents": "\n\n\n\n\nAh, you're right. Thanks Hannu, that's a good solution.\n\nHannu Krosing wrote:\n\nOn Fri, 2010-04-02 at 16:28 -0400, Beaver, John E wrote:\n...\n\n \n\nI know that the query used here could have been a COPY statement, which I assume would\n be better-behaved, but I'm more concerned about the case in which the query is more complex.\n \n\n\nCOPY can copy out results of a SELECT query as well.\n\n \n\n\n-- \nJohn E. Beaver\nBioinformatics Developer\nHarvard Medical School\n\n\n",
"msg_date": "Fri, 2 Apr 2010 18:18:46 -0400",
"msg_from": "John Beaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does the psql executable support a \"fetch many\" approach\n\twhen dumping large queries to stdout?"
},
{
"msg_contents": "\n\n\n\n\nThat makes sense. I'll just use a COPY statement instead like Hannu\nsuggests.\n\nPierre C wrote:\n\n\nDoes the psql executable have any ability to do a \"fetch many\", using a \nserver-side named cursor, when returning results? It seems like it tries \nto retrieve the query entirely to local memory before printing to \nstandard out.\n \n\n\nI think it grabs the whole result set to calculate the display column \nwidths. I think there is an option to tweak this but don't remember which, \nhave a look at the psql commands (\\?), formatting section.\n \n\n\n-- \nJohn E. Beaver\nBioinformatics Developer\nHarvard Medical School\n\n\n",
"msg_date": "Fri, 2 Apr 2010 18:19:38 -0400",
"msg_from": "John Beaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *** PROBABLY SPAM *** Does the psql executable support\n\ta \"fetch many\" approach when dumping large queries to stdout?"
},
{
"msg_contents": "Pierre C wrote:\n> > Does the psql executable have any ability to do a \"fetch many\", using a \n> > server-side named cursor, when returning results? It seems like it tries \n> > to retrieve the query entirely to local memory before printing to \n> > standard out.\n> \n> I think it grabs the whole result set to calculate the display column \n> widths. I think there is an option to tweak this but don't remember which, \n> have a look at the psql commands (\\?), formatting section.\n\nSee the FETCH_COUNT variable mentioned in the psql manual page.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n",
"msg_date": "Mon, 12 Apr 2010 22:49:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *** PROBABLY SPAM *** Does the psql executable\n\tsupport a \"fetch many\" approach when dumping large queries to stdout?"
}
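For reference, a short sketch of how that looks in practice; the batch size is arbitrary:

    -- inside psql, or pass --variable=FETCH_COUNT=10000 on the command line
    \set FETCH_COUNT 10000
    SELECT * FROM go_prediction_view_mouse ORDER BY combined_score DESC NULLS LAST;

With FETCH_COUNT set, psql runs the query through a cursor and formats each batch as it arrives, so client memory stays roughly constant; the trade-off is that column widths are computed per batch rather than for the whole result.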
] |
[
{
"msg_contents": "Hi All,\n\nI am facing the error \"cache lookup failed for relation X\" in Postgres-8.4.2\nversion. As you all know, its a reproducable and below is the example.\n\nThis can be generated with two sessions;\nAm opening two sessions here Session A and Session B\nSession A\n=========\nstep 1 - creating the table\n\npostgres=# create table cache(id integer);\n\nstep 2 - Create the trigger and the function on particular table\n\npostgres=# CREATE FUNCTION cachefunc() RETURNS trigger AS $$\n BEGIN\n RETURN NEW;\n END; $$ LANGUAGE plpgsql;\n\npostgres=# CREATE TRIGGER cachetrig BEFORE INSERT ON cache FOR EACH ROW\nEXECUTE PROCEDURE cachefunc();\n\nStep 3 - Inserting a row in a table\n\npostgres=# insert into cache values (1);\n\nStep 4 - Droping the table in BEGIN block and issuing the same drop in\nanother session B\n\npostgres=# begin;\npostgres=# drop table cache;\n\nstep 5 - Open the second session B and issue the same command\n\npostgres=# drop table cache; --- In session B, this will wait untill commit\nis issued by the Session A.\n\nstep 6 - Issue the commit in Session A\n\npostgres=# commit;\n\nStep -7 now we can the see the error in the session B\n\nERROR: cache lookup failed for relation X\n\nCould plese tell me, why this is generated and what is the cause.\n\nThanks in advance\n\nRegards\nRaghavendra\n\nHi All,\n \nI am facing the error \"cache lookup failed for relation X\" in Postgres-8.4.2 version. As you all know, its a reproducable and below is the example. \n \nThis can be generated with two sessions;\nAm opening two sessions here Session A and Session B\nSession A=========step 1 - creating the table\n \npostgres=# create table cache(id integer);\nstep 2 - Create the trigger and the function on particular table\n \npostgres=# CREATE FUNCTION cachefunc() RETURNS trigger AS $$ BEGIN RETURN NEW; END; $$ LANGUAGE plpgsql;\n \npostgres=# CREATE TRIGGER cachetrig BEFORE INSERT ON cache FOR EACH ROW EXECUTE PROCEDURE cachefunc();\n \nStep 3 - Inserting a row in a table\n \npostgres=# insert into cache values (1);\n \nStep 4 - Droping the table in BEGIN block and issuing the same drop in another session B\n \npostgres=# begin; postgres=# drop table cache;\n \nstep 5 - Open the second session B and issue the same command\n \npostgres=# drop table cache; --- In session B, this will wait untill commit is issued by the Session A.\n \nstep 6 - Issue the commit in Session A\n \npostgres=# commit;\n \nStep -7 now we can the see the error in the session B\n \nERROR: cache lookup failed for relation X\nCould plese tell me, why this is generated and what is the cause.\n \nThanks in advance\n \nRegards\nRaghavendra",
"msg_date": "Sat, 3 Apr 2010 22:08:11 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: cache lookup failed for relation X"
},
{
"msg_contents": "raghavendra t <[email protected]> writes:\n> I am facing the error \"cache lookup failed for relation X\" in Postgres-8.4.2\n> [ when dropping the same table concurrently in two sessions ]\n> Could plese tell me, why this is generated and what is the cause.\n\n From the perspective of session B, the table disappeared after it had\nalready found the table in the catalogs and obtained a lock on it.\nThis is not readily distinguishable from a serious problem such as\ncatalog corruption. While we could play dumb and just issue a\n\"table does not exist\" message as though we'd never found it in the\ncatalog at all, that behavior could mask real problems, so it seems\nbetter to not try to hide that something unusual happened.\n\nIn general I would say that an application that is trying to do this\nhas got its own problems anyway --- how do you know which version of\nthe table you're dropping, and that that's the right thing?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Apr 2010 13:05:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: cache lookup failed for relation X "
},
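If an application really does need to let more than one session try to drop the same table, one hedged way to avoid this error (the advisory-lock key and table name are only illustrative) is to serialize the DDL explicitly, so the second session looks the table up only after the first one has committed:

    SELECT pg_advisory_lock(42);    -- any key both sessions agree on
    DROP TABLE IF EXISTS cache;
    SELECT pg_advisory_unlock(42);

The later session then gets an ordinary "does not exist, skipping" notice instead of the cache lookup failure.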
{
"msg_contents": "Hi Tom,\n\nThank you for your Prompt reply..\n\nAm getting this error only in the application level that too with multiple\nsessions. By getting that error am assuming that the table has got dropped\nin some other session.\nI understood from here is that its not a serious problem to catalog\nCorruption.\nRegards\nRaghavendra\nOn Sat, Apr 3, 2010 at 10:35 PM, Tom Lane <[email protected]> wrote:\n\n> raghavendra t <[email protected]> writes:\n> > I am facing the error \"cache lookup failed for relation X\" in\n> Postgres-8.4.2\n> > [ when dropping the same table concurrently in two sessions ]\n> > Could plese tell me, why this is generated and what is the cause.\n>\n> From the perspective of session B, the table disappeared after it had\n> already found the table in the catalogs and obtained a lock on it.\n> This is not readily distinguishable from a serious problem such as\n> catalog corruption. While we could play dumb and just issue a\n> \"table does not exist\" message as though we'd never found it in the\n> catalog at all, that behavior could mask real problems, so it seems\n> better to not try to hide that something unusual happened.\n>\n> In general I would say that an application that is trying to do this\n> has got its own problems anyway --- how do you know which version of\n> the table you're dropping, and that that's the right thing?\n>\n> regards, tom lane\n>\n\nHi Tom,\n \nThank you for your Prompt reply..\n \nAm getting this error only in the application level that too with multiple sessions. By getting that error am assuming that the table has got dropped in some other session. \nI understood from here is that its not a serious problem to catalog Corruption.\nRegards\nRaghavendra\nOn Sat, Apr 3, 2010 at 10:35 PM, Tom Lane <[email protected]> wrote:\n\nraghavendra t <[email protected]> writes:> I am facing the error \"cache lookup failed for relation X\" in Postgres-8.4.2\n> [ when dropping the same table concurrently in two sessions ]\n> Could plese tell me, why this is generated and what is the cause.From the perspective of session B, the table disappeared after it hadalready found the table in the catalogs and obtained a lock on it.\nThis is not readily distinguishable from a serious problem such ascatalog corruption. While we could play dumb and just issue a\"table does not exist\" message as though we'd never found it in the\ncatalog at all, that behavior could mask real problems, so it seemsbetter to not try to hide that something unusual happened.In general I would say that an application that is trying to do thishas got its own problems anyway --- how do you know which version of\nthe table you're dropping, and that that's the right thing? regards, tom lane",
"msg_date": "Sun, 4 Apr 2010 08:20:12 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: cache lookup failed for relation X"
}
] |
[
{
"msg_contents": "Because You dropped/deleted the table cache in Session A. \n\nThe simplest way to look at it is Session B was lock out when the Drop table command was issued from Session A. Now when session B finally got its chance to drop/delete the table it was already gone .\n\nWhat kind error were you expecting from Postgresql to Return when it can't find the table??? \n\nIn the future please don't cross post to multiple lists. \n\n---- Message from mailto:[email protected] raghavendra t [email protected] at 04-03-2010 10:08:11 PM ------\n\n\n\nstep 6 - Issue the commit in Session A \n\npostgres=# commit; \n\nStep -7 nowwe can the see the error in the session B \n\nERROR: cache lookup failed for relation X \n\nCould plese tell me, why this is generated and what is the cause.\n\n\nThanks in advance \n\nRegards \nRaghavendra \n\n\n\nAll legitimate Magwerks Corporation quotations are sent in a .PDF file attachment with a unique ID number generated by our proprietary quotation system. Quotations received via any other form of communication will not be honored.\n\nCONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain legally privileged, confidential or other information proprietary to Magwerks Corporation and is intended solely for the use of the individual to whom it addresses. If the reader of this e-mail is not the intended recipient or authorized agent, the reader is hereby notified that any unauthorized viewing, dissemination, distribution or copying of this e-mail is strictly prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and destroy all occurrences of this e-mail immediately.\nThank you.\n\nBecause You dropped/deleted the table cache in Session A. The simplest way to look at it is Session B was lock out when the Drop table command was issued from Session A. Now when session B finally got its chance to drop/delete the table it was already gone .What kind error were you expecting from Postgresql to Return when it can't find the table??? In the future please don't cross post to multiple lists. ---- Message from raghavendra t <[email protected]> at 04-03-2010 10:08:11 PM ------step 6 - Issue the commit in Session A\n \npostgres=# commit;\n \nStep -7 now we can the see the error in the session B\n \nERROR: cache lookup failed for relation X\nCould plese tell me, why this is generated and what is the cause.\n \nThanks in advance\n \nRegards\nRaghavendra\n\nAll legitimate Magwerks Corporation quotations are sent in a .PDF file attachment with a unique ID number generated by our proprietary quotation system. Quotations received via any other form of communication will not be honored.\n\n\n\nCONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain legally privileged, confidential or other information proprietary to Magwerks Corporation and is intended solely for the use of the individual to whom it addresses. If the reader of this e-mail is not the intended recipient or authorized agent, the reader is hereby notified that any unauthorized viewing, dissemination, distribution or copying of this e-mail is strictly prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and destroy all occurrences of this e-mail immediately.\n\n\nThank you.",
"msg_date": "Sat, 3 Apr 2010 13:04:57 -0400",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: cache lookup failed for relation X"
},
{
"msg_contents": ">\n> Hi Justin,\n\nThank you for your reply..\n\n\n> In the future please don't cross post to multiple lists.\n\n\nAppoligies for it...\n\nRegards\nRaghavendra\n\nOn Sat, Apr 3, 2010 at 10:34 PM, [email protected]\n<[email protected]>wrote:\n\n\n> Because You dropped/deleted the table cache in Session A.\n>\n> The simplest way to look at it is Session B was lock out when the Drop\n> table command was issued from Session A. Now when session B finally got its\n> chance to drop/delete the table it was already gone .\n>\n> What kind error were you expecting from Postgresql to Return when it can't\n> find the table???\n>\n> In the future please don't cross post to multiple lists.\n>\n> ---- Message from raghavendra t <[email protected]><[email protected]>at 04-03-2010 10:08:11 PM ------\n>\n>\n> step 6 - Issue the commit in Session A\n>\n> postgres=# commit;\n>\n> Step -7 now we can the see the error in the session B\n>\n> ERROR: cache lookup failed for relation X\n> Could plese tell me, why this is generated and what is the cause.\n>\n>\n> Thanks in advance\n>\n> Regards\n> Raghavendra\n>\n> All legitimate Magwerks Corporation quotations are sent in a .PDF file\n> attachment with a unique ID number generated by our proprietary quotation\n> system. Quotations received via any other form of communication will not be\n> honored.\n>\n> CONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain\n> legally privileged, confidential or other information proprietary to\n> Magwerks Corporation and is intended solely for the use of the individual to\n> whom it addresses. If the reader of this e-mail is not the intended\n> recipient or authorized agent, the reader is hereby notified that any\n> unauthorized viewing, dissemination, distribution or copying of this e-mail\n> is strictly prohibited. If you have received this e-mail in error, please\n> notify the sender by replying to this message and destroy all occurrences of\n> this e-mail immediately.\n> Thank you.\n>\n>\n\n\nHi Justin,\n \nThank you for your reply..\n \nIn the future please don't cross post to multiple lists. \n \nAppoligies for it...\n \nRegards\nRaghavendra\n \nOn Sat, Apr 3, 2010 at 10:34 PM, [email protected] <[email protected]> wrote: \n \n\nBecause You dropped/deleted the table cache in Session A. \n \nThe simplest way to look at it is Session B was lock out when the Drop table command was issued from Session A. Now when session B finally got its chance to drop/delete the table it was already gone .\n \nWhat kind error were you expecting from Postgresql to Return when it can't find the table??? \n \nIn the future please don't cross post to multiple lists. \n \n---- Message from raghavendra t <[email protected]> at 04-03-2010 10:08:11 PM ------ \n\n\n \nstep 6 - Issue the commit in Session A\n \npostgres=# commit;\n \nStep -7 now we can the see the error in the session B\n \nERROR: cache lookup failed for relation X\n\nCould plese tell me, why this is generated and what is the cause.\n \n \nThanks in advance\n \nRegards\nRaghavendra\nAll legitimate Magwerks Corporation quotations are sent in a .PDF file attachment with a unique ID number generated by our proprietary quotation system. Quotations received via any other form of communication will not be honored. \n \nCONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain legally privileged, confidential or other information proprietary to Magwerks Corporation and is intended solely for the use of the individual to whom it addresses. 
If the reader of this e-mail is not the intended recipient or authorized agent, the reader is hereby notified that any unauthorized viewing, dissemination, distribution or copying of this e-mail is strictly prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and destroy all occurrences of this e-mail immediately. \nThank you.",
"msg_date": "Sun, 4 Apr 2010 08:22:00 +0530",
"msg_from": "raghavendra t <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: cache lookup failed for relation X"
}
] |
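A common way to avoid the race described above is to serialize the concurrent drops in the application rather than rely on catching the error. A rough sketch using an advisory lock follows; the key 12345 and the table name mytable are arbitrary. Note that DROP TABLE IF EXISTS by itself typically does not prevent the error here, because the catalog lookup still happens before the session starts waiting for the lock:

    SELECT pg_advisory_lock(12345);     -- every session that may drop this table takes the same key
    DROP TABLE IF EXISTS mytable;
    SELECT pg_advisory_unlock(12345);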
[
{
"msg_contents": "Hi, I have table with just on column named url (it's not real url,\njust random string for testing purposes), type text. I have lots of\nentries in it (it's dynamic, i add and remove them on the fly), 100\n000 and more. I've created index on this table to optimize\n\"searching\". I just want to test if some \"url\" is in in the table, so\ni am using this request:\n\nselect url from test2 where url ~* '^URLVALUE\\\\s*$';\n\nthere's \\\\s* because of padding. Here is the analyze:\n\npostgres=# explain analyze select url from test2 where url ~* '^zyxel\\\\s*$';\nWARNING: nonstandard use of \\\\ in a string literal\nLINE 1: ...plain analyze select url from test2 where url ~* '^zyxel\\\\s...\n ^\nHINT: Use the escape string syntax for backslashes, e.g., E'\\\\'.\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Seq Scan on test2 (cost=0.00..1726.00 rows=10 width=9) (actual\ntime=156.489..156.502 rows=1 loops=1)\n Filter: (url ~* '^zyxel\\\\s*$'::text)\n Total runtime: 156.538 ms\n(3 rows)\n\nIt takes 156 ms, it's too much for my purposes, so i want to decrease\nit. So what can I use for optimizing this request? Again, I just want\nto test if \"url\" (\"zyxel\" in this examlpe) is in the table.\n\nSome info:\n\nversion(): PostgreSQL 8.4.2 on i486-slackware-linux-gnu, compiled by\nGCC gcc (GCC) 4.3.3, 32-bit\nRam: 500 MB\nCPU: 2.6 Ghz (it's kvm virtualized, i don't know exact type, it's one core cpu)\n\nThank you.\n",
"msg_date": "Mon, 5 Apr 2010 16:28:35 +0200",
"msg_from": "Oliver Kindernay <[email protected]>",
"msg_from_op": true,
"msg_subject": "Huge table searching optimization"
},
{
"msg_contents": "On Mon, Apr 05, 2010 at 04:28:35PM +0200, Oliver Kindernay wrote:\n> Hi, I have table with just on column named url (it's not real url,\n> just random string for testing purposes), type text. I have lots of\n> entries in it (it's dynamic, i add and remove them on the fly), 100\n> 000 and more. I've created index on this table to optimize\n> \"searching\". I just want to test if some \"url\" is in in the table, so\n> i am using this request:\n> \n> select url from test2 where url ~* '^URLVALUE\\\\s*$';\n> \n> there's \\\\s* because of padding. Here is the analyze:\n> \n> postgres=# explain analyze select url from test2 where url ~* '^zyxel\\\\s*$';\n> WARNING: nonstandard use of \\\\ in a string literal\n> LINE 1: ...plain analyze select url from test2 where url ~* '^zyxel\\\\s...\n> ^\n> HINT: Use the escape string syntax for backslashes, e.g., E'\\\\'.\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------\n> Seq Scan on test2 (cost=0.00..1726.00 rows=10 width=9) (actual\n> time=156.489..156.502 rows=1 loops=1)\n> Filter: (url ~* '^zyxel\\\\s*$'::text)\n> Total runtime: 156.538 ms\n> (3 rows)\n> It takes 156 ms, it's too much for my purposes, so i want to decrease\n> it. So what can I use for optimizing this request? Again, I just want\n> to test if \"url\" (\"zyxel\" in this examlpe) is in the table.\n\nadd trigger to remove spaces from end of string on insert and update,\nand then use normal = operator.\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n",
"msg_date": "Mon, 5 Apr 2010 16:32:44 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge table searching optimization"
},
{
"msg_contents": "Hi,\n\nOn Monday 05 April 2010 16:28:35 Oliver Kindernay wrote:\n> Hi, I have table with just on column named url (it's not real url,\n> just random string for testing purposes), type text. I have lots of\n> entries in it (it's dynamic, i add and remove them on the fly), 100\n> 000 and more. I've created index on this table to optimize\n> \"searching\". I just want to test if some \"url\" is in in the table, so\n> i am using this request:\n> \n> select url from test2 where url ~* '^URLVALUE\\\\s*$';\n> \n> there's \\\\s* because of padding. Here is the analyze:\n> \n> postgres=# explain analyze select url from test2 where url ~*\n> '^zyxel\\\\s*$'; WARNING: nonstandard use of \\\\ in a string literal\n> LINE 1: ...plain analyze select url from test2 where url ~* '^zyxel\\\\s...\n> ^\n> HINT: Use the escape string syntax for backslashes, e.g., E'\\\\'.\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> ---------------------------- Seq Scan on test2 (cost=0.00..1726.00 rows=10\n> width=9) (actual\n> time=156.489..156.502 rows=1 loops=1)\n> Filter: (url ~* '^zyxel\\\\s*$'::text)\n> Total runtime: 156.538 ms\n> (3 rows)\n> \n> It takes 156 ms, it's too much for my purposes, so i want to decrease\n> it. So what can I use for optimizing this request? Again, I just want\n> to test if \"url\" (\"zyxel\" in this examlpe) is in the table.\n> \nDepending on your locale it might be sensible to create a text_pattern_ops \nindex - see the following link: \nhttp://www.postgresql.org/docs/current/static/indexes-opclass.html\n\nLike suggested by depesz it would be far better to remove the padding and do \nexact lookups though.\n\nAndres\n",
"msg_date": "Mon, 5 Apr 2010 18:10:57 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge table searching optimization"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On Monday 05 April 2010 16:28:35 Oliver Kindernay wrote:\n>> i am using this request:\n>> select url from test2 where url ~* '^URLVALUE\\\\s*$';\n\n> Depending on your locale it might be sensible to create a text_pattern_ops \n> index - see the following link: \n> http://www.postgresql.org/docs/current/static/indexes-opclass.html\n\ntext_pattern_ops won't help for a case-insensitive search. The best bet\nhere would be to index on a case-folded, blank-removed version of the\nurl, viz\n\n\tcreate index ... on (normalize(url))\n\n\tselect ... where normalize(url) = normalize('URLVALUE')\n\nwhere normalize() is a suitably defined function.\n\nOr if it's okay to only store the normalized form of the string,\nyou could simplify that a bit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Apr 2010 12:22:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge table searching optimization "
},
{
"msg_contents": "Thanks to all, now it is 0.061 ms :)\n\n2010/4/5 Tom Lane <[email protected]>:\n> Andres Freund <[email protected]> writes:\n>> On Monday 05 April 2010 16:28:35 Oliver Kindernay wrote:\n>>> i am using this request:\n>>> select url from test2 where url ~* '^URLVALUE\\\\s*$';\n>\n>> Depending on your locale it might be sensible to create a text_pattern_ops\n>> index - see the following link:\n>> http://www.postgresql.org/docs/current/static/indexes-opclass.html\n>\n> text_pattern_ops won't help for a case-insensitive search. The best bet\n> here would be to index on a case-folded, blank-removed version of the\n> url, viz\n>\n> create index ... on (normalize(url))\n>\n> select ... where normalize(url) = normalize('URLVALUE')\n>\n> where normalize() is a suitably defined function.\n>\n> Or if it's okay to only store the normalized form of the string,\n> you could simplify that a bit.\n>\n> regards, tom lane\n>\n",
"msg_date": "Mon, 5 Apr 2010 19:11:52 +0200",
"msg_from": "Oliver Kindernay <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge table searching optimization"
}
] |
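Tom's normalize() suggestion in the thread above could look roughly like the following; the function name, the exact normalization (lower-casing plus trailing-blank removal), and the index name are all assumptions, since the thread leaves them open:

    CREATE OR REPLACE FUNCTION url_normalize(text) RETURNS text AS $$
        SELECT lower(rtrim($1))   -- extend to rtrim($1, E' \t\r\n') if tabs or newlines can pad the value
    $$ LANGUAGE sql IMMUTABLE;

    CREATE INDEX test2_url_norm ON test2 (url_normalize(url));

    SELECT url FROM test2 WHERE url_normalize(url) = url_normalize('zyxel');

The function must be IMMUTABLE to be usable in an expression index, and the WHERE clause has to apply the same expression to the column that the index was built on for the index to be considered.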
[
{
"msg_contents": "The SELECT show below has been running for 30+mins and the strace output \nis alarming:\n\n[root@dione ~]# strace -p 10083\nProcess 10083 attached - interrupt to quit\ncreat(umovestr: Input/output error\n0x2, 037777777777) = 1025220608\ncreat(umovestr: Input/output error\n0x2, 037777777777) = 1025220608\ncreat(umovestr: Input/output error\n0x2, 037777777777) = 1025220608\ncreat(NULL, 037777777777) = 216203264\nrestart_syscall(<... resuming interrupted call ...>) = 8192\ncreat(umovestr: Input/output error\n0x2, 037777777777) = 1025220608\ncreat(umovestr: Input/output error\n0x2, 037777777777) = 1025220608\ncreat(umovestr: Input/output error\n0x2, 037777777777) = 1025220608\ncreat(umovestr: Input/output error\n0x2, 037777777777) = 1025220608\n\nhowever, I can find no indication of I/O errors in the postgres log.\nAnyone have any idea what's going on here?\n\npostgres 8.3.7\n\n[root@dione ~]# uname -a\nLinux dione.ca.com 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 \nx86_64 x86_64 x86_64 GNU/Linux\n\nThanks,\nBrian\n\n\ntop:\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n10083 postgres 25 0 2288m 822m 817m R 100.0 1.7 28:08.79 postgres\n\n\ncemdb=# select procpid,query_start,current_query from pg_stat_activity;\n 10083 | 2010-04-05 17:18:34.989022-07 | select b.ts_id from \nts_stats_transetgroup_usergroup_daily b , ts_stats_transet_user_interval \nc, ts_transetgroup_transets_map tm, ts_usergroup_users_map um where \nb.ts_transet_group_id =\ntm.ts_transet_group_id and tm.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_usergroup_id = um.ts_usergroup_id \nand um.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2010-03-29 21:00' and \nc.ts_interval_start_time < '2010-03-29 22:00' and \nb.ts_interval_start_time >= '2010-03-29 00:00' and\nb.ts_interval_start_time < '2010-03-30 00:00'\n\ncemdb=# explain select b.ts_id from \nts_stats_transetgroup_usergroup_daily b , ts_stats_transet_user_interval \nc, ts_transetgroup_transets_map tm, ts_usergroup_users_map um where \nb.ts_transet_group_id =\ntm.ts_transet_group_id and tm.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and b.ts_usergroup_id = um.ts_usergroup_id \nand um.ts_user_incarnation_id = c.ts_user_incarnation_id and \nc.ts_interval_start_time >= '2010-03-29 21:00' and \nc.ts_interval_start_time < '2010-03-29 22:00' and \nb.ts_interval_start_time >= '2010-03-29 00:00' and\nb.ts_interval_start_time < '2010-03-30 00:00';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1169.95..7046.23 rows=1 width=8)\n Hash Cond: ((b.ts_transet_group_id = tm.ts_transet_group_id) AND \n(c.ts_transet_incarnation_id = tm.ts_transet_incarnation_id))\n -> Nested Loop (cost=1159.95..7036.15 rows=10 width=24)\n -> Nested Loop (cost=0.00..28.16 rows=6 width=24)\n -> Index Scan using \nts_stats_transetgroup_usergroup_daily_starttimeindex on \nts_stats_transetgroup_usergroup_daily b (cost=0.00..8.86 rows=1 width=24)\n Index Cond: ((ts_interval_start_time >= \n'2010-03-29 00:00:00-07'::timestamp with time zone) AND \n(ts_interval_start_time < '2010-03-30 00:00:00-07'::timestamp with time \nzone))\n -> Index Scan using \nts_usergroup_users_map_usergroupindex on ts_usergroup_users_map um \n(cost=0.00..19.23 rows=6 width=16)\n Index Cond: 
(um.ts_usergroup_id = b.ts_usergroup_id)\n -> Bitmap Heap Scan on ts_stats_transet_user_interval c \n(cost=1159.95..1167.97 rows=2 width=16)\n Recheck Cond: ((c.ts_user_incarnation_id = \num.ts_user_incarnation_id) AND (c.ts_interval_start_time >= '2010-03-29 \n21:00:00-07'::timestamp with time zone) AND (c.ts_interval_start_time < \n'2010-03-29 22:00:00-07'::timestamp with time zone))\n -> BitmapAnd (cost=1159.95..1159.95 rows=2 width=0)\n -> Bitmap Index Scan on \nts_stats_transet_user_interval_userincarnationidindex (cost=0.00..14.40 \nrows=438 width=0)\n Index Cond: (c.ts_user_incarnation_id = \num.ts_user_incarnation_id)\n -> Bitmap Index Scan on \nts_stats_transet_user_interval_starttime (cost=0.00..1134.09 rows=44856 \nwidth=0)\n Index Cond: ((c.ts_interval_start_time >= \n'2010-03-29 21:00:00-07'::timestamp with time zone) AND \n(c.ts_interval_start_time < '2010-03-29 22:00:00-07'::timestamp with \ntime zone))\n -> Hash (cost=7.00..7.00 rows=200 width=16)\n -> Seq Scan on ts_transetgroup_transets_map tm \n(cost=0.00..7.00 rows=200 width=16)\n(17 rows)\n\n",
"msg_date": "Mon, 05 Apr 2010 18:01:16 -0700",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "query slow; strace output worrisome"
},
{
"msg_contents": "On 6/04/2010 9:01 AM, Brian Cox wrote:\n> The SELECT show below has been running for 30+mins and the strace output\n> is alarming:\n>\n> [root@dione ~]# strace -p 10083\n> Process 10083 attached - interrupt to quit\n> creat(umovestr: Input/output error\n> 0x2, 037777777777) = 1025220608\n> creat(umovestr: Input/output error\n> 0x2, 037777777777) = 1025220608\n> creat(umovestr: Input/output error\n> 0x2, 037777777777) = 1025220608\n> creat(NULL, 037777777777) = 216203264\n> restart_syscall(<... resuming interrupted call ...>) = 8192\n> creat(umovestr: Input/output error\n> 0x2, 037777777777) = 1025220608\n> creat(umovestr: Input/output error\n> 0x2, 037777777777) = 1025220608\n> creat(umovestr: Input/output error\n> 0x2, 037777777777) = 1025220608\n> creat(umovestr: Input/output error\n> 0x2, 037777777777) = 1025220608\n>\n> however, I can find no indication of I/O errors in the postgres log.\n> Anyone have any idea what's going on here?\n\nAnything in `dmesg' (command) or /var/log/syslog ?\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 06 Apr 2010 09:53:51 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slow; strace output worrisome"
}
] |
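Independent of strace, the catalog views can show what a stuck-looking backend is up to and whether it is blocked on a lock. A sketch against the 8.3-era column names, using the pid 10083 from the post above:

    SELECT procpid, waiting, xact_start, query_start, current_query
    FROM pg_stat_activity
    WHERE procpid = 10083;

    SELECT locktype, relation::regclass, mode, granted
    FROM pg_locks
    WHERE pid = 10083;

If waiting is true, or any row in pg_locks for that pid has granted = false, the time is going to lock waits rather than CPU.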
[
{
"msg_contents": "On 04/05/2010 09:53 PM, Craig Ringer [[email protected]] wrote:\n> Anything in `dmesg' (command) or /var/log/syslog ?\nnothing out of the ordinary. Brian\n",
"msg_date": "Mon, 05 Apr 2010 21:04:07 -0700",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query slow; strace output worrisome"
},
{
"msg_contents": "Brian Cox wrote:\n> On 04/05/2010 09:53 PM, Craig Ringer [[email protected]] wrote:\n>> Anything in `dmesg' (command) or /var/log/syslog ?\n> nothing out of the ordinary. Brian\n\nI'm wondering if the issue is with strace rather than Pg. That is to\nsay, that strace is trying to print:\n\n\ncreat(\"/path/to/file\", 0x2, 037777777777) = 1025220608\n\n\n... but to get \"/path/to/file\" it's calling umovestr() which is\nreturning an I/O error, perhaps due to some kind of security framework\nlike AppArmor or SELinux in place on your system.\n\nYep, umovestr() is in util.c:857 in the strace 4.5.18 sources. It looks\nlike it might read from the target's memory via /proc or using ptrace\ndepending on build configuration. Either are obvious targets for\nsecurity framework limitations.\n\nSo what you're probably seeing is actually strace being prevented from\ngetting some information out of the postgres backend's memory by system\nsecurity policy.\n\nAs for what Pg is doing: creat() returns -1 on error and a file\ndescriptor otherwise, so the return value appears to indicate success.\nI'm kind of wondering what the Pg backend is actually _doing_ though -\nif it was using sort temp files you'd expect\nopen()/write()/read()/close() not just creat() calls.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 06 Apr 2010 13:18:17 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slow; strace output worrisome"
}
] |
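The temp-file logging Craig suggests can be turned on just for a test session; a rough sketch (log_temp_files is superuser-only, and the work_mem value is purely illustrative):

    SET log_temp_files = 0;     -- 0 = log every temporary file and its size when it is removed
    SET work_mem = '512MB';     -- raise only for this session while experimenting
    -- re-run the problem query here, then check the server log for "temporary file" lines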
[
{
"msg_contents": "Hi Scott,\r\n\r\n \r\n\r\nThat sound like a usefull thing to do, but the big advantage of the SAN is that in case the physical machine goes down, I can quickly startup a virtual machine using the same database files to act as a fallback. It will have less memory, and less CPU's but it will do fine for some time.\r\n\r\n \r\n\r\nSo when putting fast tables on local storage, I losse those tables when the machine goes down.\r\n\r\n \r\n\r\nPutting indexes on there however might me intresting.. What will Postgresql do when it is started on the backupmachine, and it finds out the index files are missing? Will it recreate those files, or will it panic and not start at all, or can we just manually reindex?\r\n\r\n \r\n\r\nKind regards,\r\n\r\n \r\n\r\nChristiaan\r\n \r\n-----Original message-----\r\nFrom: Scott Marlowe <[email protected]>\r\nSent: Sun 04-04-2010 23:08\r\nTo: Christiaan Willemsen <[email protected]>; \r\nCC: [email protected]; \r\nSubject: Re: [PERFORM] Using high speed swap to improve performance?\r\n\r\nOn Fri, Apr 2, 2010 at 1:15 PM, Christiaan Willemsen\r\n<[email protected]> wrote:\r\n> Hi there,\r\n>\r\n> About a year ago we setup a machine with sixteen 15k disk spindles on\r\n> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,\r\n> we want to move away (we are more familiar with Linux anyway).\r\n>\r\n> So the plan is to move to Linux and put the data on a SAN using iSCSI (two\r\n> or four network interfaces). This however leaves us with with 16 very nice\r\n> disks dooing nothing. Sound like a wast of time. If we were to use Solaris,\r\n> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem\r\n> with those features (ZFS on fuse it not really an option).\r\n>\r\n> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10\r\n> or 5), and make this a big and fast swap disk. Latency will be lower than\r\n> the SAN can provide, and throughput will also be better, and it will relief\r\n> the SAN from a lot of read iops.\r\n>\r\n> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB\r\n> of memory. Then I can set Postgres to use more than the RAM size so it will\r\n> start swapping. It would appear to postgres that the complete database will\r\n> fit into memory. The question is: will this do any good? And if so: what\r\n> will happen?\r\n\r\nI'd make a couple of RAID-10s out of it and use them for highly used\r\ntables and / or indexes etc...\r\n\r\n\n\n\n\n\nRE: Re: [PERFORM] Using high speed swap to improve performance?\n\n\n\nHi Scott, That sound like a usefull thing to do, but the big advantage of the SAN is that in case the physical machine goes down, I can quickly startup a virtual machine using the same database files to act as a fallback. It will have less memory, and less CPU's but it will do fine for some time. So when putting fast tables on local storage, I losse those tables when the machine goes down. Putting indexes on there however might me intresting.. What will Postgresql do when it is started on the backupmachine, and it finds out the index files are missing? Will it recreate those files, or will it panic and not start at all, or can we just manually reindex? 
Kind regards, Christiaan -----Original message-----From: Scott Marlowe <[email protected]>Sent: Sun 04-04-2010 23:08To: Christiaan Willemsen <[email protected]>; CC: [email protected]; Subject: Re: [PERFORM] Using high speed swap to improve performance?On Fri, Apr 2, 2010 at 1:15 PM, Christiaan Willemsen<[email protected]> wrote:> Hi there,>> About a year ago we setup a machine with sixteen 15k disk spindles on> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,> we want to move away (we are more familiar with Linux anyway).>> So the plan is to move to Linux and put the data on a SAN using iSCSI (two> or four network interfaces). This however leaves us with with 16 very nice> disks dooing nothing. Sound like a wast of time. If we were to use Solaris,> ZFS would have a solution: use it as L2ARC. But there is no Linux filesystem> with those features (ZFS on fuse it not really an option).>> So I was thinking: Why not make a big fat array using 14 disks (raid 1, 10> or 5), and make this a big and fast swap disk. Latency will be lower than> the SAN can provide, and throughput will also be better, and it will relief> the SAN from a lot of read iops.>> So I could create a 1TB swap disk, and put it onto the OS next to the 64GB> of memory. Then I can set Postgres to use more than the RAM size so it will> start swapping. It would appear to postgres that the complete database will> fit into memory. The question is: will this do any good? And if so: what> will happen?I'd make a couple of RAID-10s out of it and use them for highly usedtables and / or indexes etc...",
"msg_date": "Tue, 6 Apr 2010 09:38:54 +0200",
"msg_from": "=?windows-1252?Q?Christiaan_Willemsen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using high speed swap to improve performance?"
}
] |
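If the local spindles end up holding only indexes, the usual mechanism is a separate tablespace; a sketch with made-up paths and object names:

    CREATE TABLESPACE local_fast LOCATION '/local/pgindex';   -- directory must already exist and be owned by the postgres user
    CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE local_fast;

On a fallback machine that lacks that storage, PostgreSQL will not quietly rebuild the missing index files; queries that touch them will error until the tablespace location is recreated (or the indexes are moved elsewhere) and REINDEX is run, so the failover procedure is worth rehearsing before relying on it.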
[
{
"msg_contents": "On 04/06/2010 01:18 AM, Craig Ringer [[email protected]] wrote:\n> I'm wondering if the issue is with strace rather than Pg. That is to\n> say, that strace is trying to print:\nThanks, Craig: I do think that this is a strace issue.\n\n> As for what Pg is doing: creat() returns -1 on error and a file\n> descriptor otherwise, so the return value appears to indicate success.\n> I'm kind of wondering what the Pg backend is actually _doing_ though -\n> if it was using sort temp files you'd expect\n> open()/write()/read()/close() not just creat() calls.\nMy quesiton exactly: what is the backend doing calling creat() over and \nover again? Note that this query does complete -- in 30-40 mins.\n\nBrian\n",
"msg_date": "Tue, 06 Apr 2010 09:24:47 -0700",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query slow; strace output worrisome"
},
{
"msg_contents": "On 7/04/2010 12:24 AM, Brian Cox wrote:\n> On 04/06/2010 01:18 AM, Craig Ringer [[email protected]] wrote:\n>> I'm wondering if the issue is with strace rather than Pg. That is to\n>> say, that strace is trying to print:\n> Thanks, Craig: I do think that this is a strace issue.\n>\n>> As for what Pg is doing: creat() returns -1 on error and a file\n>> descriptor otherwise, so the return value appears to indicate success.\n>> I'm kind of wondering what the Pg backend is actually _doing_ though -\n>> if it was using sort temp files you'd expect\n>> open()/write()/read()/close() not just creat() calls.\n> My quesiton exactly: what is the backend doing calling creat() over and\n> over again? Note that this query does complete -- in 30-40 mins.\n\nI'd assume it was tempfile creation, but for the fact that there's \nnothing but creat() calls.\n\nHowever, we can't trust strace. There may be more going on that is being \nhidden from strace via limitations on the ptrace syscall imposed by \nSELinux / AppArmor / whatever.\n\nI suggest turning on the logging options in Pg that report use of \ntempfiles and disk-spilled sorts, then have a look and see if Pg is in \nfact using on-disk temp files for sorts or materialized data sets.\n\nIf it is, you might find that increasing work_mem helps your query out.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 07 Apr 2010 10:32:52 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slow; strace output worrisome"
},
{
"msg_contents": "On Tue, Apr 6, 2010 at 10:32 PM, Craig Ringer\n<[email protected]> wrote:\n> On 7/04/2010 12:24 AM, Brian Cox wrote:\n>>\n>> On 04/06/2010 01:18 AM, Craig Ringer [[email protected]] wrote:\n>>>\n>>> I'm wondering if the issue is with strace rather than Pg. That is to\n>>> say, that strace is trying to print:\n>>\n>> Thanks, Craig: I do think that this is a strace issue.\n>>\n>>> As for what Pg is doing: creat() returns -1 on error and a file\n>>> descriptor otherwise, so the return value appears to indicate success.\n>>> I'm kind of wondering what the Pg backend is actually _doing_ though -\n>>> if it was using sort temp files you'd expect\n>>> open()/write()/read()/close() not just creat() calls.\n>>\n>> My quesiton exactly: what is the backend doing calling creat() over and\n>> over again? Note that this query does complete -- in 30-40 mins.\n>\n> I'd assume it was tempfile creation, but for the fact that there's nothing\n> but creat() calls.\n>\n> However, we can't trust strace. There may be more going on that is being\n> hidden from strace via limitations on the ptrace syscall imposed by SELinux\n> / AppArmor / whatever.\n>\n> I suggest turning on the logging options in Pg that report use of tempfiles\n> and disk-spilled sorts, then have a look and see if Pg is in fact using\n> on-disk temp files for sorts or materialized data sets.\n>\n> If it is, you might find that increasing work_mem helps your query out.\n\nYeah, I'd start with EXPLAIN and then, if you can wait long enough,\nEXPLAIN ANALYZE.\n\nYou'll probably find it's doing a big sort or a big hash join.\n\n...Robert\n",
"msg_date": "Wed, 7 Apr 2010 14:39:26 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slow; strace output worrisome"
}
] |
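When following Robert's EXPLAIN ANALYZE suggestion, the line to look for is the sort/hash detail. A small self-contained way to see the difference between an in-memory and a spilled sort is sketched below; the row count and work_mem values are arbitrary, and the kB figures in the comments are invented:

    SET work_mem = '1MB';
    EXPLAIN ANALYZE SELECT * FROM generate_series(1, 500000) g ORDER BY g DESC;
    -- ...  Sort Method:  external merge  Disk: 8552kB      <- the sort spilled to temp files

    SET work_mem = '64MB';
    EXPLAIN ANALYZE SELECT * FROM generate_series(1, 500000) g ORDER BY g DESC;
    -- ...  Sort Method:  quicksort  Memory: 43381kB        <- the sort fit in work_mem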
[
{
"msg_contents": "I know this problem crops up all the time and I have read what I could\nfind, but I'm still not finding an answer to my problem. This is all\npostgres 8.3. Yes, I've enabled constraint_exclusion. Yes, there are\nindexes on the partitions, not just on the parent.\n\nI've got a table with 1 month partitions. As it happens, I've only\ngot 2 partitions at the moment, one with 12 million rows and the other\nwith 5 million. I only discovered all of the caveats surrounding\nindexes and partitioned tables when I executed a very simple query,\nsaw that it took far too long to run, and started looking at what the\nquery planner did. In this case, I simply want the set of distinct\nvalues for a particular column, across all partitions. The set of\ndistinct values is very small (3) and there is an index on the column,\nso I'd expect an index scan to return the 3 values almost\ninstantaneously. I turns out that when I query the partitions\ndirectly, the planner does an index scan. When I query the parent\ntable, I get full table scans instead of merged output from n index\nscans. Even worse, instead of getting the distinct values from each\npartition and merging those, it merges each partition in its entirety\nand then sorts and uniques, which is pretty much the pathological\nexecution order.\n\n I'll give the queries, then the schema, then the various explain outputs.\n\n(parent table) select distinct probe_type_num from\nday_scale_radar_performance_fact; (30 seconds)\n(partition) select distinct probe_type_num from\nday_scale_radar_performace_fact_20100301_0000; (6 seconds)\n(partition) select distinct probe_type_num from\nday_scale_radar_performance_fact_20100401_0000; (1 second)\n\n(manual union) select distinct probe_type_num from (select distinct\nprobe_type_num from day_scale_radar_performace_fact_20100301_0000\nUNION select distinct probe_type_num from\nday_scale_radar_performace_fact_20100401_0000) t2; (7 seconds)\n\nIn part, I'm surprised that the index scan takes as long as it does,\nsince I'd think an index would be able to return the set of keys\nrelatively quickly. 
But that's a secondary issue.\n\nParent table:\ncedexis_v2=# \\d day_scale_radar_performance_fact;\n Table \"perf_reporting.day_scale_radar_performance_fact\"\n Column | Type | Modifiers\n----------------------------+-----------------------------+-----------\n count | bigint | not null\n total_ms | bigint | not null\n time | timestamp without time zone | not null\n market_num | integer | not null\n country_num | integer | not null\n autosys_num | integer | not null\n provider_owner_zone_id | integer | not null\n provider_owner_customer_id | integer | not null\n provider_id | integer | not null\n probe_type_num | integer | not null\nIndexes:\n \"temp1_probe_type_num\" btree (probe_type_num)\n\n\npartition:\ncedexis_v2=# \\d day_scale_radar_performance_fact_20100301_0000;\nTable \"perf_reporting.day_scale_radar_performance_fact_20100301_0000\"\n Column | Type | Modifiers\n----------------------------+-----------------------------+-----------\n count | bigint | not null\n total_ms | bigint | not null\n time | timestamp without time zone | not null\n market_num | integer | not null\n country_num | integer | not null\n autosys_num | integer | not null\n provider_owner_zone_id | integer | not null\n provider_owner_customer_id | integer | not null\n provider_id | integer | not null\n probe_type_num | integer | not null\nIndexes:\n \"day_scale_radar_performance_fact_20100301_0000_asn\" btree (autosys_num)\n \"day_scale_radar_performance_fact_20100301_0000_cty\" btree (country_num)\n \"day_scale_radar_performance_fact_20100301_0000_mkt\" btree (market_num)\n \"day_scale_radar_performance_fact_20100301_0000_p\" btree (provider_id)\n \"day_scale_radar_performance_fact_20100301_0000_poc\" btree\n(provider_owner_customer_id)\n \"day_scale_radar_performance_fact_20100301_0000_poz\" btree\n(provider_owner_zone_id)\n \"day_scale_radar_performance_fact_20100301_0000_pt\" btree (probe_type_num)\n \"day_scale_radar_performance_fact_20100301_0000_time\" btree (\"time\")\nCheck constraints:\n \"day_scale_radar_performance_fact_20100301_0000_time_check\" CHECK\n(\"time\" >= '2010-03-01 00:00:00'::timestamp without time zone AND\n\"time\" < '2010-04-01 00:00:00'::timestamp without time zone)\nInherits: day_scale_radar_performance_fact\n\nI also tried creating an index on the relevant column in the parent\ntable, but it had no effect, either way. 
You can see it in the table\ndescription above\n\ncedexis_v2=# explain select distinct probe_type_num from\nday_scale_radar_performance_fact;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1864962.35..1926416.31 rows=200 width=4)\n -> Sort (cost=1864962.35..1895689.33 rows=12290793 width=4)\n Sort Key:\nperf_reporting.day_scale_radar_performance_fact.probe_type_num\n -> Result (cost=0.00..249616.93 rows=12290793 width=4)\n -> Append (cost=0.00..249616.93 rows=12290793 width=4)\n -> Seq Scan on day_scale_radar_performance_fact\n(cost=0.00..19.90 rows=990 width=4)\n -> Seq Scan on\nday_scale_radar_performance_fact_20100401_0000\nday_scale_radar_performance_fact (cost=0.00..31388.01 rows=1545501\nwidth=4)\n -> Seq Scan on\nday_scale_radar_performance_fact_20100301_0000\nday_scale_radar_performance_fact (cost=0.00..218209.02 rows=10744302\nwidth=4)\n\n\ncedexis_v2=# explain select distinct probe_type_num from\nday_scale_radar_performance_fact_20100301_0000;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..684328.92 rows=3 width=4)\n -> Index Scan using\nday_scale_radar_performance_fact_20100301_0000_pt on\nday_scale_radar_performance_fact_20100301_0000 (cost=0.00..657468.16\nrows=10744302 width=4)\n\n\nAnd this is a lot closer to what I would hope the query planner would do:\n\ncedexis_v2=# explain select distinct probe_type_num from (select\ndistinct probe_type_num from\nday_scale_radar_performance_fact_20100401_0000 union\nselect distinct probe_type_num from\nday_scale_radar_performance_fact_20100301_0000) t2;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=781113.73..781113.84 rows=6 width=4)\n -> Unique (cost=781113.73..781113.76 rows=6 width=4)\n -> Sort (cost=781113.73..781113.75 rows=6 width=4)\n Sort Key:\nday_scale_radar_performance_fact_20100401_0000.probe_type_num\n -> Append (cost=0.00..781113.66 rows=6 width=4)\n -> Unique (cost=0.00..96784.68 rows=3 width=4)\n -> Index Scan using\nday_scale_radar_performance_fact_20100401_0000_pt on\nday_scale_radar_performance_fact_20100401_0000 (cost=0.00..92920.93\nrows=1545501 width=4)\n -> Unique (cost=0.00..684328.92 rows=3 width=4)\n -> Index Scan using\nday_scale_radar_performance_fact_20100301_0000_pt on\nday_scale_radar_performance_fact_20100301_0000 (cost=0.00..657468.16\nrows=10744302 width=4)\n",
"msg_date": "Tue, 6 Apr 2010 14:37:14 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "indexes in partitioned tables - again"
},
{
"msg_contents": "On Tue, Apr 6, 2010 at 5:37 PM, Samuel Gendler\n<[email protected]> wrote:\n> In part, I'm surprised that the index scan takes as long as it does,\n> since I'd think an index would be able to return the set of keys\n> relatively quickly. But that's a secondary issue.\n\nWe don't actually have a facility built into the index-scan machinery\nto scan for distinct keys. It's doing a full scan of the index and\nthen unique-ifying the results afterward. It produces the right\nanswers, but it's definitely not as fast as it could be.\n\nThe planner is not as smart about partitioned tables as it could be,\neither. A scan of the partitioned tables is implemented as an append\nnode with one member per partition; and the planner isn't very good at\npushing things down through append nodes.\n\n...Robert\n",
"msg_date": "Wed, 7 Apr 2010 17:13:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: indexes in partitioned tables - again"
},
{
"msg_contents": "Most of the time Postgres runs nicely, but two or three times a day we get a huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike events. During these spikes, the system is completely unresponsive (you can't even login via ssh).\n\nI managed to capture one such event using top(1) with the \"batch\" option as a background process. See output below - it shows 19 active postgress processes, but I think it missed the bulk of the spike.\n\nFor some reason, every postgres backend suddenly decides (is told?) to do something. When this happens, the system become unusable for anywhere from ten seconds to a minute or so, depending on how much web traffic stacks up behind this event. We have two servers, one offline and one public, and they both do this, so it's not caused by actual web traffic (and the Apache logs don't show any HTTP activity correlated with the spikes).\n\nI thought based on other posts that this might be a background-writer problem, but it's not I/O, it's all CPU as far as I can tell.\n\nAny ideas where I can look to find what's triggering this?\n\n8 CPUs, 8 GB memory\n8-disk RAID10 (10k SATA)\nPostgres 8.3.0\nFedora 8, kernel is 2.6.24.4-64.fc8\nDiffs from original postgres.conf:\n\nmax_connections = 1000\nshared_buffers = 2000MB\nwork_mem = 256MB\nmax_fsm_pages = 16000000\nmax_fsm_relations = 625000\nsynchronous_commit = off\nwal_buffers = 256kB\ncheckpoint_segments = 30\neffective_cache_size = 4GB\nescape_string_warning = off\n\nThanks,\nCraig\n\n\ntop - 11:24:59 up 81 days, 20:27, 4 users, load average: 0.98, 0.83, 0.92\nTasks: 366 total, 20 running, 346 sleeping, 0 stopped, 0 zombie\nCpu(s): 30.6%us, 1.5%sy, 0.0%ni, 66.3%id, 1.5%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 8194800k total, 8118688k used, 76112k free, 36k buffers\nSwap: 2031608k total, 169348k used, 1862260k free, 7313232k cached\n\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n18972 postgres 20 0 2514m 11m 8752 R 11 0.1 0:00.35 postmaster\n10618 postgres 20 0 2514m 12m 9456 R 9 0.2 0:00.54 postmaster\n10636 postgres 20 0 2514m 11m 9192 R 9 0.1 0:00.45 postmaster\n25903 postgres 20 0 2514m 11m 8784 R 9 0.1 0:00.21 postmaster\n10626 postgres 20 0 2514m 11m 8716 R 6 0.1 0:00.45 postmaster\n10645 postgres 20 0 2514m 12m 9352 R 6 0.2 0:00.42 postmaster\n10647 postgres 20 0 2514m 11m 9172 R 6 0.1 0:00.51 postmaster\n18502 postgres 20 0 2514m 11m 9016 R 6 0.1 0:00.23 postmaster\n10641 postgres 20 0 2514m 12m 9296 R 5 0.2 0:00.36 postmaster\n10051 postgres 20 0 2514m 13m 10m R 4 0.2 0:00.70 postmaster\n10622 postgres 20 0 2514m 12m 9216 R 4 0.2 0:00.39 postmaster\n10640 postgres 20 0 2514m 11m 8592 R 4 0.1 0:00.52 postmaster\n18497 postgres 20 0 2514m 11m 8804 R 4 0.1 0:00.25 postmaster\n18498 postgres 20 0 2514m 11m 8804 R 4 0.1 0:00.22 postmaster\n10341 postgres 20 0 2514m 13m 9m R 2 0.2 0:00.57 postmaster\n10619 postgres 20 0 2514m 12m 9336 R 1 0.2 0:00.38 postmaster\n15687 postgres 20 0 2321m 35m 35m R 0 0.4 8:36.12 postmaster\n\n\n",
"msg_date": "Wed, 07 Apr 2010 14:37:22 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Occasional giant spikes in CPU load"
},
{
"msg_contents": "On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:\n> Most of the time Postgres runs nicely, but two or three times a day we get a huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike events. During these spikes, the system is completely unresponsive (you can't even login via ssh).\n> \n> I managed to capture one such event using top(1) with the \"batch\" option as a background process. See output below - it shows 19 active postgress processes, but I think it missed the bulk of the spike.\n\nWhat does iostat 5 say during the jump?\n\nJoshua D. Drake\n\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nRespect is earned, not gained through arbitrary and repetitive use or Mr. or Sir.\n\n",
"msg_date": "Wed, 07 Apr 2010 14:40:10 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On 4/7/10 2:40 PM, Joshua D. Drake wrote:\n> On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:\n>> Most of the time Postgres runs nicely, but two or three times a day we get a huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike events. During these spikes, the system is completely unresponsive (you can't even login via ssh).\n>>\n>> I managed to capture one such event using top(1) with the \"batch\" option as a background process. See output below - it shows 19 active postgress processes, but I think it missed the bulk of the spike.\n>\n> What does iostat 5 say during the jump?\n\nIt's very hard to say ... I'll have to start a background job to watch for a day or so. While it's happening, you can't login, and any open windows become unresponsive. I'll probably have to run it at high priority using nice(1) to get any data at all during the event.\n\nWould vmstat be informative?\n\nThanks,\nCraig\n",
"msg_date": "Wed, 07 Apr 2010 14:45:02 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James wrote:\n> I managed to capture one such event using top(1) with the \"batch\" \n> option as a background process. See output below\n\nYou should add \"-c\" to your batch top capture, then you'll be able to \nsee what the individual postmaster processes are actually doing when \nthings get stuck.\n\n> max_connections = 1000\n> shared_buffers = 2000MB\n> work_mem = 256MB\n> Mem: 8194800k total, 8118688k used, 76112k free, 36k buffers\n> Swap: 2031608k total, 169348k used, 1862260k free, 7313232k cached\n\nThese settings appear way too high for a server with 8GB of RAM. I'm \nnot sure if max_connections is too large, or if it's work_mem that's too \nbig, but one or both of them may need to be tuned way down from where \nthey are now to get your memory usage under control. Your server might \nrunning out of RAM during the periods where it becomes \nunresponsive--that could be the system paging stuff out to swap, which \nisn't necessarily a high user of I/O but it will block things. Not \nhaving any memory used for buffers is never a good sign.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 07 Apr 2010 17:54:12 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> Most of the time Postgres runs nicely, but two or three times a day we get a huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike events. During these spikes, the system is completely unresponsive (you can't even login via ssh).\n> I managed to capture one such event using top(1) with the \"batch\" option as a background process. See output below - it shows 19 active postgress processes, but I think it missed the bulk of the spike.\n\n> Any ideas where I can look to find what's triggering this?\n\n> Postgres 8.3.0\n ^^^^^\n\nIf it's really 8.3.0, try updating to 8.3.something-recent. We've fixed\na whole lot of bugs since then.\n\nI have a suspicion that this might be an sinval overrun scenario, in\nwhich case you'd need to update to 8.4 to get a real fix. But updating\nin the 8.3 branch would be cheap and easy.\n\nIf it is sinval overrun, it would presumably be triggered by a whole lot\nof catalog changes being made at approximately the same time. Can you\ncorrelate the spikes with anything like that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Apr 2010 17:59:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load "
},
{
"msg_contents": "On Wed, 2010-04-07 at 14:45 -0700, Craig James wrote:\n> On 4/7/10 2:40 PM, Joshua D. Drake wrote:\n> > On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:\n> >> Most of the time Postgres runs nicely, but two or three times a day we get a huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike events. During these spikes, the system is completely unresponsive (you can't even login via ssh).\n> >>\n> >> I managed to capture one such event using top(1) with the \"batch\" option as a background process. See output below - it shows 19 active postgress processes, but I think it missed the bulk of the spike.\n> >\n> > What does iostat 5 say during the jump?\n> \n> It's very hard to say ... I'll have to start a background job to watch for a day or so. While it's happening, you can't login, and any open windows become unresponsive. I'll probably have to run it at high priority using nice(1) to get any data at all during the event.\n\nDo you have sar runing? Say a sar -A ?\n\n> \n> Would vmstat be informative?\n\nYes.\n\nMy guess is that it is not CPU, it is IO and your CPU usage is all WAIT\non IO.\n\nTo have your CPUs so flooded that they are the cause of an inability to\nlog in is pretty suspect.\n\nJoshua D. Drake\n\n\n> \n> Thanks,\n> Craig\n> \n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nRespect is earned, not gained through arbitrary and repetitive use or Mr. or Sir.\n\n",
"msg_date": "Wed, 07 Apr 2010 15:36:48 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On Wed, Apr 7, 2010 at 2:37 PM, Craig James <[email protected]> wrote:\n> Most of the time Postgres runs nicely, but two or three times a day we get a\n> huge spike in the CPU load that lasts just a short time -- it jumps to 10-20\n> CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike\n> events. During these spikes, the system is completely unresponsive (you\n> can't even login via ssh).\n\nYou need to find out what all those Postgres processes are doing. You\nmight try enabling update_process_title and then using ps to figure\nout what each instance is using. Otherwise, you might try enabling\nlogging of commands that take a certain amount of time to run (see\nlog_min_duration_statement).\n\n> I managed to capture one such event using top(1) with the \"batch\" option as\n> a background process. See output below - it shows 19 active postgress\n> processes, but I think it missed the bulk of the spike.\n\nLooks like it. The system doesn't appear to be overloaded at all at that point.\n\n> 8 CPUs, 8 GB memory\n> 8-disk RAID10 (10k SATA)\n> Postgres 8.3.0\n\nShould definitely update to the latest 8.3.10 - 8.3 has a LOT of known bugs.\n\n> Fedora 8, kernel is 2.6.24.4-64.fc8\n\nWow, that is very old, too.\n\n> Diffs from original postgres.conf:\n>\n> max_connections = 1000\n> shared_buffers = 2000MB\n> work_mem = 256MB\n\nwork_mem is way too high for 1000 connections and 8GB ram. You could\nsimply be starting up too many postgres processes and overwhelming the\nmachine. Either significantly reduce max_connections or work_mem.\n\n> max_fsm_pages = 16000000\n> max_fsm_relations = 625000\n> synchronous_commit = off\n\nYou are playing with fire here. You should never turn this off unless\nyou do not care if your data becomes irrecoverably corrupted.\n\n> top - 11:24:59 up 81 days, 20:27, 4 users, load average: 0.98, 0.83, 0.92\n> Tasks: 366 total, 20 running, 346 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 30.6%us, 1.5%sy, 0.0%ni, 66.3%id, 1.5%wa, 0.0%hi, 0.0%si,\n> 0.0%st\n> Mem: 8194800k total, 8118688k used, 76112k free, 36k buffers\n> Swap: 2031608k total, 169348k used, 1862260k free, 7313232k cached\n\nSystem load looks very much OK given that you have 8 CPUs.\n\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 18972 postgres 20 0 2514m 11m 8752 R 11 0.1 0:00.35 postmaster\n> 10618 postgres 20 0 2514m 12m 9456 R 9 0.2 0:00.54 postmaster\n> 10636 postgres 20 0 2514m 11m 9192 R 9 0.1 0:00.45 postmaster\n> 25903 postgres 20 0 2514m 11m 8784 R 9 0.1 0:00.21 postmaster\n> 10626 postgres 20 0 2514m 11m 8716 R 6 0.1 0:00.45 postmaster\n> 10645 postgres 20 0 2514m 12m 9352 R 6 0.2 0:00.42 postmaster\n> 10647 postgres 20 0 2514m 11m 9172 R 6 0.1 0:00.51 postmaster\n> 18502 postgres 20 0 2514m 11m 9016 R 6 0.1 0:00.23 postmaster\n> 10641 postgres 20 0 2514m 12m 9296 R 5 0.2 0:00.36 postmaster\n> 10051 postgres 20 0 2514m 13m 10m R 4 0.2 0:00.70 postmaster\n> 10622 postgres 20 0 2514m 12m 9216 R 4 0.2 0:00.39 postmaster\n> 10640 postgres 20 0 2514m 11m 8592 R 4 0.1 0:00.52 postmaster\n> 18497 postgres 20 0 2514m 11m 8804 R 4 0.1 0:00.25 postmaster\n> 18498 postgres 20 0 2514m 11m 8804 R 4 0.1 0:00.22 postmaster\n> 10341 postgres 20 0 2514m 13m 9m R 2 0.2 0:00.57 postmaster\n> 10619 postgres 20 0 2514m 12m 9336 R 1 0.2 0:00.38 postmaster\n> 15687 postgres 20 0 2321m 35m 35m R 0 0.4 8:36.12 postmaster\n\nJudging by the amount of CPU time each postmaster as accumulated, they\nare all fairly new processes. 
How many pg proceses of the ~350\ncurrently running are there?\n\n-Dave\n",
"msg_date": "Wed, 7 Apr 2010 15:56:11 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On 4/7/10 3:36 PM, Joshua D. Drake wrote:\n> On Wed, 2010-04-07 at 14:45 -0700, Craig James wrote:\n>> On 4/7/10 2:40 PM, Joshua D. Drake wrote:\n>>> On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:\n>>>> Most of the time Postgres runs nicely, but two or three times a day we get a huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike events. During these spikes, the system is completely unresponsive (you can't even login via ssh).\n>>>>\n>>>> I managed to capture one such event using top(1) with the \"batch\" option as a background process. See output below - it shows 19 active postgress processes, but I think it missed the bulk of the spike.\n>>>\n>>> What does iostat 5 say during the jump?\n>>\n>> It's very hard to say ... I'll have to start a background job to watch for a day or so. While it's happening, you can't login, and any open windows become unresponsive. I'll probably have to run it at high priority using nice(1) to get any data at all during the event.\n>\n> Do you have sar runing? Say a sar -A ?\n\nNo, I don't have it installed. I'll have a look. At first glance it looks like a combination of what I can get with \"top -b\" and vmstat, but with a single program.\n\n> My guess is that it is not CPU, it is IO and your CPU usage is all WAIT\n> on IO.\n>\n> To have your CPUs so flooded that they are the cause of an inability to\n> log in is pretty suspect.\n\nI thought so too, except that I can't login during the flood. If the CPUs were all doing iowaits, logging in should be easy.\n\nGreg's suggestion that shared_buffers and work_mem are too big for an 8 GB system fits these symptoms -- if it's having a swap storm, login is effectively impossible.\n\nCraig\n\n>\n> Joshua D. Drake\n>\n>\n>>\n>> Thanks,\n>> Craig\n>>\n>\n>\n\n",
"msg_date": "Wed, 07 Apr 2010 15:57:25 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On 4/7/10 2:59 PM, Tom Lane wrote:\n> Craig James<[email protected]> writes:\n>> Most of the time Postgres runs nicely, but two or three times a day we get a huge spike in the CPU load that lasts just a short time -- it jumps to 10-20 CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike events. During these spikes, the system is completely unresponsive (you can't even login via ssh).\n>> I managed to capture one such event using top(1) with the \"batch\" option as a background process. See output below - it shows 19 active postgress processes, but I think it missed the bulk of the spike.\n>\n>> Any ideas where I can look to find what's triggering this?\n>\n>> Postgres 8.3.0\n> ^^^^^\n>\n> If it's really 8.3.0, try updating to 8.3.something-recent. We've fixed\n> a whole lot of bugs since then.\n\nGood advice, I've been meaning to do this, maybe this will be a kick in the pants to motivate me.\n\n> I have a suspicion that this might be an sinval overrun scenario, in\n> which case you'd need to update to 8.4 to get a real fix. But updating\n> in the 8.3 branch would be cheap and easy.\n>\n> If it is sinval overrun, it would presumably be triggered by a whole lot\n> of catalog changes being made at approximately the same time. Can you\n> correlate the spikes with anything like that?\n\nNot that I know of. Just regular web traffic. On the backup server these events happen occasionally even when there is little or no web traffic, and nobody logged in doing maintenance.\n\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Wed, 07 Apr 2010 16:07:31 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "\n>> ...Can you\n>> correlate the spikes with anything like that?\n>\n> Not that I know of. Just regular web traffic. On the backup server \n> these events happen occasionally even when there is little or no web \n> traffic, and nobody logged in doing maintenance.\nWhat, if anything, are you logging in the PostgreSQL logs? Anything \ninteresting, there?\n\nCheers,\nSteve\n\n",
"msg_date": "Wed, 07 Apr 2010 16:14:01 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On Wednesday 07 April 2010, Craig James <[email protected]> wrote:\n> I thought so too, except that I can't login during the flood. If the\n> CPUs were all doing iowaits, logging in should be easy.\n\nBusying out the drives is about the most reliable way to make logging in \nvery slow (especially, but not only, if it's due to swapping).\n",
"msg_date": "Wed, 7 Apr 2010 16:38:27 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On Wed, Apr 7, 2010 at 3:57 PM, Craig James <[email protected]> wrote:\n> On 4/7/10 3:36 PM, Joshua D. Drake wrote:\n>> My guess is that it is not CPU, it is IO and your CPU usage is all WAIT\n>> on IO.\n>>\n>> To have your CPUs so flooded that they are the cause of an inability to\n>> log in is pretty suspect.\n>\n> I thought so too, except that I can't login during the flood. If the CPUs\n> were all doing iowaits, logging in should be easy.\n\nNo - logging in with high iowait is very harder to do than high CPU\ntime because of latency of disk access.\n\n> Greg's suggestion that shared_buffers and work_mem are too big for an 8 GB\n> system fits these symptoms -- if it's having a swap storm, login is\n> effectively impossible.\n\nA swap storm effectively puts the machine into very high iowait time.\n\n-Dave\n",
"msg_date": "Wed, 7 Apr 2010 16:52:47 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> On 4/7/10 3:36 PM, Joshua D. Drake wrote:\n>> To have your CPUs so flooded that they are the cause of an inability to\n>> log in is pretty suspect.\n\n> I thought so too, except that I can't login during the flood. If the CPUs were all doing iowaits, logging in should be easy.\n\n> Greg's suggestion that shared_buffers and work_mem are too big for an 8 GB system fits these symptoms -- if it's having a swap storm, login is effectively impossible.\n\nYeah, but there is also the question of what's causing all the backends\nto try to run at the same time. Oversubscribed memory could well be the\ndirect cause of the machine getting driven into the ground, but there's\nsomething else going on here too IMO.\n\nAnyway I concur with the advice to lower shared_buffers, and run fewer\nbackends if possible, to see if that ameliorates the problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Apr 2010 20:18:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load "
},
{
"msg_contents": "On Wed, Apr 7, 2010 at 6:56 PM, David Rees <[email protected]> wrote:\n>> max_fsm_pages = 16000000\n>> max_fsm_relations = 625000\n>> synchronous_commit = off\n>\n> You are playing with fire here. You should never turn this off unless\n> you do not care if your data becomes irrecoverably corrupted.\n\nThat is not correct. Turning off synchronous_commit is sensible if\nyou don't mind losing the last few transactions on a crash. What will\ncorrupt your database is if you turn off fsync.\n\n...Robert\n",
"msg_date": "Wed, 7 Apr 2010 20:47:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On 4/7/10 5:47 PM, Robert Haas wrote:\n> On Wed, Apr 7, 2010 at 6:56 PM, David Rees<[email protected]> wrote:\n>>> max_fsm_pages = 16000000\n>>> max_fsm_relations = 625000\n>>> synchronous_commit = off\n>>\n>> You are playing with fire here. You should never turn this off unless\n>> you do not care if your data becomes irrecoverably corrupted.\n>\n> That is not correct. Turning off synchronous_commit is sensible if\n> you don't mind losing the last few transactions on a crash. What will\n> corrupt your database is if you turn off fsync.\n\nA bit off the original topic, but ...\n\nI set it this way because I was advised that with a battery-backed RAID controller, this was a safe setting. Is that not the case?\n\nCraig\n",
"msg_date": "Wed, 07 Apr 2010 19:06:15 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On Wed, Apr 7, 2010 at 7:06 PM, Craig James <[email protected]> wrote:\n> On 4/7/10 5:47 PM, Robert Haas wrote:\n>> On Wed, Apr 7, 2010 at 6:56 PM, David Rees<[email protected]> wrote:\n>>>> synchronous_commit = off\n>>>\n>>> You are playing with fire here. You should never turn this off unless\n>>> you do not care if your data becomes irrecoverably corrupted.\n>>\n>> That is not correct. Turning off synchronous_commit is sensible if\n>> you don't mind losing the last few transactions on a crash. What will\n>> corrupt your database is if you turn off fsync.\n\nWhoops, you're right.\n\n> A bit off the original topic, but ...\n>\n> I set it this way because I was advised that with a battery-backed RAID\n> controller, this was a safe setting. Is that not the case?\n\nRobert has it right - with synchronous_commit off, your database will\nalways be consistent, but you may lose transactions in the event of a\ncrash.\n\nDoesn't matter if you have a BBU or not - all the BBU does is give the\ncontroller the ability to acknowledge a write without the data\nactually having been written to disk.\n\nAccording to the documentation, with synchronous_commit off and a\ndefault wal_writer_delay of 200ms, it's possible to lose up to a\nmaximum of 600ms of data you thought were written to disk.\n\n-Dave\n",
"msg_date": "Wed, 7 Apr 2010 19:50:35 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On Wed, Apr 7, 2010 at 10:50 PM, David Rees <[email protected]> wrote:\n> On Wed, Apr 7, 2010 at 7:06 PM, Craig James <[email protected]> wrote:\n>> On 4/7/10 5:47 PM, Robert Haas wrote:\n>>> On Wed, Apr 7, 2010 at 6:56 PM, David Rees<[email protected]> wrote:\n>>>>> synchronous_commit = off\n>>>>\n>>>> You are playing with fire here. You should never turn this off unless\n>>>> you do not care if your data becomes irrecoverably corrupted.\n>>>\n>>> That is not correct. Turning off synchronous_commit is sensible if\n>>> you don't mind losing the last few transactions on a crash. What will\n>>> corrupt your database is if you turn off fsync.\n>\n> Whoops, you're right.\n>\n>> A bit off the original topic, but ...\n>>\n>> I set it this way because I was advised that with a battery-backed RAID\n>> controller, this was a safe setting. Is that not the case?\n>\n> Robert has it right - with synchronous_commit off, your database will\n> always be consistent, but you may lose transactions in the event of a\n> crash.\n>\n> Doesn't matter if you have a BBU or not - all the BBU does is give the\n> controller the ability to acknowledge a write without the data\n> actually having been written to disk.\n>\n> According to the documentation, with synchronous_commit off and a\n> default wal_writer_delay of 200ms, it's possible to lose up to a\n> maximum of 600ms of data you thought were written to disk.\n\nSo, IOW, if you're running a social networking web site and your\ndatabase is full of status updates sent by teenagers to other\nteenagers, you might judge that turning off synchronous_commit is a\nreasonable thing to do, if you need the performance. If you're\nrunning a bank and your database is full of information on wire\ntransfers sent and received, not so much.\n\n...Robert\n",
"msg_date": "Wed, 7 Apr 2010 23:07:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "David Rees wrote:\n> You need to find out what all those Postgres processes are doing. You\n> might try enabling update_process_title and then using ps to figure\n> out what each instance is using.\n\nThat's what the addition of \"-c\" to top that I suggested does on Linux; it \nshows the updated process titles where the command line normally appears in the \ndefault config.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 08 Apr 2010 02:08:05 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James wrote:\n> On 4/7/10 5:47 PM, Robert Haas wrote:\n> > On Wed, Apr 7, 2010 at 6:56 PM, David Rees<[email protected]> wrote:\n> >>> max_fsm_pages = 16000000\n> >>> max_fsm_relations = 625000\n> >>> synchronous_commit = off\n> >>\n> >> You are playing with fire here. You should never turn this off unless\n> >> you do not care if your data becomes irrecoverably corrupted.\n> >\n> > That is not correct. Turning off synchronous_commit is sensible if\n> > you don't mind losing the last few transactions on a crash. What will\n> > corrupt your database is if you turn off fsync.\n> \n> A bit off the original topic, but ...\n> \n> I set it this way because I was advised that with a battery-backed\n> RAID controller, this was a safe setting. Is that not the case?\n\nTo get good performance, you can either get a battery-backed RAID\ncontroller or risk losing a few transaction with synchronous_commit =\noff. If you already have a battery-backed RAID controller, there is\nlittle benefit to turning synchronous_commit off, and some major\ndownsides (possible data loss).\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n",
"msg_date": "Wed, 14 Apr 2010 17:58:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "I'm reviving this question because I never figured it out. To summarize: At random intervals anywhere from a few times per hour to once or twice a day, we see a huge spike in CPU load that essentially brings the system to a halt for up to a minute or two. Previous answers focused on \"what is it doing\", i.e. is it really Postgres or something else?\n\nNow the question has narrowed down to this: what could trigger EVERY postgres backend to do something at the same time? See the attached output from \"top -b\", which shows what is happening during one of the CPU spikes.\n\nA little background about our system. We have roughly 100 FastCGI clients connected at all times that are called on to generate images from data in the database. Even though there are a lot of these, they don't do much. They sit there most of the time, then they spew out a couple dozen GIF images in about one second as a user gets a new page of data. Each GIF image requires fetching a single row using a single indexed column, so it's a trival amount of work for Postgres.\n\nWe also have the \"heavy lift\" application that does the search. Typically one or two of these is running at a time, and takes from a fraction of a second to a few minutes to complete. In this particular instance, immediately before this spike, the CPU load was only at about 10% -- a couple users poking around with easy queries.\n\nSo what is it that will cause every single Postgres backend to come to life at the same moment, when there's no real load on the server? Maybe if a backend crashes? Some other problem?\n\nThere's nothing in the serverlog.\n\nThanks,\nCraig\n\n\ntop - 12:15:09 up 81 days, 21:18, 4 users, load average: 0.38, 0.38, 0.73\nTasks: 374 total, 95 running, 279 sleeping, 0 stopped, 0 zombie\nCpu(s): 62.5%us, 2.2%sy, 0.0%ni, 34.9%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st\nMem: 8194800k total, 7948928k used, 245872k free, 36k buffers\nSwap: 2031608k total, 161136k used, 1870472k free, 7129744k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n22120 postgres 20 0 2514m 17m 13m R 11 0.2 0:01.02 postmaster\n18497 postgres 20 0 2514m 11m 8832 R 6 0.1 0:00.62 postmaster\n22962 postgres 20 0 2514m 12m 9548 R 6 0.2 0:00.22 postmaster\n24002 postgres 20 0 2514m 11m 8804 R 6 0.1 0:00.15 postmaster\n25900 postgres 20 0 2514m 11m 8824 R 6 0.1 0:00.55 postmaster\n 8941 postgres 20 0 2324m 6172 4676 R 5 0.1 0:00.32 postmaster\n10622 postgres 20 0 2514m 12m 9444 R 5 0.2 0:00.79 postmaster\n14021 postgres 20 0 2514m 11m 8548 R 5 0.1 0:00.28 postmaster\n14075 postgres 20 0 2514m 11m 8672 R 5 0.1 0:00.27 postmaster\n14423 postgres 20 0 2514m 11m 8572 R 5 0.1 0:00.29 postmaster\n18896 postgres 20 0 2324m 5644 4204 R 5 0.1 0:00.11 postmaster\n18897 postgres 20 0 2514m 12m 9800 R 5 0.2 0:00.27 postmaster\n18928 postgres 20 0 2514m 11m 8792 R 5 0.1 0:00.18 postmaster\n18973 postgres 20 0 2514m 11m 8792 R 5 0.1 0:00.70 postmaster\n22049 postgres 20 0 2514m 17m 14m R 5 0.2 0:01.11 postmaster\n22050 postgres 20 0 2514m 16m 13m R 5 0.2 0:01.06 postmaster\n22843 postgres 20 0 2514m 12m 9328 R 5 0.2 0:00.20 postmaster\n24202 postgres 20 0 2324m 5560 4120 R 5 0.1 0:00.07 postmaster\n24388 postgres 20 0 2514m 12m 9380 R 5 0.2 0:00.16 postmaster\n25903 postgres 20 0 2514m 11m 8828 R 5 0.1 0:00.55 postmaster\n28362 postgres 20 0 2514m 11m 8952 R 5 0.1 0:00.48 postmaster\n 5667 postgres 20 0 2324m 6752 5588 R 4 0.1 0:08.93 postmaster\n 7531 postgres 20 0 2324m 5452 4008 R 4 0.1 0:03.21 postmaster\n 9219 postgres 20 0 2514m 11m 8476 R 4 0.1 
0:00.89 postmaster\n 9820 postgres 20 0 2514m 12m 9.9m R 4 0.2 0:00.92 postmaster\n10050 postgres 20 0 2324m 6172 4676 R 4 0.1 0:00.31 postmaster\n10645 postgres 20 0 2514m 12m 9512 R 4 0.2 0:00.72 postmaster\n14582 postgres 20 0 2514m 25m 21m R 4 0.3 0:02.10 postmaster\n18502 postgres 20 0 2514m 11m 9040 R 4 0.1 0:00.64 postmaster\n18972 postgres 20 0 2514m 11m 8792 R 4 0.1 0:00.76 postmaster\n18975 postgres 20 0 2514m 11m 8904 R 4 0.1 0:00.63 postmaster\n19496 postgres 20 0 2514m 14m 11m R 4 0.2 0:00.44 postmaster\n22121 postgres 20 0 2514m 16m 13m R 4 0.2 0:00.81 postmaster\n24340 postgres 20 0 2514m 12m 9424 R 4 0.2 0:00.15 postmaster\n24483 postgres 20 0 2324m 6008 4536 R 4 0.1 0:00.21 postmaster\n25668 postgres 20 0 2514m 16m 13m R 4 0.2 0:00.91 postmaster\n26382 postgres 20 0 2514m 11m 8996 R 4 0.1 0:00.50 postmaster\n28363 postgres 20 0 2514m 11m 8908 R 4 0.1 0:00.34 postmaster\n 9754 postgres 20 0 2514m 11m 8752 R 3 0.1 0:00.29 postmaster\n16113 postgres 20 0 2514m 17m 14m R 3 0.2 0:01.10 postmaster\n18498 postgres 20 0 2514m 11m 8844 R 3 0.1 0:00.63 postmaster\n18500 postgres 20 0 2514m 11m 8812 R 3 0.1 0:00.66 postmaster\n22116 postgres 20 0 2514m 17m 13m R 3 0.2 0:01.05 postmaster\n22287 postgres 20 0 2324m 6072 4596 R 3 0.1 0:00.24 postmaster\n22425 postgres 20 0 2514m 17m 14m R 3 0.2 0:01.02 postmaster\n22827 postgres 20 0 2514m 13m 10m R 3 0.2 0:00.43 postmaster\n23285 postgres 20 0 2514m 13m 10m R 3 0.2 0:00.40 postmaster\n24384 postgres 20 0 2514m 12m 9300 R 3 0.2 0:00.15 postmaster\n30501 postgres 20 0 2514m 11m 9012 R 3 0.1 0:00.47 postmaster\n 5665 postgres 20 0 2324m 6528 5396 R 2 0.1 0:08.71 postmaster\n 5671 postgres 20 0 2324m 6720 5596 R 2 0.1 0:08.73 postmaster\n 7428 postgres 20 0 2324m 6176 4928 R 2 0.1 0:07.37 postmaster\n 7431 postgres 20 0 2324m 6140 4920 R 2 0.1 0:07.40 postmaster\n 7433 postgres 20 0 2324m 6372 4924 R 2 0.1 0:07.29 postmaster\n 7525 postgres 20 0 2324m 5468 4024 R 2 0.1 0:03.36 postmaster\n 7530 postgres 20 0 2324m 5452 4008 R 2 0.1 0:03.40 postmaster\n 7532 postgres 20 0 2324m 5440 3996 R 2 0.1 0:03.23 postmaster\n 7533 postgres 20 0 2324m 5484 4040 R 2 0.1 0:03.25 postmaster\n 8944 postgres 20 0 2514m 26m 23m R 2 0.3 0:02.16 postmaster\n 8946 postgres 20 0 2514m 26m 22m R 2 0.3 0:02.06 postmaster\n 9821 postgres 20 0 2514m 12m 9948 R 2 0.2 0:00.93 postmaster\n10051 postgres 20 0 2514m 13m 10m R 2 0.2 0:01.03 postmaster\n10226 postgres 20 0 2514m 27m 23m R 2 0.3 0:02.24 postmaster\n10626 postgres 20 0 2514m 12m 9212 R 2 0.1 0:00.83 postmaster\n14580 postgres 20 0 2324m 6120 4632 R 2 0.1 0:00.27 postmaster\n16112 postgres 20 0 2514m 18m 14m R 2 0.2 0:01.18 postmaster\n19450 postgres 20 0 2324m 6108 4620 R 2 0.1 0:00.22 postmaster\n22289 postgres 20 0 2514m 22m 19m R 2 0.3 0:01.66 postmaster\n 5663 postgres 20 0 2324m 6700 5576 R 1 0.1 0:08.23 postmaster\n 7526 postgres 20 0 2324m 5444 4000 R 1 0.1 0:03.44 postmaster\n 7528 postgres 20 0 2324m 5444 4000 R 1 0.1 0:03.44 postmaster\n 7529 postgres 20 0 2324m 5420 3976 R 1 0.1 0:03.04 postmaster\n 8888 postgres 20 0 2514m 25m 22m R 1 0.3 0:02.01 postmaster\n 9622 postgres 20 0 2514m 13m 10m R 1 0.2 0:01.08 postmaster\n 9625 postgres 20 0 2514m 13m 10m R 1 0.2 0:01.00 postmaster\n14686 postgres 20 0 2324m 6116 4628 R 1 0.1 0:00.30 postmaster\n14687 postgres 20 0 2514m 24m 21m R 1 0.3 0:01.95 postmaster\n16111 postgres 20 0 2514m 17m 14m R 1 0.2 0:01.01 postmaster\n16854 postgres 20 0 2324m 5468 4024 R 1 0.1 0:03.31 postmaster\n 5664 postgres 20 0 2324m 6740 5584 R 0 0.1 0:08.45 postmaster\n 5666 
postgres 20 0 2324m 6744 5584 R 0 0.1 0:08.70 postmaster\n 5668 postgres 20 0 2324m 6720 5588 R 0 0.1 0:08.58 postmaster\n 5670 postgres 20 0 2324m 6748 5584 R 0 0.1 0:08.99 postmaster\n 5672 postgres 20 0 2324m 6764 5596 R 0 0.1 0:08.30 postmaster\n 7429 postgres 20 0 2324m 6000 4760 R 0 0.1 0:07.41 postmaster\n 7430 postgres 20 0 2324m 6080 4928 R 0 0.1 0:07.09 postmaster\n 7463 postgres 20 0 2324m 6412 4928 R 0 0.1 0:07.14 postmaster\n 7538 postgres 20 0 2324m 5472 4028 R 0 0.1 0:03.42 postmaster\n 8887 postgres 20 0 2324m 6184 4680 R 0 0.1 0:00.23 postmaster\n 8942 postgres 20 0 2514m 26m 22m R 0 0.3 0:01.97 postmaster\n10636 postgres 20 0 2514m 12m 9380 R 0 0.2 0:00.75 postmaster\n10640 postgres 20 0 2514m 11m 9148 R 0 0.1 0:00.75 postmaster\n15687 postgres 20 0 2321m 35m 35m R 0 0.4 8:38.38 postmaster\n\n",
"msg_date": "Thu, 24 Jun 2010 17:50:26 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On Thu, 2010-06-24 at 17:50 -0700, Craig James wrote:\n> I'm reviving this question because I never figured it out. To summarize: At random intervals anywhere from a few times per hour to once or twice a day, we see a huge spike in CPU load that essentially brings the system to a halt for up to a minute or two. Previous answers focused on \"what is it doing\", i.e. is it really Postgres or something else?\n> \n> Now the question has narrowed down to this: what could trigger EVERY postgres backend to do something at the same time? See the attached output from \"top -b\", which shows what is happening during one of the CPU spikes.\n\ncheckpoint causing IO Wait.\n\nWhat does sar say about these times?\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\n\n",
"msg_date": "Thu, 24 Jun 2010 18:01:40 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James wrote:\n> Now the question has narrowed down to this: what could trigger EVERY \n> postgres backend to do something at the same time? See the attached \n> output from \"top -b\", which shows what is happening during one of the \n> CPU spikes.\n\nBy the way: you probably want \"top -b -c\", which will actually show you \nwhat each client is doing via inspecting what it's set its command line to.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 24 Jun 2010 21:13:39 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> So what is it that will cause every single Postgres backend to come to life at the same moment, when there's no real load on the server? Maybe if a backend crashes? Some other problem?\n\nsinval queue overflow comes to mind ... although that really shouldn't\nhappen if there's \"no real load\" on the server. What PG version is\nthis? Also, the pg_stat_activity view contents when this happens would\nprobably be more useful to look at than \"top\" output.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Jun 2010 00:04:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load "
},
{
"msg_contents": "On 6/24/10 9:04 PM, Tom Lane wrote:\n> Craig James<[email protected]> writes:\n>> So what is it that will cause every single Postgres backend to come to life at the same moment, when there's no real load on the server? Maybe if a backend crashes? Some other problem?\n>\n> sinval queue overflow comes to mind ... although that really shouldn't\n> happen if there's \"no real load\" on the server. What PG version is\n> this?\n\n8.3.10. Upgraded based on your advice when I first asked this question.\n\n> Also, the pg_stat_activity view contents when this happens would\n> probably be more useful to look at than \"top\" output.\n\nI'll try. It's hard to discover anything because the whole machine is overwhelmed when this happens. The only way I got the top(1) output was by running it high priority as root using nice(1). I can't do that with a Postgres backend, but I'll see what I can do.\n\nCraig\n",
"msg_date": "Fri, 25 Jun 2010 06:48:43 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> On 6/24/10 9:04 PM, Tom Lane wrote:\n>> sinval queue overflow comes to mind ... although that really shouldn't\n>> happen if there's \"no real load\" on the server. What PG version is\n>> this?\n\n> 8.3.10. Upgraded based on your advice when I first asked this question.\n\nAny chance of going to 8.4? If this is what I suspect, you really need\nthis 8.4 fix:\nhttp://archives.postgresql.org/pgsql-committers/2008-06/msg00227.php\nwhich eliminated the thundering-herd behavior that previous releases\nexhibit when the sinval queue overflows.\n\nIf you're stuck on 8.3 then you are going to have to modify your\napplication's behavior to eliminate sinval overflows. If the overall\nsystem load isn't high then I would have to guess that the problem is\nsome individual sessions sitting \"idle in transaction\" for long periods,\nlong enough that a number of DDL operations happen elsewhere.\n\nYou could also consider throwing memory at the problem by raising the\nsinval queue size. That'd require a custom build since it's not exposed\nas a configurable parameter, but it'd be a one-line patch I think.\n\nOr you could look at using connection pooling so you don't have quite\nso many backends ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Jun 2010 10:47:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load "
},
{
"msg_contents": "Dear Craig,\n\nAlso check for the possibility of installing sysstat on your system.\nIt goes a long way in collecting system stats. You may\nconsider increasing the frequency of data collection by\nchanging the interval of the cron job manually in /etc/cron.d/;\nnormally it's */10, but you may make it */2 for the time being.\nThe software automatically maintains historical records\nof the data for 1 month.",
"msg_date": "Fri, 25 Jun 2010 21:37:03 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On 6/25/10 7:47 AM, Tom Lane wrote:\n> Craig James<[email protected]> writes:\n>> On 6/24/10 9:04 PM, Tom Lane wrote:\n>>> sinval queue overflow comes to mind ... although that really shouldn't\n>>> happen if there's \"no real load\" on the server. What PG version is\n>>> this?\n>\n>> 8.3.10. Upgraded based on your advice when I first asked this question.\n>\n> Any chance of going to 8.4? If this is what I suspect, you really need\n> this 8.4 fix:\n> http://archives.postgresql.org/pgsql-committers/2008-06/msg00227.php\n> which eliminated the thundering-herd behavior that previous releases\n> exhibit when the sinval queue overflows.\n\nYes, there is a chance of upgrading to 8.4.4. I just bought a new server and it has 8.4.4 on it, but it won't be online for a while so I can't compare yet. This may motivate me to upgrade the current servers to 8.4.4 too. I was pleased to see that 8.4 has a new upgrade-in-place feature that means we don't have to dump/restore. That really helps a lot.\n\nA question about 8.4.4: I've been having problems with bloat. I thought I'd adjusted the FSM parameters correctly based on advice I got here, but apparently not. 8.4.4 has removed the configurable FSM parameters completely, which is very cool. But ... if I upgrade a bloated database using the upgrade-in-place feature, will 8.4.4 recover the bloat and return it to the OS, or do I still have to recover the space manually (like vacuum-full/reindex, or cluster, or copy/drop a table)?\n\n> Or you could look at using connection pooling so you don't have quite\n> so many backends ...\n\nI always just assumed that lots of backends that would be harmless if each one was doing very little. If I understand your explanation, it sounds like that's not entirely true in pre-8.4.4 releases due to the sinval queue problems.\n\nThanks,\nCraig\n",
"msg_date": "Fri, 25 Jun 2010 09:16:57 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James <[email protected]> wrote:\n \n> I always just assumed that lots of backends that would be harmless\n> if each one was doing very little.\n \nEven if each is doing very little, if a large number of them happen\nto make a request at the same time, you can have problems. This is\nexactly where a connection pool can massively improve both\nthroughput and response time. If you can arrange it, you want a\nconnection pool which will put a limit on active database\ntransactions and queue requests to start a new transaction until one\nof the pending ones finishes.\n \n-Kevin\n",
"msg_date": "Fri, 25 Jun 2010 11:41:11 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James wrote:\n> if I upgrade a bloated database using the upgrade-in-place feature, \n> will 8.4.4 recover the bloat and return it to the OS, or do I still \n> have to recover the space manually (like vacuum-full/reindex, or \n> cluster, or copy/drop a table)?\n\nThere's no way for an upgrade in place to do anything about bloat. The \nchanges in 8.4 reduce the potential sources for new bloat (like running \nout of FSM pages), and the overhead of running VACUUM drops some due \nto things like the \"Partial VACUUM\" changes. But existing bloated \ntables and indexes are moved forward to the new version without any change.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 25 Jun 2010 12:55:17 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "On 6/25/10 9:41 AM, Kevin Grittner wrote:\n> Craig James<[email protected]> wrote:\n>\n>> I always just assumed that lots of backends that would be harmless\n>> if each one was doing very little.\n>\n> Even if each is doing very little, if a large number of them happen\n> to make a request at the same time, you can have problems. This is\n> exactly where a connection pool can massively improve both\n> throughput and response time. If you can arrange it, you want a\n> connection pool which will put a limit on active database\n> transactions and queue requests to start a new transaction until one\n> of the pending ones finishes.\n\nNo, that doesn't seem to be the case. There is no external activity that triggers this huge spike in usage. It even happens to our backup server when only one of us is using it to do a single query. This problem seems to be triggered by Postgres itself, not by anything external.\n\nPer Tom's suggestion, I think upgrading to 8.4.4 is the answer. I'll learn more when our new hardware comes into use with a shiny new 8.4.4 installation.\n\nCraig\n",
"msg_date": "Fri, 25 Jun 2010 10:01:21 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> On 6/25/10 7:47 AM, Tom Lane wrote:\n>> Any chance of going to 8.4? If this is what I suspect, you really need\n>> this 8.4 fix:\n>> http://archives.postgresql.org/pgsql-committers/2008-06/msg00227.php\n>> which eliminated the thundering-herd behavior that previous releases\n>> exhibit when the sinval queue overflows.\n\n> Yes, there is a chance of upgrading to 8.4.4. I just bought a new server and it has 8.4.4 on it, but it won't be online for a while so I can't compare yet. This may motivate me to upgrade the current servers to 8.4.4 too. I was pleased to see that 8.4 has a new upgrade-in-place feature that means we don't have to dump/restore. That really helps a lot.\n\nI wouldn't put a lot of faith in pg_migrator for an 8.3 to 8.4\nconversion ... it might work, but test it on a copy of your DB first.\nPossibly it'll actually be recommendable in 9.0.\n\n> A question about 8.4.4: I've been having problems with bloat. I thought I'd adjusted the FSM parameters correctly based on advice I got here, but apparently not. 8.4.4 has removed the configurable FSM parameters completely, which is very cool. But ... if I upgrade a bloated database using the upgrade-in-place feature, will 8.4.4 recover the bloat and return it to the OS, or do I still have to recover the space manually (like vacuum-full/reindex, or cluster, or copy/drop a table)?\n\nNo, an in-place upgrade to 8.4 isn't magically going to fix that. This\nmight actually be sufficient reason to stick with the tried&true dump\nand reload method, since you're going to have to do something fairly\nexpensive anyway to clean out the bloat.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 Jun 2010 13:01:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Occasional giant spikes in CPU load "
},
{
"msg_contents": "I've got a new server and want to make sure it's running well. Are these pretty decent numbers?\n\n8 cores (2x4 Intel Nehalem 2 GHz)\n12 GB memory\n12 x 7200 SATA 500 GB disks\n3WARE 9650SE-12ML RAID controller with BBU\n WAL on ext2, 2 disks: RAID1 500GB, blocksize=4096\n Database on ext4, 8 disks: RAID10 2TB, stripe size 64K, blocksize=4096\nUbuntu 10.04 LTS (Lucid)\nPostgres 8.4.4\n\npgbench -i -s 100 -U test\npgbench -c 5 -t 20000 -U test\ntps = 4903\npgbench -c 10 -t 10000 -U test\ntps = 4070\npgbench -c20 -t 5000 -U test\ntps = 5789\npgbench -c30 -t 3333 -U test\ntps = 6961\npgbench -c40 -t 2500 -U test\ntps = 2945\n\nThanks,\nCraig\n\n\n",
"msg_date": "Fri, 25 Jun 2010 11:53:01 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "pgbench results on a new server"
},
{
"msg_contents": "On Fri, Jun 25, 2010 at 2:53 PM, Craig James <[email protected]> wrote:\n> I've got a new server and want to make sure it's running well. Are these\n> pretty decent numbers?\n>\n> 8 cores (2x4 Intel Nehalem 2 GHz)\n> 12 GB memory\n> 12 x 7200 SATA 500 GB disks\n> 3WARE 9650SE-12ML RAID controller with BBU\n> WAL on ext2, 2 disks: RAID1 500GB, blocksize=4096\n> Database on ext4, 8 disks: RAID10 2TB, stripe size 64K, blocksize=4096\n> Ubuntu 10.04 LTS (Lucid)\n> Postgres 8.4.4\n>\n> pgbench -i -s 100 -U test\n> pgbench -c 5 -t 20000 -U test\n> tps = 4903\n> pgbench -c 10 -t 10000 -U test\n> tps = 4070\n> pgbench -c20 -t 5000 -U test\n> tps = 5789\n> pgbench -c30 -t 3333 -U test\n> tps = 6961\n> pgbench -c40 -t 2500 -U test\n> tps = 2945\n\nNumbers are okay, but you likely need much longer tests to see how\nthey average out with the bgwriter / checkpoints happening, and keep\ntrack of your IO numbers to see where your dips are. I usually run\npgbench runs, once they seem to get decent numbers, for several hours\nnon-stop. Sometimes days during burn in. Note that running pgbench\non a machine other than the actual db is often a good idea so you're\nnot measuring how fast pgbench can run in contention with your own\ndatabase.\n",
"msg_date": "Fri, 25 Jun 2010 15:01:32 -0400",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results on a new server"
},
{
"msg_contents": "Craig James wrote:\n> I've got a new server and want to make sure it's running well.\n\nAny changes to the postgresql.conf file? Generally you need at least a \nmoderate shared_buffers (1GB or so at a minimum) and checkpoint_segments \n(32 or higher) in order for the standard pgbench test to give good results.\n\n\n> pgbench -c20 -t 5000 -U test\n> tps = 5789\n> pgbench -c30 -t 3333 -U test\n> tps = 6961\n> pgbench -c40 -t 2500 -U test\n> tps = 2945\n\nGeneral numbers are OK, the major drop going from 30 to 40 clients is \nlarger than it should be. I'd suggest running the 40 client count one \nagain to see if that's consistent. If it is, that may just be pgbench \nitself running into a problem. It doesn't handle high client counts \nvery well unless you use the 9.0 version that supports multiple pgbench \nworkers with the \"-j\" option.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 25 Jun 2010 15:03:30 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results on a new server"
},
{
"msg_contents": "On 6/25/10 12:03 PM, Greg Smith wrote:\n> Craig James wrote:\n>> I've got a new server and want to make sure it's running well.\n>\n> Any changes to the postgresql.conf file? Generally you need at least a\n> moderate shared_buffers (1GB or so at a minimum) and checkpoint_segments\n> (32 or higher) in order for the standard pgbench test to give good results.\n\nmax_connections = 500\nshared_buffers = 1000MB\nwork_mem = 128MB\nsynchronous_commit = off\nfull_page_writes = off\nwal_buffers = 256kB\ncheckpoint_segments = 30\neffective_cache_size = 4GB\n\nFor fun I ran it with the installation defaults, and it never got above 1475 TPS.\n\n>> pgbench -c20 -t 5000 -U test\n>> tps = 5789\n>> pgbench -c30 -t 3333 -U test\n>> tps = 6961\n>> pgbench -c40 -t 2500 -U test\n>> tps = 2945\n>\n> General numbers are OK, the major drop going from 30 to 40 clients is\n> larger than it should be. I'd suggest running the 40 client count one\n> again to see if that's consistent.\n\nIt is consistent. When I run pgbench from a different server, I get this:\n\n pgbench -c40 -t 2500 -U test\n tps = 7999\n\n pgbench -c100 -t 1000 -U test\n tps = 6693\n\nCraig\n",
"msg_date": "Mon, 28 Jun 2010 10:12:41 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results on a new server"
},
{
"msg_contents": "On Mon, Jun 28, 2010 at 1:12 PM, Craig James <[email protected]> wrote:\n> On 6/25/10 12:03 PM, Greg Smith wrote:\n>>\n>> Craig James wrote:\n>>>\n>>> I've got a new server and want to make sure it's running well.\n>>\n>> Any changes to the postgresql.conf file? Generally you need at least a\n>> moderate shared_buffers (1GB or so at a minimum) and checkpoint_segments\n>> (32 or higher) in order for the standard pgbench test to give good\n>> results.\n>\n> max_connections = 500\n> shared_buffers = 1000MB\n> work_mem = 128MB\n> synchronous_commit = off\n> full_page_writes = off\n> wal_buffers = 256kB\n> checkpoint_segments = 30\n> effective_cache_size = 4GB\n>\n> For fun I ran it with the installation defaults, and it never got above 1475\n> TPS.\n>\n>>> pgbench -c20 -t 5000 -U test\n>>> tps = 5789\n>>> pgbench -c30 -t 3333 -U test\n>>> tps = 6961\n>>> pgbench -c40 -t 2500 -U test\n>>> tps = 2945\n>>\n>> General numbers are OK, the major drop going from 30 to 40 clients is\n>> larger than it should be. I'd suggest running the 40 client count one\n>> again to see if that's consistent.\n>\n> It is consistent. When I run pgbench from a different server, I get this:\n>\n> pgbench -c40 -t 2500 -U test\n> tps = 7999\n>\n> pgbench -c100 -t 1000 -U test\n> tps = 6693\n\n6k tps over 8 7200 rpm disks is quite good imo. synchronous_commit\nsetting is making that possible. building a server that could handle\nthat much was insanely expensive just a few years ago, on relatively\ncheap storage. that's 21m transactions an hour or ~ half a billion\ntransactions a day (!). running this kind of load 24x7 on postgres\n7.x would have been an enormous headache. how quickly we forget! :-)\n\nyour 'real' tps write, 1475 tps, spread over 4 disks doing the actual\nwriting, is giving you ~ 370 tps/device. not bad at all -- the raid\ncontroller is doing a good job (the raw drive might get 200 or so). I\nbet performance will be somewhat worse with a higher scaling factor\n(say, 500) because there is less locality of writes -- something to\nconsider if you expect your database to get really big.\n\nmerlin\n",
"msg_date": "Tue, 29 Jun 2010 09:18:14 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results on a new server"
},
{
"msg_contents": "Craig James wrote:\n> synchronous_commit = off \n> full_page_writes = off\n\nI don't have any numbers handy on how much turning synchronous_commit \nand full_page_writes off improves performance on a system with a \nbattery-backed write cache. Your numbers are therefore a bit inflated \nagainst similar ones that are doing a regular sync commit. Just \nsomething to keep in mind when comparing against other people's results.\n\nAlso, just as a general comment, increases in work_mem and \neffective_cache_size don't actually do anything to the built-in pgbench \ntest results.\n\n>> General numbers are OK, the major drop going from 30 to 40 clients is\n>> larger than it should be. I'd suggest running the 40 client count one\n>> again to see if that's consistent.\n>\n> It is consistent. When I run pgbench from a different server, I get \n> this:\n>\n> pgbench -c40 -t 2500 -U test\n> tps = 7999\n>\n> pgbench -c100 -t 1000 -U test\n> tps = 6693\n\nLooks like you're just running into the limitations of the old pgbench \ncode failing to keep up with high client count loads when run on the \nsame system as the server. Nothing to be concerned about--the fact that the drop \nis only small when the pgbench client is remote says there's not actually a \nserver problem here.\n\nWith that sorted out, your system looks to be in the normal range for the sort \nof hardware you're using. I'm always concerned about the potential \nreliability issues that come with async commit and turning off full page \nwrites though, so you might want to re-test with those turned on and see \nif you can live with the results.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 30 Jun 2010 02:23:37 +0100",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench results on a new server"
},
{
"msg_contents": "I can query either my PARENT table joined to PRICES, or my VERSION table joined to PRICES, and get an answer in 30-40 msec. But put the two together, it jumps to 4 seconds. What am I missing here? I figured this query would be nearly instantaneous. The VERSION.ISOSMILES and PARENT.ISOSMILES columns both have unique indexes. Instead of using these indexes, it's doing a full-table scan of both tables, even though there can't possibly be more than one match in each table.\n\nI guess I could rewrite this as a UNION of the two subqueries, but that seems contrived.\n\nThis is PG 8.3.10 on Linux.\n\nThanks,\nCraig\n\n\n=> explain analyze select p.price, p.amount, p.units, s.catalogue_id, vn.version_id\n-> from plus p join sample s\n-> on (p.compound_id = s.compound_id and p.supplier_id = s.supplier_id)\n-> join version vn on (s.version_id = vn.version_id) join parent pn\n-> on (s.parent_id = pn.parent_id)\n-> where vn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'\n-> or pn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'\n-> order by price;\n\n Sort (cost=71922.00..71922.00 rows=1 width=19) (actual time=4337.114..4337.122 rows=10 loops=1)\n Sort Key: p.price Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=18407.53..71921.99 rows=1 width=19) (actual time=1122.685..4337.028 rows=10 loops=1)\n -> Hash Join (cost=18407.53..71903.71 rows=4 width=20) (actual time=1122.624..4336.682 rows=7 loops=1)\n Hash Cond: (s.version_id = vn.version_id)\n Join Filter: ((vn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'::text) OR (pn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'::text))\n -> Hash Join (cost=8807.15..44470.73 rows=620264 width=54) (actual time=431.501..2541.329 rows=620264 loops=1)\n Hash Cond: (s.parent_id = pn.parent_id)\n -> Seq Scan on sample s (cost=0.00..21707.64 rows=620264 width=24) (actual time=0.008..471.340 rows=620264 loops=1)\n -> Hash (cost=5335.40..5335.40 rows=277740 width=38) (actual time=431.166..431.166 rows=277740 loops=1)\n -> Seq Scan on parent pn (cost=0.00..5335.40 rows=277740 width=38) (actual time=0.012..195.822 rows=277740 loops=1)\n -> Hash (cost=5884.06..5884.06 rows=297306 width=38) (actual time=467.267..467.267 rows=297306 loops=1)\n -> Seq Scan on version vn (cost=0.00..5884.06 rows=297306 width=38) (actual time=0.017..215.285 rows=297306 loops=1)\n -> Index Scan using i_plus_compound_id on plus p (cost=0.00..4.51 rows=4 width=26) (actual time=0.039..0.041 rows=1 loops=7)\n Index Cond: ((p.supplier_id = s.supplier_id) AND (p.compound_id = s.compound_id))\n Total runtime: 4344.222 ms\n(17 rows)\n\n\nIf I only query the VERSION table, it's very fast:\n\nx=> explain analyze select p.price, p.amount, p.units, s.catalogue_id, vn.version_id\n-> from plus p\n-> join sample s on (p.compound_id = s.compound_id and p.supplier_id = s.supplier_id)\n-> join version vn on (s.version_id = vn.version_id)\n-> where vn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1' order by price;\n\nSort (cost=45.73..45.74 rows=1 width=19) (actual time=32.438..32.448 rows=10 loops=1)\n Sort Key: p.price\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..45.72 rows=1 width=19) (actual time=32.309..32.411 rows=10 loops=1)\n -> Nested Loop (cost=0.00..36.58 rows=2 width=20) (actual time=32.295..32.319 rows=7 loops=1)\n -> Index Scan using i_version_isosmiles on version vn (cost=0.00..8.39 rows=1 width=4) (actual time=32.280..32.281 rows=1 loops=1)\n Index Cond: (isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'::text)\n -> Index Scan using i_sample_version_id on sample s (cost=0.00..28.12 
rows=6 width=20) (actual time=0.011..0.024 rows=7 loops=1)\n Index Cond: (s.version_id = vn.version_id)\n -> Index Scan using i_plus_compound_id on plus p (cost=0.00..4.51 rows=4 width=26) (actual time=0.010..0.011 rows=1 loops=7)\n Index Cond: ((p.supplier_id = s.supplier_id) AND (p.compound_id = s.compound_id))\n Total runtime: 32.528 ms\n(12 rows)\n\n\nSame good performance if I only query the PARENT table:\n\nx=> explain analyze select p.price, p.amount, p.units, s.catalogue_id, pn.parent_id from plus p join sample s on (p.compound_id = s.compound_id and p.supplier_id = s.supplier_id) join parent pn on (s.parent_id = pn.parent_id) where pn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1' order by price;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=57.73..57.74 rows=1 width=19) (actual time=43.564..43.564 rows=10 loops=1)\n Sort Key: p.price\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..57.72 rows=1 width=19) (actual time=43.429..43.537 rows=10 loops=1)\n -> Nested Loop (cost=0.00..48.58 rows=2 width=20) (actual time=43.407..43.430 rows=7 loops=1)\n -> Index Scan using i_parent_isosmiles on parent pn (cost=0.00..8.38 rows=1 width=4) (actual time=27.342..27.343 rows=1 loops=1)\n Index Cond: (isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'::text)\n -> Index Scan using i_sample_parent_id on sample s (cost=0.00..40.09 rows=9 width=20) (actual time=16.057..16.070 rows=7 loops=1)\n Index Cond: (s.parent_id = pn.parent_id)\n -> Index Scan using i_plus_compound_id on plus p (cost=0.00..4.51 rows=4 width=26) (actual time=0.010..0.011 rows=1 loops=7)\n Index Cond: ((p.supplier_id = s.supplier_id) AND (p.compound_id = s.compound_id))\n Total runtime: 43.628 ms\n\n\n\n\nx=> \\d version\n Table \"x.version\"\n Column | Type | Modifiers\n------------+---------+-----------\n version_id | integer | not null\n parent_id | integer | not null\n isosmiles | text | not null\n coord_2d | text |\nIndexes:\n \"version_pkey\" PRIMARY KEY, btree (version_id)\n \"i_version_isosmiles\" UNIQUE, btree (isosmiles)\n \"i_version_parent_id\" btree (parent_id)\nForeign-key constraints:\n \"fk_parent\" FOREIGN KEY (parent_id) REFERENCES parent(parent_id) ON DELETE CASCADE\n\nx=> \\d parent\n Table \"x.parent\"\n Column | Type | Modifiers\n-----------+---------+-----------\n parent_id | integer | not null\n isosmiles | text | not null\n coord_2d | text |\nIndexes:\n \"parent_pkey\" PRIMARY KEY, btree (parent_id)\n \"i_parent_isosmiles\" UNIQUE, btree (isosmiles)\n\n=> \\d sample\n Table \"reaxys.sample\"\n Column | Type | Modifiers\n--------------------+---------+-----------------------------------------------------\n sample_id | integer | not null default nextval('sample_id_seq'::regclass)\n sample_id_src | integer |\n parent_id | integer | not null\n version_id | integer | not null\n supplier_id | integer | not null\n catalogue_id | integer | not null\n catalogue_issue_id | integer | not null\n load_id | integer | not null\n load_file_id | integer |\n compound_id | text | not null\n cas_number | text |\n purity | text |\n chemical_name | text |\n url | text |\n price_code | text |\n comment | text |\n salt_comment | text |\nIndexes:\n \"sample_pkey\" PRIMARY KEY, btree (sample_id)\n \"i_sample_casno\" btree (cas_number)\n \"i_sample_catalogue_id\" btree (catalogue_id)\n \"i_sample_catalogue_issue_id\" btree (catalogue_issue_id)\n \"i_sample_chem_name\" btree (chemical_name)\n 
\"i_sample_compound_id\" btree (compound_id)\n \"i_sample_load_id\" btree (load_id)\n \"i_sample_parent_id\" btree (parent_id)\n \"i_sample_sample_id_src\" btree (sample_id_src)\n \"i_sample_supplier_id\" btree (supplier_id)\n \"i_sample_version_id\" btree (version_id)\nForeign-key constraints:\n \"fk_item\" FOREIGN KEY (version_id) REFERENCES version(version_id) ON DELETE CASCADE\n",
"msg_date": "Thu, 05 Aug 2010 11:34:45 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Two fast searches turn slow when used with OR clause"
},
{
"msg_contents": "On Thu, Aug 5, 2010 at 2:34 PM, Craig James <[email protected]> wrote:\n> => explain analyze select p.price, p.amount, p.units, s.catalogue_id,\n> vn.version_id\n> -> from plus p join sample s\n> -> on (p.compound_id = s.compound_id and p.supplier_id = s.supplier_id)\n> -> join version vn on (s.version_id = vn.version_id) join parent pn\n> -> on (s.parent_id = pn.parent_id)\n> -> where vn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'\n> -> or pn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1'\n> -> order by price;\n\nWell, you can't evaluate the WHERE clause here until you've joined {s vn pn}.\n\n> If I only query the VERSION table, it's very fast:\n>\n> x=> explain analyze select p.price, p.amount, p.units, s.catalogue_id,\n> vn.version_id\n> -> from plus p\n> -> join sample s on (p.compound_id = s.compound_id and p.supplier_id =\n> s.supplier_id)\n> -> join version vn on (s.version_id = vn.version_id)\n> -> where vn.isosmiles = 'Fc1ncccc1B1OC(C)(C)C(C)(C)O1' order by price;\n\nBut here you can push the WHERE clause all the way down to the vn\ntable, and evaluate it right at the get go, which is pretty much\nexactly what is happening.\n\nIn the first case, you have to join all 297,306 vn rows against s,\nbecause they could be interesting if the other half of the WHERE\nclause turns out to hold. In the second case, you can throw away\n297,305 of those 297,306 rows before doing anything else, because\nthere's no possibility that they can ever be interesting.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Tue, 17 Aug 2010 23:22:32 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two fast searches turn slow when used with OR clause"
}
] |
[
{
"msg_contents": "Actually, swapping the order of the conditions did in fact make some\ndifference, strange.\n\nI ran the query a couple of times for each variation to see if the\ndifference in speed was just a coincidence or a pattern. Looks like the\nspeed really is different.\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1384.401..1384.402\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1384.399..1384.399\nrows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1384.431 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =\n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1710.166..1710.167\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1710.164..1710.164\nrows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1710.200 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1366.526..1366.527\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1366.525..1366.525\nrows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1366.552 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =\n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1685.395..1685.396\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1685.394..1685.394\nrows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1685.423 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1403.904..1403.905\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1403.903..1403.903\nrows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1403.931 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT 
TransactionID FROM Transactions WHERE ((Currency =\n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1689.014..1689.014\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1689.012..1689.012\nrows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1689.041 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1378.322..1378.323\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1378.320..1378.320\nrows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1378.349 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =\n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1696.830..1696.831\nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions\n (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1696.828..1696.828\nrows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1696.858 ms\n(4 rows)\n\n\n\n2010/4/6 <[email protected]>\n\n>\n> I mean the time you spent on prune which one is cheaper might be another\n> cost.\n> Thanks much!\n>\n> Xuefeng Zhu (Sherry)\n> Crown Consulting Inc. -- Oracle DBA\n> AIM Lab Data Team\n> (703) 925-3192\n>\n>\n>\n> *Sherry CTR Zhu/AWA/CNTR/FAA*\n> AJR-32, Aeronautical Information Mgmt Group\n>\n> 04/06/2010 03:13 PM\n> To\n> Robert Haas <[email protected]>\n> cc\n> Joel Jacobson <[email protected]>\n> Subject\n> Re: [PERFORM] LIMIT causes planner to do Index Scan using a less\n> optimal indexLink<Notes:///852576860052CAFA/DABA975B9FB113EB852564B5001283EA/15C2483F84B5A6D0852576FD006911AC>\n>\n>\n>\n> Have you tried before?\n>\n> Thanks much!\n>\n> Xuefeng Zhu (Sherry)\n> Crown Consulting Inc. -- Oracle DBA\n> AIM Lab Data Team\n> (703) 925-3192\n>\n>\n>\n> *Robert Haas <[email protected]>*\n>\n> 04/06/2010 03:07 PM\n> To\n> Sherry CTR Zhu/AWA/CNTR/FAA@FAA\n> cc\n> Joel Jacobson <[email protected]>\n> Subject\n> Re: [PERFORM] LIMIT causes planner to do Index Scan using a less\n> optimal index\n>\n>\n>\n>\n> On Tue, Apr 6, 2010 at 3:05 PM, <*[email protected]*<[email protected]>>\n> wrote:\n>\n> Just curious,\n>\n> Switch the where condition to try to make difference.\n>\n> how about change\n> ((accountid = 108) AND (currency = 'SEK'::bpchar))\n> to\n> ( (currency = 'SEK'::bpchar) AND (accountid = 108) ).\n>\n>\n> In earlier version of Oracle, this was common knowledge that optimizer took\n> the last condition index to use.\n>\n> Ignore me if you think this is no sence. 
I didn't have a time to read your\n> guys' all emails. \n>\n> PostgreSQL doesn't behave that way - it guesses which order will be\n> cheaper.\n>\n> ...Robert\n>\n>\n\n\n-- \nBest regards,\n\nJoel Jacobson\nGlue Finance\n\nE: [email protected]\nT: +46 70 360 38 01\n\nPostal address:\nGlue Finance AB\nBox 549\n114 11 Stockholm\nSweden\n\nVisiting address:\nGlue Finance AB\nBirger Jarlsgatan 14\n114 34 Stockholm\nSweden\n",
"msg_date": "Wed, 7 Apr 2010 00:30:17 +0200",
"msg_from": "Joel Jacobson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less optimal index"
},
{
"msg_contents": "On Tue, Apr 6, 2010 at 6:30 PM, Joel Jacobson <[email protected]> wrote:\n\n> Actually, swapping the order of the conditions did in fact make some\n> difference, strange.\n>\n> I ran the query a couple of times for each variation to see if the\n> difference in speed was just a coincidence or a pattern. Looks like the\n> speed really is different.\n>\n\nWow. That's very surprising to me...\n\n...Robert\n\nOn Tue, Apr 6, 2010 at 6:30 PM, Joel Jacobson <[email protected]> wrote:\nActually, swapping the order of the conditions did in fact make some difference, strange.I ran the query a couple of times for each variation to see if the difference in speed was just a coincidence or a pattern. Looks like the speed really is different.\nWow. That's very surprising to me......Robert",
"msg_date": "Tue, 6 Apr 2010 20:22:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less optimal index"
},
{
"msg_contents": "Guys,\n\n Thanks for trying and opening your mind. \n If you want to know how Oracle addressed this issue, here it is: index \non two columns. I remember that they told me in the training postgres has \nno this kind of index, can someone clarify?\n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\nJoel Jacobson <[email protected]> \n04/06/2010 06:30 PM\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA, [email protected]\ncc\nRobert Haas <[email protected]>\nSubject\nRe: [PERFORM] LIMIT causes planner to do Index Scan using a less optimal \nindex\n\n\n\n\n\n\nActually, swapping the order of the conditions did in fact make some \ndifference, strange.\n\nI ran the query a couple of times for each variation to see if the \ndifference in speed was just a coincidence or a pattern. Looks like the \nspeed really is different.\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID = \n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1384.401..1384.402 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1384.399..1384.399 rows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1384.431 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency = \n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1710.166..1710.167 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1710.164..1710.164 rows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1710.200 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID = \n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1366.526..1366.527 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1366.525..1366.525 rows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1366.552 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency = \n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n QUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1685.395..1685.396 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1685.394..1685.394 rows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1685.423 ms\n(4 rows)\n\nEXPLAIN ANALYZE 
SELECT TransactionID FROM Transactions WHERE ((AccountID = \n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1403.904..1403.905 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1403.903..1403.903 rows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1403.931 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency = \n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1689.014..1689.014 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1689.012..1689.012 rows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1689.041 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID = \n108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1378.322..1378.323 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1378.320..1378.320 rows=1 loops=1)\n Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1378.349 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency = \n'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1696.830..1696.831 \nrows=1 loops=1)\n -> Index Scan using transactions_pkey on transactions \n (cost=0.00..1260254.03 rows=10862 width=4) (actual \ntime=1696.828..1696.828 rows=1 loops=1)\n Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1696.858 ms\n(4 rows)\n\n\n\n2010/4/6 <[email protected]>\n\nI mean the time you spent on prune which one is cheaper might be another \ncost. \nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\nSherry CTR Zhu/AWA/CNTR/FAA \nAJR-32, Aeronautical Information Mgmt Group \n04/06/2010 03:13 PM \n\n\nTo\nRobert Haas <[email protected]> \ncc\nJoel Jacobson <[email protected]> \nSubject\nRe: [PERFORM] LIMIT causes planner to do Index Scan using a less \noptimal indexLink\n\n\n\n\n\n\n\nHave you tried before? \n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. 
-- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\nRobert Haas <[email protected]> \n04/06/2010 03:07 PM \n\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA \ncc\nJoel Jacobson <[email protected]> \nSubject\nRe: [PERFORM] LIMIT causes planner to do Index Scan using a less \noptimal index\n\n\n\n\n\n\n\n\nOn Tue, Apr 6, 2010 at 3:05 PM, <[email protected]> wrote: \n\nJust curious, \n\nSwitch the where condition to try to make difference. \n\nhow about change \n((accountid = 108) AND (currency = 'SEK'::bpchar)) \nto \n( (currency = 'SEK'::bpchar) AND (accountid = 108) ). \n\n\nIn earlier version of Oracle, this was common knowledge that optimizer \ntook the last condition index to use. \n\nIgnore me if you think this is no sence. I didn't have a time to read \nyour guys' all emails. \n\nPostgreSQL doesn't behave that way - it guesses which order will be \ncheaper.\n\n...Robert \n\n\n\n\n-- \nBest regards,\n\nJoel Jacobson\nGlue Finance\n\nE: [email protected]\nT: +46 70 360 38 01\n\nPostal address:\nGlue Finance AB\nBox 549\n114 11 Stockholm\nSweden\n\nVisiting address:\nGlue Finance AB\nBirger Jarlsgatan 14\n114 34 Stockholm\nSweden\n\nGuys,\n\n Thanks for trying and opening\nyour mind. \n If you want to know how Oracle\naddressed this issue, here it is: index on two columns. I remember\nthat they told me in the training postgres has no this kind of index, can\nsomeone clarify?\n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nJoel Jacobson <[email protected]>\n\n04/06/2010 06:30 PM\n\n\n\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA, [email protected]\n\n\ncc\nRobert Haas <[email protected]>\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal\nindex\n\n\n\n\n\n\n\n\nActually, swapping the order of the conditions did in\nfact make some difference, strange.\n\nI ran the query a couple of times for each variation to\nsee if the difference in speed was just a coincidence or a pattern. 
Looks\nlike the speed really is different.\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1384.401..1384.402 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1384.399..1384.399 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1384.431 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1710.166..1710.167 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1710.164..1710.164 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1710.200 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1366.526..1366.527 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1366.525..1366.525 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1366.552 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1685.395..1685.396 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1685.394..1685.394 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1685.423 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1403.904..1403.905 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1403.903..1403.903 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1403.931 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n 
\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1689.014..1689.014 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1689.012..1689.012 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1689.041 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1378.322..1378.323 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1378.320..1378.320 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1378.349 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1696.830..1696.831 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1696.828..1696.828 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1696.858 ms\n(4 rows)\n\n\n\n2010/4/6 <[email protected]>\n\nI mean the time you spent on prune which one is cheaper might be another\ncost. \nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nSherry CTR Zhu/AWA/CNTR/FAA\n\nAJR-32, Aeronautical Information Mgmt\nGroup \n04/06/2010 03:13 PM\n\n\n\n\n\n\nTo\nRobert Haas <[email protected]>\n\n\n\ncc\nJoel Jacobson <[email protected]>\n\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal indexLink\n\n\n\n\n\n\n\n\n\nHave you tried before? \n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nRobert Haas <[email protected]>\n\n04/06/2010 03:07 PM\n\n\n\n\n\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA\n\n\n\ncc\nJoel Jacobson <[email protected]>\n\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal index\n\n\n\n\n\n\n\n\n\n\nOn Tue, Apr 6, 2010 at 3:05 PM, <[email protected]>\nwrote: \n\nJust curious, \n\nSwitch the where condition to try to make difference.\n\n\nhow about change \n((accountid = 108) AND (currency = 'SEK'::bpchar))\n\nto \n( (currency = 'SEK'::bpchar)\nAND (accountid = 108)\n). \n\n\nIn earlier version of Oracle, this was common knowledge that optimizer\ntook the last condition index to use. \n\nIgnore me if you think this is no sence. I didn't have a time to\nread your guys' all emails. 
\n\nPostgreSQL doesn't behave that way - it guesses which order will be cheaper.\n\n...Robert \n\n\n\n\n-- \nBest regards,\n\nJoel Jacobson\nGlue Finance\n\nE: [email protected]\nT: +46 70 360 38 01\n\nPostal address:\nGlue Finance AB\nBox 549\n114 11 Stockholm\nSweden\n\nVisiting address:\nGlue Finance AB\nBirger Jarlsgatan 14\n114 34 Stockholm\nSweden",
"msg_date": "Wed, 7 Apr 2010 08:20:54 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less \toptimal index"
},
{
"msg_contents": "On Wed, Apr 7, 2010 at 1:20 PM, <[email protected]> wrote:\n\n>\n> Guys,\n>\n> Thanks for trying and opening your mind.\n> If you want to know how Oracle addressed this issue, here it is: index\n> on two columns. I remember that they told me in the training postgres has\n> no this kind of index, can someone clarify?\n>\n>\nlies. postgresql allows you indices on multiple columns. What it does not\nhave, is index on multiple tables.\n\n\n\n-- \nGJ\n\nOn Wed, Apr 7, 2010 at 1:20 PM, <[email protected]> wrote:\nGuys,\n\n Thanks for trying and opening\nyour mind. \n If you want to know how Oracle\naddressed this issue, here it is: index on two columns. I remember\nthat they told me in the training postgres has no this kind of index, can\nsomeone clarify?\n\nlies. postgresql allows you indices on multiple columns. What it does not have, is index on multiple tables.-- GJ",
"msg_date": "Wed, 7 Apr 2010 13:51:17 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less optimal index"
},
{
"msg_contents": "Do you mean one index on two columns?\n\nsomething like this: create index idx1 on tb1(col1, col2);\n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\nGrzegorz Jaśkiewicz <[email protected]> \n04/07/2010 08:51 AM\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA\ncc\nJoel Jacobson <[email protected]>, [email protected], \nRobert Haas <[email protected]>\nSubject\nRe: [PERFORM] LIMIT causes planner to do Index Scan using a less optimal \nindex\n\n\n\n\n\n\n\n\nOn Wed, Apr 7, 2010 at 1:20 PM, <[email protected]> wrote:\n\nGuys, \n\n Thanks for trying and opening your mind. \n If you want to know how Oracle addressed this issue, here it is: index \non two columns. I remember that they told me in the training postgres has \nno this kind of index, can someone clarify? \n\n\nlies. postgresql allows you indices on multiple columns. What it does not \nhave, is index on multiple tables.\n\n\n\n-- \nGJ\n\nDo you mean one index on two columns?\n\nsomething like this: create index\nidx1 on tb1(col1, col2);\n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nGrzegorz Jaśkiewicz <[email protected]>\n\n04/07/2010 08:51 AM\n\n\n\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA\n\n\ncc\nJoel Jacobson <[email protected]>,\[email protected], Robert Haas <[email protected]>\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal\nindex\n\n\n\n\n\n\n\n\n\n\nOn Wed, Apr 7, 2010 at 1:20 PM, <[email protected]>\nwrote:\n\nGuys, \n\n Thanks for trying and opening your mind. \n\n If you want to know how Oracle addressed this issue, here it is:\n index on two columns. I remember that they told me in the training\npostgres has no this kind of index, can someone clarify?\n\n\n\nlies. postgresql allows you indices on multiple columns. What it does not\nhave, is index on multiple tables.\n\n\n\n-- \nGJ",
"msg_date": "Wed, 7 Apr 2010 09:08:36 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less \toptimal index"
},
{
"msg_contents": "2010/4/7 <[email protected]>\n\n>\n> Do you mean one index on two columns?\n>\n> something like this: create index idx1 on tb1(col1, col2);\n>\nyup :) It would be quite useless without that feature.\nDon't listen to oracle folks, they obviously know not much about products\nothers than oracle db(s).\n\n\n\n\n\n-- \nGJ\n\n2010/4/7 <[email protected]>\nDo you mean one index on two columns?\n\nsomething like this: create index\nidx1 on tb1(col1, col2);\nyup :) It would be quite useless without that feature. Don't listen to oracle folks, they obviously know not much about products others than oracle db(s). \n-- GJ",
"msg_date": "Wed, 7 Apr 2010 14:12:35 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less optimal index"
},
{
"msg_contents": "Please just let me know if Postgres can do this kind of index or not. \n\ncreate index idx1 on tb1(col1, col2)\n\nThen later we can find it is useful or useless. \n\n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\nGrzegorz Jaśkiewicz <[email protected]> \nSent by: [email protected]\n04/07/2010 09:12 AM\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA\ncc\nJoel Jacobson <[email protected]>, [email protected], \nRobert Haas <[email protected]>\nSubject\nRe: [PERFORM] LIMIT causes planner to do Index Scan using a less optimal \nindex\n\n\n\n\n\n\n\n\n2010/4/7 <[email protected]>\n\nDo you mean one index on two columns? \n\nsomething like this: create index idx1 on tb1(col1, col2); \nyup :) It would be quite useless without that feature. \nDon't listen to oracle folks, they obviously know not much about products \nothers than oracle db(s). \n \n\n\n\n\n-- \nGJ\n\nPlease just let me know if Postgres\ncan do this kind of index or not. \n\ncreate index idx1 on tb1(col1, col2)\n\nThen later we can find it is useful\nor useless. \n\n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nGrzegorz Jaśkiewicz <[email protected]>\n\nSent by: [email protected]\n04/07/2010 09:12 AM\n\n\n\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA\n\n\ncc\nJoel Jacobson <[email protected]>,\[email protected], Robert Haas <[email protected]>\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal\nindex\n\n\n\n\n\n\n\n\n\n\n2010/4/7 <[email protected]>\n\nDo you mean one index on two columns? \n\nsomething like this: create index idx1 on tb1(col1, col2);\n\nyup :) It would be quite useless without that feature.\n\nDon't listen to oracle folks, they obviously know not much about products\nothers than oracle db(s). \n \n\n\n\n\n-- \nGJ",
"msg_date": "Wed, 7 Apr 2010 09:22:51 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less \toptimal index"
},
{
"msg_contents": "On Wed, 7 Apr 2010, [email protected] wrote:\n> Please just let me know if Postgres can do this kind of index or not.\n>\n> create index idx1 on tb1(col1, col2)\n>\n> Then later we can find it is useful or useless.\n\nHave you tried it?\n\n> Grzegorz Jaśkiewicz <[email protected]> wrote: \n>> something like this: create index idx1 on tb1(col1, col2);\n>> yup :)\n\nFor those of you who are not native English speakers, \"Yup\" is a synonym \nfor \"Yes.\"\n\nMatthew\n\n-- \nRichards' Laws of Data Security:\n 1. Don't buy a computer.\n 2. If you must buy a computer, don't turn it on.",
"msg_date": "Wed, 7 Apr 2010 14:36:44 +0100 (BST)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less optimal index"
},
{
"msg_contents": "Hi Xuefeng,\n\nYou have misunderstood the problem.\n\nThe index used in the query not containing the \"LIMIT 1\" part, is \"\nindex_transactions_accountid_currency\", which is indeed a two column index.\n\nThe problem is this index is not used when using \"LIMIT 1\".\n\n\n\n2010/4/7 <[email protected]>\n\n>\n> Guys,\n>\n> Thanks for trying and opening your mind.\n> If you want to know how Oracle addressed this issue, here it is: index\n> on two columns. I remember that they told me in the training postgres has\n> no this kind of index, can someone clarify?\n>\n> Thanks much!\n>\n> Xuefeng Zhu (Sherry)\n> Crown Consulting Inc. -- Oracle DBA\n> AIM Lab Data Team\n> (703) 925-3192\n>\n>\n>\n> *Joel Jacobson <[email protected]>*\n>\n> 04/06/2010 06:30 PM\n> To\n> Sherry CTR Zhu/AWA/CNTR/FAA@FAA, [email protected]\n> cc\n> Robert Haas <[email protected]>\n> Subject\n> Re: [PERFORM] LIMIT causes planner to do Index Scan using a less\n> optimal index\n>\n>\n>\n>\n> Actually, swapping the order of the conditions did in fact make some\n> difference, strange.\n>\n> I ran the query a couple of times for each variation to see if the\n> difference in speed was just a coincidence or a pattern. Looks like the\n> speed really is different.\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n> 108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1384.401..1384.402\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1384.399..1384.399\n> rows=1 loops=1)\n> Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n> Total runtime: 1384.431 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =\n> 'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1710.166..1710.167\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1710.164..1710.164\n> rows=1 loops=1)\n> Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n> Total runtime: 1710.200 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n> 108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1366.526..1366.527\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1366.525..1366.525\n> rows=1 loops=1)\n> Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n> Total runtime: 1366.552 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =\n> 'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> 
------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1685.395..1685.396\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1685.394..1685.394\n> rows=1 loops=1)\n> Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n> Total runtime: 1685.423 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n> 108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1403.904..1403.905\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1403.903..1403.903\n> rows=1 loops=1)\n> Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n> Total runtime: 1403.931 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =\n> 'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1689.014..1689.014\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1689.012..1689.012\n> rows=1 loops=1)\n> Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n> Total runtime: 1689.041 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =\n> 108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1378.322..1378.323\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1378.320..1378.320\n> rows=1 loops=1)\n> Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))\n> Total runtime: 1378.349 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =\n> 'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..116.02 rows=1 width=4) (actual time=1696.830..1696.831\n> rows=1 loops=1)\n> -> Index Scan using transactions_pkey on transactions\n> (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1696.828..1696.828\n> rows=1 loops=1)\n> Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))\n> Total runtime: 1696.858 ms\n> (4 rows)\n>\n>\n>\n> 2010/4/6 <*[email protected]* <[email protected]>>\n>\n> I mean the time you spent on prune which one is cheaper might be another\n> cost.\n> Thanks much!\n>\n> Xuefeng Zhu (Sherry)\n> Crown Consulting Inc. 
-- Oracle DBA\n> AIM Lab Data Team\n> (703) 925-3192\n>\n>\n> *Sherry CTR Zhu/AWA/CNTR/FAA*\n> AJR-32, Aeronautical Information Mgmt Group\n>\n> 04/06/2010 03:13 PM\n>\n> To\n> Robert Haas <*[email protected]* <[email protected]>>\n> cc\n> Joel Jacobson <*[email protected]* <[email protected]>>\n> Subject\n> Re: [PERFORM] LIMIT causes planner to do Index Scan using a less\n> optimal index*Link*<Notes:///852576860052CAFA/DABA975B9FB113EB852564B5001283EA/15C2483F84B5A6D0852576FD006911AC>\n>\n>\n>\n>\n>\n> Have you tried before?\n>\n> Thanks much!\n>\n> Xuefeng Zhu (Sherry)\n> Crown Consulting Inc. -- Oracle DBA\n> AIM Lab Data Team\n> (703) 925-3192\n>\n>\n> *Robert Haas <**[email protected]* <[email protected]>*>*\n>\n> 04/06/2010 03:07 PM\n>\n> To\n> Sherry CTR Zhu/AWA/CNTR/FAA@FAA\n> cc\n> Joel Jacobson <*[email protected]* <[email protected]>>\n> Subject\n> Re: [PERFORM] LIMIT causes planner to do Index Scan using a less\n> optimal index\n>\n>\n>\n>\n>\n>\n> On Tue, Apr 6, 2010 at 3:05 PM, <*[email protected]*<[email protected]>>\n> wrote:\n>\n> Just curious,\n>\n> Switch the where condition to try to make difference.\n>\n> how about change\n> ((accountid = 108) AND (currency = 'SEK'::bpchar))\n> to\n> ( (currency = 'SEK'::bpchar) AND (accountid = 108) ).\n>\n>\n> In earlier version of Oracle, this was common knowledge that optimizer took\n> the last condition index to use.\n>\n> Ignore me if you think this is no sence. I didn't have a time to read your\n> guys' all emails.\n>\n> PostgreSQL doesn't behave that way - it guesses which order will be\n> cheaper.\n>\n> ...Robert\n>\n>\n>\n>\n> --\n> Best regards,\n>\n> Joel Jacobson\n> Glue Finance\n>\n> E: *[email protected]* <[email protected]>\n> T: +46 70 360 38 01\n>\n> Postal address:\n> Glue Finance AB\n> Box 549\n> 114 11 Stockholm\n> Sweden\n>\n> Visiting address:\n> Glue Finance AB\n> Birger Jarlsgatan 14\n> 114 34 Stockholm\n> Sweden\n>\n>\n\n\n-- \nBest regards,\n\nJoel Jacobson\nGlue Finance\n\nE: [email protected]\nT: +46 70 360 38 01\n\nPostal address:\nGlue Finance AB\nBox 549\n114 11 Stockholm\nSweden\n\nVisiting address:\nGlue Finance AB\nBirger Jarlsgatan 14\n114 34 Stockholm\nSweden\n\nHi Xuefeng,You have misunderstood the problem.The index used in the query not containing the \"LIMIT 1\" part, is \"index_transactions_accountid_currency\", which is indeed a two column index.\nThe problem is this index is not used when using \"LIMIT 1\".\n\n2010/4/7 <[email protected]>\nGuys,\n\n Thanks for trying and opening\nyour mind. \n If you want to know how Oracle\naddressed this issue, here it is: index on two columns. I remember\nthat they told me in the training postgres has no this kind of index, can\nsomeone clarify?\n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nJoel Jacobson <[email protected]>\n\n04/06/2010 06:30 PM\n\n\n\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA, [email protected]\n\n\ncc\nRobert Haas <[email protected]>\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal\nindex\n\n\n\n\n\n\n\n\nActually, swapping the order of the conditions did in\nfact make some difference, strange.\n\nI ran the query a couple of times for each variation to\nsee if the difference in speed was just a coincidence or a pattern. 
Looks\nlike the speed really is different.\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1384.401..1384.402 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1384.399..1384.399 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1384.431 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1710.166..1710.167 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1710.164..1710.164 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1710.200 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1366.526..1366.527 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1366.525..1366.525 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1366.552 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1685.395..1685.396 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1685.394..1685.394 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1685.423 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1403.904..1403.905 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1403.903..1403.903 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1403.931 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n 
\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1689.014..1689.014 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1689.012..1689.012 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1689.041 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((AccountID = 108) AND (Currency = 'SEK')) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1378.322..1378.323 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1378.320..1378.320 rows=1 loops=1)\n Filter: ((accountid\n= 108) AND (currency = 'SEK'::bpchar))\n Total runtime: 1378.349 ms\n(4 rows)\n\nEXPLAIN ANALYZE SELECT TransactionID FROM Transactions\nWHERE ((Currency = 'SEK') AND (AccountID = 108)) ORDER BY TransactionID\nLIMIT 1;\n \n \n \n QUERY PLAN \n \n \n \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..116.02 rows=1 width=4) (actual\ntime=1696.830..1696.831 rows=1 loops=1)\n -> Index Scan using transactions_pkey\non transactions (cost=0.00..1260254.03 rows=10862 width=4) (actual\ntime=1696.828..1696.828 rows=1 loops=1)\n Filter: ((currency =\n'SEK'::bpchar) AND (accountid = 108))\n Total runtime: 1696.858 ms\n(4 rows)\n\n\n\n2010/4/6 <[email protected]>\n\nI mean the time you spent on prune which one is cheaper might be another\ncost. \nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nSherry CTR Zhu/AWA/CNTR/FAA\n\nAJR-32, Aeronautical Information Mgmt\nGroup \n04/06/2010 03:13 PM\n\n\n\n\n\n\nTo\nRobert Haas <[email protected]>\n\n\n\ncc\nJoel Jacobson <[email protected]>\n\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal indexLink\n\n\n\n\n\n\n\n\n\n\nHave you tried before? \n\nThanks much!\n\nXuefeng Zhu (Sherry)\nCrown Consulting Inc. -- Oracle DBA\nAIM Lab Data Team \n(703) 925-3192\n\n\n\n\n\n\nRobert Haas <[email protected]>\n\n04/06/2010 03:07 PM\n\n\n\n\n\n\nTo\nSherry CTR Zhu/AWA/CNTR/FAA@FAA\n\n\n\ncc\nJoel Jacobson <[email protected]>\n\n\n\nSubject\nRe: [PERFORM] LIMIT causes planner to\ndo Index Scan using a less optimal index\n\n\n\n\n\n\n\n\n\n\nOn Tue, Apr 6, 2010 at 3:05 PM, <[email protected]>\nwrote: \n\nJust curious, \n\nSwitch the where condition to try to make difference.\n\n\nhow about change \n((accountid = 108) AND (currency = 'SEK'::bpchar))\n\nto \n( (currency = 'SEK'::bpchar)\nAND (accountid = 108)\n). \n\n\nIn earlier version of Oracle, this was common knowledge that optimizer\ntook the last condition index to use. \n\nIgnore me if you think this is no sence. I didn't have a time to\nread your guys' all emails. 
\n\nPostgreSQL doesn't behave that way - it guesses which order will be cheaper.\n\n...Robert \n\n\n\n\n-- \nBest regards,\n\nJoel Jacobson\nGlue Finance\n\nE: [email protected]\nT: +46 70 360 38 01\n\nPostal address:\nGlue Finance AB\nBox 549\n114 11 Stockholm\nSweden\n\nVisiting address:\nGlue Finance AB\nBirger Jarlsgatan 14\n114 34 Stockholm\nSweden\n-- Best regards,Joel JacobsonGlue FinanceE: [email protected]: +46 70 360 38 01Postal address:\nGlue Finance ABBox 549114 11 StockholmSwedenVisiting address:Glue Finance ABBirger Jarlsgatan 14114 34 StockholmSweden",
"msg_date": "Wed, 7 Apr 2010 17:52:30 +0200",
"msg_from": "Joel Jacobson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT causes planner to do Index Scan using a less optimal index"
}
] |
[
{
"msg_contents": "Hi there!\n\nI have some mysterious slow downs with ORDER BY and LIMIT. When LIMIT\ngetting greater than some value (greater than 3 in my case), query\ntakes 4-5 secs instead of 0.25ms. All of the necessary indexes are in\nplace. I have no idea what to do, so any advices are welcome!\n\nHere my queries and explain analyzes;\n\nFirst Query with LIMIT 3 (fast)\n-------------\nexplain analyze SELECT core_object.id from \"core_object\" INNER JOIN\n\"plugins_plugin_addr\" ON (\"core_object\".\"id\" =\n\"plugins_plugin_addr\".\"oid_id\") INNER JOIN \"plugins_guide_address\" ON\n(\"plugins_plugin_addr\".\"address_id\" = \"plugins_guide_address\".\"id\")\nWHERE \"plugins_guide_address\".\"city_id\" = 4535 ORDER BY\n\"core_object\".\"id\" DESC LIMIT 3;\n\n Limit (cost=0.00..9.57 rows=3 width=4) (actual time=0.090..0.138\nrows=3 loops=1)\n -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) (actual\ntime=0.088..0.136 rows=3 loops=1)\n Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n -> Nested Loop (cost=0.00..972804.02 rows=344125 width=4)\n(actual time=0.056..0.095 rows=3 loops=1)\n -> Index Scan Backward using\nplugins_plugin_addr_oid_id on plugins_plugin_addr\n(cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..0.032\nrows=3 loops=1)\n -> Index Scan using plugins_guide_address_pkey on\nplugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual\ntime=0.017..0.018 rows=1 loops=3)\n Index Cond: (plugins_guide_address.id =\nplugins_plugin_addr.address_id)\n Filter: (plugins_guide_address.city_id = 4535)\n -> Index Scan using core_object_pkey_desc on core_object\n(cost=0.00..113516.08 rows=3091134 width=4) (actual time=0.026..0.028\nrows=3 loops=1)\n Total runtime: 0.244 ms\n(10 rows)\n\nSecond Query, the same, but with LIMIT 4 (slooooow)\n-------------\nexplain analyze SELECT core_object.id from \"core_object\" INNER JOIN\n\"plugins_plugin_addr\" ON (\"core_object\".\"id\" =\n\"plugins_plugin_addr\".\"oid_id\") INNER JOIN \"plugins_guide_address\" ON\n(\"plugins_plugin_addr\".\"address_id\" = \"plugins_guide_address\".\"id\")\nWHERE \"plugins_guide_address\".\"city_id\" = 4535 ORDER BY\n\"core_object\".\"id\" DESC LIMIT 4;\n\n Limit (cost=0.00..12.76 rows=4 width=4) (actual time=0.091..4436.795\nrows=4 loops=1)\n -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) (actual\ntime=0.089..4436.791 rows=4 loops=1)\n Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n -> Nested Loop (cost=0.00..972804.02 rows=344125 width=4)\n(actual time=0.056..3988.249 rows=4 loops=1)\n -> Index Scan Backward using\nplugins_plugin_addr_oid_id on plugins_plugin_addr\n(cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..329.942\nrows=1244476 loops=1)\n -> Index Scan using plugins_guide_address_pkey on\nplugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual\ntime=0.003..0.003 rows=0 loops=1244476)\n Index Cond: (plugins_guide_address.id =\nplugins_plugin_addr.address_id)\n Filter: (plugins_guide_address.city_id = 4535)\n -> Index Scan using core_object_pkey_desc on core_object\n(cost=0.00..113516.08 rows=3091134 width=4) (actual\ntime=0.027..284.195 rows=1244479 loops=1)\n Total runtime: 4436.894 ms\n(10 rows)\n",
"msg_date": "Tue, 6 Apr 2010 17:42:34 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": true,
"msg_subject": "significant slow down with various LIMIT"
},
{
"msg_contents": "norn <[email protected]> wrote:\n \n> I have some mysterious slow downs with ORDER BY and LIMIT. When\n> LIMIT getting greater than some value (greater than 3 in my case),\n> query takes 4-5 secs instead of 0.25ms. All of the necessary\n> indexes are in place. I have no idea what to do, so any advices\n> are welcome!\n \nCould you show us the output from \"select version();\", describe your\nhardware and OS, and show us the contents of your postgresql.conf\nfile (with all comments removed)? We can then give more concrete\nadvice than is possible with the information provided so far.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Thu, 08 Apr 2010 11:44:26 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "Kevin, thanks for your attention!\nI've read SlowQueryQuestions, but anyway can't find bottleneck...\n\nHere requested information:\nOS: Ubuntu 9.10 64bit, Postgresql 8.4.2 with Postgis\nHardware: AMD Phenom(tm) II X4 945, 8GB RAM, 2 SATA 750GB (pg db\ninstalled in software RAID 0)\nPlease also note that this hardware isn't dedicated DB server, but\nalso serve as web server and file server.\n\nI have about 3 million rows in core_object, 1.5 million in\nplugin_plugin_addr and 1.5 million in plugins_guide_address.\nWhen there were 300 000+ objects queries works perfectly, but as db\nenlarge things go worse...\n\n# select version();\nPostgreSQL 8.4.2 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real\n(Ubuntu 4.4.1-4ubuntu8) 4.4.1, 64-bit\n---postgresql.conf---\ndata_directory = '/mnt/fast/postgresql/8.4/main'\nhba_file = '/etc/postgresql/8.4/main/pg_hba.conf'\nident_file = '/etc/postgresql/8.4/main/pg_ident.conf'\nexternal_pid_file = '/var/run/postgresql/8.4-main.pid'\nlisten_addresses = 'localhost'\nport = 5432\nmax_connections = 250\nunix_socket_directory = '/var/run/postgresql'\nssl = true\nshared_buffers = 1024MB\ntemp_buffers = 16MB\nwork_mem = 128MB\nmaintenance_work_mem = 512MB\nfsync = off\nwal_buffers = 4MB\ncheckpoint_segments = 16\neffective_cache_size = 1536MB\nlog_min_duration_statement = 8000\nlog_line_prefix = '%t '\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\nstandard_conforming_strings = on\nescape_string_warning = off\nconstraint_exclusion = on\ncheckpoint_completion_target = 0.9\n---end postgresql.conf---\n\nI hope this help!\nAny ideas are appreciated!\n\n\nOn Apr 9, 12:44 am, [email protected] (\"Kevin Grittner\")\nwrote:\n>\n> Could you show us the output from \"select version();\", describe your\n> hardware and OS, and show us the contents of your postgresql.conf\n> file (with all comments removed)? We can then give more concrete\n> advice than is possible with the information provided so far.\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 8 Apr 2010 18:13:33 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "On Tue, Apr 6, 2010 at 8:42 PM, norn <[email protected]> wrote:\n> I have some mysterious slow downs with ORDER BY and LIMIT. When LIMIT\n> getting greater than some value (greater than 3 in my case), query\n> takes 4-5 secs instead of 0.25ms. All of the necessary indexes are in\n> place. I have no idea what to do, so any advices are welcome!\n>\n> Here my queries and explain analyzes;\n>\n> First Query with LIMIT 3 (fast)\n> -------------\n> explain analyze SELECT core_object.id from \"core_object\" INNER JOIN\n> \"plugins_plugin_addr\" ON (\"core_object\".\"id\" =\n> \"plugins_plugin_addr\".\"oid_id\") INNER JOIN \"plugins_guide_address\" ON\n> (\"plugins_plugin_addr\".\"address_id\" = \"plugins_guide_address\".\"id\")\n> WHERE \"plugins_guide_address\".\"city_id\" = 4535 ORDER BY\n> \"core_object\".\"id\" DESC LIMIT 3;\n>\n> Limit (cost=0.00..9.57 rows=3 width=4) (actual time=0.090..0.138\n> rows=3 loops=1)\n> -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) (actual\n> time=0.088..0.136 rows=3 loops=1)\n> Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n> -> Nested Loop (cost=0.00..972804.02 rows=344125 width=4)\n> (actual time=0.056..0.095 rows=3 loops=1)\n> -> Index Scan Backward using\n> plugins_plugin_addr_oid_id on plugins_plugin_addr\n> (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..0.032\n> rows=3 loops=1)\n> -> Index Scan using plugins_guide_address_pkey on\n> plugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual\n> time=0.017..0.018 rows=1 loops=3)\n> Index Cond: (plugins_guide_address.id =\n> plugins_plugin_addr.address_id)\n> Filter: (plugins_guide_address.city_id = 4535)\n> -> Index Scan using core_object_pkey_desc on core_object\n> (cost=0.00..113516.08 rows=3091134 width=4) (actual time=0.026..0.028\n> rows=3 loops=1)\n> Total runtime: 0.244 ms\n> (10 rows)\n>\n> Second Query, the same, but with LIMIT 4 (slooooow)\n> -------------\n> explain analyze SELECT core_object.id from \"core_object\" INNER JOIN\n> \"plugins_plugin_addr\" ON (\"core_object\".\"id\" =\n> \"plugins_plugin_addr\".\"oid_id\") INNER JOIN \"plugins_guide_address\" ON\n> (\"plugins_plugin_addr\".\"address_id\" = \"plugins_guide_address\".\"id\")\n> WHERE \"plugins_guide_address\".\"city_id\" = 4535 ORDER BY\n> \"core_object\".\"id\" DESC LIMIT 4;\n>\n> Limit (cost=0.00..12.76 rows=4 width=4) (actual time=0.091..4436.795\n> rows=4 loops=1)\n> -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) (actual\n> time=0.089..4436.791 rows=4 loops=1)\n> Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n> -> Nested Loop (cost=0.00..972804.02 rows=344125 width=4)\n> (actual time=0.056..3988.249 rows=4 loops=1)\n> -> Index Scan Backward using\n> plugins_plugin_addr_oid_id on plugins_plugin_addr\n> (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..329.942\n> rows=1244476 loops=1)\n> -> Index Scan using plugins_guide_address_pkey on\n> plugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual\n> time=0.003..0.003 rows=0 loops=1244476)\n> Index Cond: (plugins_guide_address.id =\n> plugins_plugin_addr.address_id)\n> Filter: (plugins_guide_address.city_id = 4535)\n> -> Index Scan using core_object_pkey_desc on core_object\n> (cost=0.00..113516.08 rows=3091134 width=4) (actual\n> time=0.027..284.195 rows=1244479 loops=1)\n> Total runtime: 4436.894 ms\n> (10 rows)\n\nWhat do you get with no LIMIT at all?\n\n...Robert\n",
"msg_date": "Fri, 9 Apr 2010 18:48:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "On Apr 10, 6:48 am, [email protected] (Robert Haas) wrote:\n> On Tue, Apr 6, 2010 at 8:42 PM, norn <[email protected]> wrote:\n> > I have some mysterious slow downs with ORDER BY and LIMIT. When LIMIT\n> > getting greater than some value (greater than 3 in my case), query\n> > takes 4-5 secs instead of 0.25ms. All of the necessary indexes are in\n> > place. I have no idea what to do, so any advices are welcome!\n>\n> > Here my queries and explain analyzes;\n>\n> > First Query with LIMIT 3 (fast)\n> > -------------\n> > explain analyze SELECT core_object.id from \"core_object\" INNER JOIN\n> > \"plugins_plugin_addr\" ON (\"core_object\".\"id\" =\n> > \"plugins_plugin_addr\".\"oid_id\") INNER JOIN \"plugins_guide_address\" ON\n> > (\"plugins_plugin_addr\".\"address_id\" = \"plugins_guide_address\".\"id\")\n> > WHERE \"plugins_guide_address\".\"city_id\" = 4535 ORDER BY\n> > \"core_object\".\"id\" DESC LIMIT 3;\n>\n> > Limit (cost=0.00..9.57 rows=3 width=4) (actual time=0.090..0.138\n> > rows=3 loops=1)\n> > -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) (actual\n> > time=0.088..0.136 rows=3 loops=1)\n> > Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n> > -> Nested Loop (cost=0.00..972804.02 rows=344125 width=4)\n> > (actual time=0.056..0.095 rows=3 loops=1)\n> > -> Index Scan Backward using\n> > plugins_plugin_addr_oid_id on plugins_plugin_addr\n> > (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..0.032\n> > rows=3 loops=1)\n> > -> Index Scan using plugins_guide_address_pkey on\n> > plugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual\n> > time=0.017..0.018 rows=1 loops=3)\n> > Index Cond: (plugins_guide_address.id =\n> > plugins_plugin_addr.address_id)\n> > Filter: (plugins_guide_address.city_id = 4535)\n> > -> Index Scan using core_object_pkey_desc on core_object\n> > (cost=0.00..113516.08 rows=3091134 width=4) (actual time=0.026..0.028\n> > rows=3 loops=1)\n> > Total runtime: 0.244 ms\n> > (10 rows)\n>\n> > Second Query, the same, but with LIMIT 4 (slooooow)\n> > -------------\n> > explain analyze SELECT core_object.id from \"core_object\" INNER JOIN\n> > \"plugins_plugin_addr\" ON (\"core_object\".\"id\" =\n> > \"plugins_plugin_addr\".\"oid_id\") INNER JOIN \"plugins_guide_address\" ON\n> > (\"plugins_plugin_addr\".\"address_id\" = \"plugins_guide_address\".\"id\")\n> > WHERE \"plugins_guide_address\".\"city_id\" = 4535 ORDER BY\n> > \"core_object\".\"id\" DESC LIMIT 4;\n>\n> > Limit (cost=0.00..12.76 rows=4 width=4) (actual time=0.091..4436.795\n> > rows=4 loops=1)\n> > -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) (actual\n> > time=0.089..4436.791 rows=4 loops=1)\n> > Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n> > -> Nested Loop (cost=0.00..972804.02 rows=344125 width=4)\n> > (actual time=0.056..3988.249 rows=4 loops=1)\n> > -> Index Scan Backward using\n> > plugins_plugin_addr_oid_id on plugins_plugin_addr\n> > (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..329.942\n> > rows=1244476 loops=1)\n> > -> Index Scan using plugins_guide_address_pkey on\n> > plugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual\n> > time=0.003..0.003 rows=0 loops=1244476)\n> > Index Cond: (plugins_guide_address.id =\n> > plugins_plugin_addr.address_id)\n> > Filter: (plugins_guide_address.city_id = 4535)\n> > -> Index Scan using core_object_pkey_desc on core_object\n> > (cost=0.00..113516.08 rows=3091134 width=4) (actual\n> > time=0.027..284.195 rows=1244479 loops=1)\n> > Total runtime: 4436.894 
ms\n> > (10 rows)\n>\n> What do you get with no LIMIT at all?\n>\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\nWithout LIMIT the query takes 5-6 seconds, but I only need the last\ncouple of rows, at a cost of around 200-300 ms.\n",
"msg_date": "Fri, 9 Apr 2010 21:41:55 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "1 ) Limit (cost=0.00..9.57 rows=3 width=4) (actual time=*0.090..0.138* \nrows=3 loops=1)\n2 ) Limit (cost=0.00..12.76 rows=4 width=4) (actual \ntime=*0.091..4436.795* rows=4 loops=1)\n1 ) -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) \n(actual time=*0.088..0.136* rows=*3* loops=1)\n2 ) -> Merge Join (cost=0.00..1098182.56 rows=344125 width=4) \n(actual time=*0.089..4436.791* rows=*4* loops=1)\n1 ) Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n2 ) Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n1 ) -> Nested Loop (cost=0.00..972804.02 rows=344125 \nwidth=4) (actual time=*0.056..0.095* rows=*3* loops=1)\n2 ) -> Nested Loop (cost=0.00..972804.02 rows=344125 \nwidth=4) (actual time=*0.056..3988.249* rows=*4* loops=1)\n\n###################################################################################################################################################################################################\n1 ) -> Index Scan Backward using \nplugins_plugin_addr_oid_id on plugins_plugin_addr (cost=0.00..52043.06 \nrows=1621103 width=8) (actual time=_*0.027..0.032*_ rows=*3* loops=1)\n2 ) -> Index Scan Backward using \nplugins_plugin_addr_oid_id on plugins_plugin_addr (cost=0.00..52043.06 \nrows=1621103 width=8) (actual time=_*0.027..329.942*_ rows=*1244476* \nloops=1)\n\n1 ) -> Index Scan using plugins_guide_address_pkey \non plugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual \ntime=_*0.017..0.018*_ rows=*1* loops=*3*)\n2 ) -> Index Scan using plugins_guide_address_pkey \non plugins_guide_address (cost=0.00..0.56 rows=1 width=4) (actual \ntime=_*0.003..0.003*_ rows=*0* loops=*1244476*)\n###################################################################################################################################################################################################\n - I am not an expert in the matter but in the first query it took only \n3 loops to find 1 row and in the second it looped 1244476 times to find \nno row at all. Is it possible that there is no other row in the table \nthat match the data you are trying to retrieve?\n - Have you tried to recreate the index of the table? It could be that \nits damaged in some way that postgres can not use the index and its \nmaking a full search in the table. 
Again, it's just a wild guess.\n\n1 ) Index Cond: (plugins_guide_address.id = \nplugins_plugin_addr.address_id)\n2 ) Index Cond: (plugins_guide_address.id = \nplugins_plugin_addr.address_id)\n1 ) Filter: (plugins_guide_address.city_id = \n4535)\n2 ) Filter: (plugins_guide_address.city_id = \n4535)\n1 ) -> Index Scan using core_object_pkey_desc on \ncore_object (cost=0.00..113516.08 rows=3091134 width=4) (actual \ntime=*0.026..0.028* rows=*3* loops=1)\n2 ) -> Index Scan using core_object_pkey_desc on \ncore_object (cost=0.00..113516.08 rows=3091134 width=4) (actual \ntime=*0.027..284.195* rows=*1244479* loops=1)\n1 ) Total runtime: 0.244 ms\n2 ) Total runtime: 4436.894 ms\n\nRegards...\n\n--\nHelio Campos Mello de Andrade",
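For anyone who wants to try the reindex suggestion above, a minimal sketch follows; the index and table names are taken from the plans quoted in this thread, and the commands are standard SQL. Note that REINDEX takes an exclusive lock on the table while it runs, and an ANALYZE afterwards refreshes the planner statistics:

REINDEX INDEX plugins_plugin_addr_oid_id;
-- or rebuild every index on the table:
REINDEX TABLE plugins_plugin_addr;
ANALYZE plugins_plugin_addr;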
"msg_date": "Sun, 11 Apr 2010 23:43:57 -0300",
"msg_from": "Helio Campos Mello de Andrade <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
}
] |
[
{
"msg_contents": "Hi there,\n\nI have a function which returns setof record based on a specific query.\nI try to check the execution plan of that query, so I write EXPLAIN ANALYZE \nbefore my select, I call the function and I see the result which shows an \nactual time about 5 seconds. But when I call my function after I remove \nEXPLAIN ANALYZE it takes more than 300 seconds, and I cancel it.\n\nWhat's happen, or how can I see the function execution plan to optimize it ?\n\nTIA,\nSabin \n\n\n",
"msg_date": "Wed, 7 Apr 2010 15:47:54 +0300",
"msg_from": "\"Sabin Coanda\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How check execution plan of a function"
},
{
"msg_contents": "Maybe other details about the source of the problem help.\n\nThe problem occured when I tried to optimize the specified function. It was \nrunning in about 3 seconds, and I needed to be faster. I make some changes \nand I run the well known \"CREATE OR REPLACE FUNCTION ...\" After that, my \nfunction execution took so much time that I had to cancel it. I restored the \nprevious function body, but the problem persisted. I tried to drop it and \ncreate again, but no chance to restore the original performance. So I was \nforced to restore the database from backup.\n\nAlso I found the problem is reproductible, so a change of function will \ndamage its performance.\n\nCan anyone explain what is happen ? How can I found the problem ?\n\nTIA,\nSabin \n\n\n",
"msg_date": "Thu, 8 Apr 2010 09:29:07 +0300",
"msg_from": "\"Sabin Coanda\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How check execution plan of a function"
},
{
"msg_contents": "On Wed Apr 7 2010 7:47 AM, Sabin Coanda wrote:\n> Hi there,\n>\n> I have a function which returns setof record based on a specific query.\n> I try to check the execution plan of that query, so I write EXPLAIN ANALYZE\n> before my select, I call the function and I see the result which shows an\n> actual time about 5 seconds. But when I call my function after I remove\n> EXPLAIN ANALYZE it takes more than 300 seconds, and I cancel it.\n>\n> What's happen, or how can I see the function execution plan to optimize it ?\n>\n> TIA,\n> Sabin\n>\n>\n\nI ran into the same problems, what I did was enable the logging in \npostgresql.conf. I dont recall exactly what I enabled, but it was \nsomething like:\n\ntrack_functions = pl\n\nlog_statement_stats = on\nlog_duration = on\n\nThen in the serverlog you can see each statement, and how long it took. \n Once I found a statement that was slow I used explain analyze on just \nit so I could optimize that one statement.\n\n-Andy\n\n",
"msg_date": "Thu, 08 Apr 2010 10:44:09 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How check execution plan of a function"
},
{
"msg_contents": "I have just a function returning a cursor based on a single coplex query. \nWhen I check the execution plan of that query it takes about 3 seconds. Just \nwhen it is used inside the function it freezes.\n\nThis is the problem, and this is the reason I cannot imagine what is happen. \nAlso I tried to recreate the function as it was before when it run in 3 \nseconds, but I cannot make it to run properly now. \n\n\n",
"msg_date": "Fri, 9 Apr 2010 16:18:25 +0300",
"msg_from": "\"Sabin Coanda\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How check execution plan of a function"
},
{
"msg_contents": "On Fri Apr 9 2010 8:18 AM, Sabin Coanda wrote:\n> I have just a function returning a cursor based on a single coplex query.\n> When I check the execution plan of that query it takes about 3 seconds. Just\n> when it is used inside the function it freezes.\n>\n> This is the problem, and this is the reason I cannot imagine what is happen.\n> Also I tried to recreate the function as it was before when it run in 3\n> seconds, but I cannot make it to run properly now.\n>\n>\n\na query, like: \"select stuff from aTable where akey = 5\" can be \nplanned/prepared differently than a function containing: \"select stuff \nfrom aTable where akey = $1\". I'm guessing this is the problem you are \nrunning into. The planner has no information about $1, so cannot make \ngood guesses.\n\nI think you have two options:\n1) dont use a function, just fire off the sql.\n2) inside the function, create the query as a string, then execute it, like:\n\na := \"select junk from aTable where akey = 5\";\nEXECUE a;\n\n(I dont think that's the exact right syntax, but hopefully gets the idea \nacross)\n\n-Andy\n\n",
"msg_date": "Fri, 09 Apr 2010 08:52:00 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How check execution plan of a function"
},
{
"msg_contents": "\"Sabin Coanda\" <[email protected]> wrote:\n \n> I have just a function returning a cursor based on a single coplex\n> query. When I check the execution plan of that query it takes\n> about 3 seconds. Just when it is used inside the function it\n> freezes.\n> \n> This is the problem, and this is the reason I cannot imagine what\n> is happen. \n> \n> Also I tried to recreate the function as it was before when it run\n> in 3 seconds, but I cannot make it to run properly now. \n \nYou've given three somewhat confusing and apparently contradictory\ndescriptions of the problem or problems you've had with slow queries\n-- all on one thread. You would probably have better luck if you\nstart with one particular issue and provided more of the information\nsuggested here:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nIf we can solve one problem, perhaps the resolution to the others\nwill become apparent; otherwise follow up with the next.\n \n-Kevin\n",
"msg_date": "Mon, 12 Apr 2010 08:50:04 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How check execution plan of a function"
}
] |
[
{
"msg_contents": "Hi,\nI am using zabbix monitoring software. The backbone database for\nzabbix is postgresql 8.1 installed od linux.\n\nDatabase server has 3GB of RAM, 1 CPU Dual Core and 2 SAS disks in RAID 1.\n\nZabbix makes a lot of inserts and updates on database. The problem is\nthat when autovaccum starts the database freezes.\nI am trying to make better performance, I have read a lot of documents\nand sites about performance tunning but still no luck.\n\nMy current database variables:\n\n add_missing_from | off\n | Automatically adds missing table references to FROM\n clauses.\n archive_command | unset\n | WAL archiving command.\n australian_timezones | off\n | Interprets ACST, CST, EST, and SAT as Australian ti\nme zones.\n authentication_timeout | 60\n | Sets the maximum time in seconds to complete client\n authentication.\n autovacuum | on\n | Starts the autovacuum subprocess.\n autovacuum_analyze_scale_factor | 0.1\n | Number of tuple inserts, updates or deletes prior t\no analyze as a fraction of reltuples.\n autovacuum_analyze_threshold | 5000\n | Minimum number of tuple inserts, updates or deletes\n prior to analyze.\n autovacuum_naptime | 60\n | Time to sleep between autovacuum runs, in seconds.\n autovacuum_vacuum_cost_delay | -1\n | Vacuum cost delay in milliseconds, for autovacuum.\n autovacuum_vacuum_cost_limit | -1\n | Vacuum cost amount available before napping, for au\ntovacuum.\n autovacuum_vacuum_scale_factor | 0.2\n | Number of tuple updates or deletes prior to vacuum\nas a fraction of reltuples.\n autovacuum_vacuum_threshold | 100000\n | Minimum number of tuple updates or deletes prior to\n vacuum.\n backslash_quote | safe_encoding\n | Sets whether \"\\'\" is allowed in string literals.\n bgwriter_all_maxpages | 5\n | Background writer maximum number of all pages to fl\nush per round\n bgwriter_all_percent | 0.333\n | Background writer percentage of all buffers to flus\nh per round\n bgwriter_delay | 200\n | Background writer sleep time between rounds in mill\niseconds\n bgwriter_lru_maxpages | 5\n | Background writer maximum number of LRU pages to fl\nush per round\n bgwriter_lru_percent | 1\n | Background writer percentage of LRU buffers to flus\nh per round\n block_size | 8192\n | Shows size of a disk block\n bonjour_name | unset\n | Sets the Bonjour broadcast service name.\n check_function_bodies | on\n | Check function bodies during CREATE FUNCTION.\n checkpoint_segments | 32\n | Sets the maximum distance in log segments between a\nutomatic WAL checkpoints.\n checkpoint_timeout | 300\n | Sets the maximum time in seconds between automatic\nWAL checkpoints.\n checkpoint_warning | 30\n | Logs if filling of checkpoint segments happens more\n frequently than this (in seconds).\n client_encoding | UTF8\n | Sets the client's character set encoding.\n client_min_messages | notice\n | Sets the message levels that are sent to the client\n.\n commit_delay | 0\n | Sets the delay in microseconds between transaction\ncommit and flushing WAL to disk.\n commit_siblings | 5\n | Sets the minimum concurrent open transactions befor\ne performing commit_delay.\n config_file | /var/lib/pgsql/data/postgresql.conf\n | Sets the server's main configuration file.\n constraint_exclusion | off\n | Enables the planner to use constraints to optimize\nqueries.\n cpu_index_tuple_cost | 0.001\n | Sets the planner's estimate of processing cost for\neach index tuple (row) during index scan.\n cpu_operator_cost | 0.0025\n | Sets the planner's estimate of processing cost of e\nach operator in 
WHERE.\n cpu_tuple_cost | 0.01\n | Sets the planner's estimate of the cost of processi\nng each tuple (row).\n custom_variable_classes | unset\n | Sets the list of known custom variable classes.\n data_directory | /var/lib/pgsql/data\n | Sets the server's data directory.\n DateStyle | ISO, MDY\n | Sets the display format for date and time values.\n db_user_namespace | off\n | Enables per-database user names.\n deadlock_timeout | 1000\n | The time in milliseconds to wait on lock before che\ncking for deadlock.\n debug_pretty_print | off\n | Indents parse and plan tree displays.\n debug_print_parse | off\n | Prints the parse tree to the server log.\n debug_print_plan | off\n | Prints the execution plan to server log.\n debug_print_rewritten | off\n | Prints the parse tree after rewriting to server log\n.\n default_statistics_target | 100\n | Sets the default statistics target.\n default_tablespace | unset\n | Sets the default tablespace to create tables and in\ndexes in.\n default_transaction_isolation | read committed\n | Sets the transaction isolation level of each new tr\nansaction.\n default_transaction_read_only | off\n | Sets the default read-only status of new transactio\nns.\n default_with_oids | off\n | Create new tables with OIDs by default.\n dynamic_library_path | $libdir\n | Sets the path for dynamically loadable modules.\n effective_cache_size | 190000\n | Sets the planner's assumption about size of the dis\nk cache.\n enable_bitmapscan | on\n | Enables the planner's use of bitmap-scan plans.\n enable_hashagg | on\n | Enables the planner's use of hashed aggregation pla\nns.\n enable_hashjoin | on\n | Enables the planner's use of hash join plans.\n enable_indexscan | on\n | Enables the planner's use of index-scan plans.\n enable_mergejoin | on\n | Enables the planner's use of merge join plans.\n enable_nestloop | on\n | Enables the planner's use of nested-loop join plans\n.\n enable_seqscan | on\n | Enables the planner's use of sequential-scan plans.\n enable_sort | on\n | Enables the planner's use of explicit sort steps.\n enable_tidscan | on\n | Enables the planner's use of TID scan plans.\n escape_string_warning | off\n | Warn about backslash escapes in ordinary string lit\nerals.\n explain_pretty_print | on\n | Uses the indented output format for EXPLAIN VERBOSE\n.\n external_pid_file | unset\n | Writes the postmaster PID to the specified file.\n extra_float_digits | 0\n | Sets the number of digits displayed for floating-po\nint values.\n from_collapse_limit | 8\n | Sets the FROM-list size beyond which subqueries are\n not collapsed.\n fsync | on\n | Forces synchronization of updates to disk.\n full_page_writes | on\n | Writes full pages to WAL when first modified after\na checkpoint.\n geqo | on\n | Enables genetic query optimization.\n geqo_effort | 5\n | GEQO: effort is used to set the default for other G\nEQO parameters.\n geqo_generations | 0\n | GEQO: number of iterations of the algorithm.\n geqo_pool_size | 0\n | GEQO: number of individuals in the population.\n geqo_selection_bias | 2\n | GEQO: selective pressure within the population.\n geqo_threshold | 12\n | Sets the threshold of FROM items beyond which GEQO\nis used.\n hba_file | /var/lib/pgsql/data/pg_hba.conf\n | Sets the server's \"hba\" configuration file\n ident_file | /var/lib/pgsql/data/pg_ident.conf\n | Sets the server's \"ident\" configuration file\n integer_datetimes | off\n | Datetimes are integer based.\n join_collapse_limit | 8\n | Sets the FROM-list size beyond which JOIN construct\ns are not 
flattened.\n krb_caseins_users | off\n | Sets whether Kerberos user names should be treated\nas case-insensitive.\n krb_server_hostname | unset\n | Sets the hostname of the Kerberos server.\n krb_server_keyfile |\nFILE:/etc/sysconfig/pgsql/krb5.keytab | Sets the location of the\nKerberos server key file.\n krb_srvname | postgres\n | Sets the name of the Kerberos service.\n lc_collate | pl_PL.UTF-8\n | Shows the collation order locale.\n lc_ctype | pl_PL.UTF-8\n | Shows the character classification and case convers\nion locale.\n lc_messages | pl_PL.UTF-8\n | Sets the language in which messages are displayed.\n lc_monetary | pl_PL.UTF-8\n | Sets the locale for formatting monetary amounts.\n lc_numeric | pl_PL.UTF-8\n | Sets the locale for formatting numbers.\n lc_time | pl_PL.UTF-8\n | Sets the locale for formatting date and time values\n.\n listen_addresses | *\n | Sets the host name or IP address(es) to listen to.\n log_connections | off\n | Logs each successful connection.\n log_destination | stderr\n | Sets the destination for server log output.\n log_directory | pg_log\n | Sets the destination directory for log files.\n log_disconnections | off\n | Logs end of a session, including duration.\n log_duration | off\n | Logs the duration of each completed SQL statement.\n log_error_verbosity | default\n | Sets the verbosity of logged messages.\n log_executor_stats | off\n | Writes executor performance statistics to the serve\nr log.\n log_filename | postgresql-%a.log\n | Sets the file name pattern for log files.\n log_hostname | off\n | Logs the host name in the connection logs.\n log_line_prefix | unset\n | Controls information prefixed to each log line\n log_min_duration_statement | -1\n | Sets the minimum execution time in milliseconds abo\nve which statements will be logged.\n log_min_error_statement | panic\n | Causes all statements generating error at or above\nthis level to be logged.\n log_min_messages | notice\n | Sets the message levels that are logged.\n log_parser_stats | off\n | Writes parser performance statistics to the server\nlog.\n log_planner_stats | off\n | Writes planner performance statistics to the server\n log.\n log_rotation_age | 1440\n | Automatic log file rotation will occur after N minu\ntes\n log_rotation_size | 0\n | Automatic log file rotation will occur after N kilo\nbytes\n log_statement | none\n | Sets the type of statements logged.\n log_statement_stats | off\n | Writes cumulative performance statistics to the ser\nver log.\n log_truncate_on_rotation | on\n | Truncate existing log files of same name during log\n rotation.\n maintenance_work_mem | 256000\n | Sets the maximum memory to be used for maintenance\noperations.\n max_connections | 400\n | Sets the maximum number of concurrent connections.\n max_files_per_process | 1000\n | Sets the maximum number of simultaneously open file\ns for each server process.\n max_fsm_pages | 1000000\n | Sets the maximum number of disk pages for which fre\ne space is tracked.\n max_fsm_relations | 1000\n | Sets the maximum number of tables and indexes for w\nhich free space is tracked.\n max_function_args | 100\n | Shows the maximum number of function arguments.\n max_identifier_length | 63\n | Shows the maximum identifier length\n max_index_keys | 32\n | Shows the maximum number of index keys.\n max_locks_per_transaction | 64\n | Sets the maximum number of locks per transaction.\n max_prepared_transactions | 100\n | Sets the maximum number of simultaneously prepared\ntransactions.\n max_stack_depth | 10240\n | Sets the 
maximum stack depth, in kilobytes.\n password_encryption | off\n | Encrypt passwords.\n port | 5432\n | Sets the TCP port the server listens on.\n pre_auth_delay | 0\n | no description available\n preload_libraries | unset\n | Lists shared libraries to preload into server.\n random_page_cost | 3\n | Sets the planner's estimate of the cost of a nonseq\nuentially fetched disk page.\n redirect_stderr | on\n | Start a subprocess to capture stderr output into lo\ng files.\n regex_flavor | advanced\n | Sets the regular expression \"flavor\".\n search_path | $user,public\n | Sets the schema search order for names that are not\n schema-qualified.\n server_encoding | UTF8\n | Sets the server (database) character set encoding.\n server_version | 8.1.11\n | Shows the server version.\n shared_buffers | 95000\n | Sets the number of shared memory buffers used by th\ne server.\n silent_mode | off\n | Runs the server silently.\n sql_inheritance | on\n | Causes subtables to be included by default in vario\nus commands.\n ssl | off\n | Enables SSL connections.\n standard_conforming_strings | off\n | '...' strings treat backslashes literally.\n statement_timeout | 0\n | Sets the maximum allowed duration (in milliseconds)\n of any statement.\n stats_block_level | on\n | Collects block-level statistics on database activit\ny.\n stats_command_string | on\n | Collects statistics about executing commands.\n stats_reset_on_server_start | off\n | Zeroes collected statistics on server restart.\n stats_row_level | on\n | Collects row-level statistics on database activity.\n stats_start_collector | on\n | Starts the server statistics-collection subprocess.\n superuser_reserved_connections | 2\n | Sets the number of connection slots reserved for su\nperusers.\n syslog_facility | LOCAL0\n | Sets the syslog \"facility\" to be used when syslog e\nnabled.\n syslog_ident | postgres\n | Sets the program name used to identify PostgreSQL m\nessages in syslog.\n tcp_keepalives_count | 0\n | Maximum number of TCP keepalive retransmits.\n tcp_keepalives_idle | 0\n | Seconds between issuing TCP keepalives.\n tcp_keepalives_interval | 0\n | Seconds between TCP keepalive retransmits.\n temp_buffers | 1000\n | Sets the maximum number of temporary buffers used b\ny each session.\n TimeZone | Poland\n | Sets the time zone for displaying and interpreting\ntime stamps.\n trace_notify | off\n | Generates debugging output for LISTEN and NOTIFY.\n trace_sort | off\n | Emit information about resource usage in sorting.\n transaction_isolation | read committed\n | Sets the current transaction's isolation level.\n transaction_read_only | off\n | Sets the current transaction's read-only status.\n transform_null_equals | off\n | Treats \"expr=NULL\" as \"expr IS NULL\".\n unix_socket_directory | unset\n | Sets the directory where the Unix-domain socket wil\nl be created.\n unix_socket_group | unset\n | Sets the owning group of the Unix-domain socket.\n unix_socket_permissions | 511\n | Sets the access permissions of the Unix-domain sock\net.\n vacuum_cost_delay | 10\n | Vacuum cost delay in milliseconds.\n vacuum_cost_limit | 200\n | Vacuum cost amount available before napping.\n vacuum_cost_page_dirty | 20\n | Vacuum cost for a page dirtied by vacuum.\n vacuum_cost_page_hit | 1\n | Vacuum cost for a page found in the buffer cache.\n vacuum_cost_page_miss | 10\n | Vacuum cost for a page not found in the buffer cach\ne.\n wal_buffers | 2000\n | Sets the number of disk-page buffers in shared memo\nry for WAL.\n wal_sync_method | fdatasync\n | 
Selects the method used for forcing WAL updates out\n to disk.\n work_mem | 1600000\n | Sets the maximum memory to be used for query worksp\naces.\n zero_damaged_pages | off\n | Continues processing past damaged page headers.\n(163 rows)\n\nI would be very grateful for any help.\n\nGreetings for all.\n",
"msg_date": "Thu, 8 Apr 2010 11:23:17 +0200",
"msg_from": "Krzysztof Kardas <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "starting with 8.3, there's this new feature called HOT, which helps a lot\nwhen you do loads of updates.\nPlus writer is much quicker (30-40% sometimes), and autovacuum behaves much\nnicer.\nBottom line, upgrade to 8.3, 8.1 had autovacuum disabled by default for a\nreason.\n\nstarting with 8.3, there's this new feature called HOT, which helps a lot when you do loads of updates. Plus writer is much quicker (30-40% sometimes), and autovacuum behaves much nicer. Bottom line, upgrade to 8.3, 8.1 had autovacuum disabled by default for a reason.",
"msg_date": "Thu, 8 Apr 2010 10:31:34 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "2010/4/8 Grzegorz Jaśkiewicz <[email protected]>:\n> starting with 8.3, there's this new feature called HOT, which helps a lot\n> when you do loads of updates.\n> Plus writer is much quicker (30-40% sometimes), and autovacuum behaves much\n> nicer.\n> Bottom line, upgrade to 8.3, 8.1 had autovacuum disabled by default for a\n> reason.\n\npostgresql 8.2: autovacuum enabled by default\npostgresql 8.3: HOT (reduces update penalty -- zabbix does a lot of updates)\n\nprevious to 8.2, to get good performance on zabbix you need to\naggressively vacuum the heavily updated tables yourself.\n\nmerlin\n",
"msg_date": "Thu, 8 Apr 2010 09:16:02 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "2010/4/8 Merlin Moncure <[email protected]>:\n> previous to 8.2, to get good performance on zabbix you need to\n> aggressively vacuum the heavily updated tables yourself.\n\nGenerally if you DON'T vacuum aggressively enough, then vacuums will\ntake a really long and painful amount of time, perhaps accounting for\nthe \"hang\" the OP observed. There's really no help for it but to\nsweat it out once, and then do it frequently enough afterward that it\ndoesn't become a problem.\n\n...Robert\n",
"msg_date": "Thu, 8 Apr 2010 15:44:21 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "Kind of off-topic, but I've found that putting the history table on a separate spindle (using a separate tablespace) also helps improve performance.\n\n--Richard\n\n\n\nOn Apr 8, 2010, at 12:44 PM, Robert Haas wrote:\n\n> 2010/4/8 Merlin Moncure <[email protected]>:\n>> previous to 8.2, to get good performance on zabbix you need to\n>> aggressively vacuum the heavily updated tables yourself.\n> \n> Generally if you DON'T vacuum aggressively enough, then vacuums will\n> take a really long and painful amount of time, perhaps accounting for\n> the \"hang\" the OP observed. There's really no help for it but to\n> sweat it out once, and then do it frequently enough afterward that it\n> doesn't become a problem.\n> \n> ...Robert\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 8 Apr 2010 13:08:01 -0700",
"msg_from": "Richard Yen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "Krzysztof Kardas wrote:\n> My current database variables:\n> \n\nThat is way too much stuff to sort through. Try this instead, to only \nget the values you've set to something rather than every single one:\n\nselect name,unit,current_setting(name) from pg_settings where \nsource='configuration file' ;\n\nAlso, a snapshot of output from \"vmstat 1\" during some period when the \nserver is performing badly would be very helpful to narrow down what's \ngoing on.\n\nThe easy answer to your question is simply that autovacuum is terrible \non PG 8.1. You can tweak it to do better, but that topic isn't covered \nvery well in the sort of tuning guides you'll find floating around. \nThis is because most of the people who care about this sort of issue \nhave simply upgraded to a later version where autovacuum is much better.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 08 Apr 2010 22:31:40 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "Merlin Moncure wrote:\n> postgresql 8.2: autovacuum enabled by default\n> postgresql 8.3: HOT (reduces update penalty -- zabbix does a lot of updates)\n> \n\nautovacuum wasn't enabled by default until 8.3. It didn't really work \nall that well out of the box until the support for multiple workers was \nadded in that version, along with some tweaking to its default \nparameters. There's also a lot more logging information available, both \nthe server logs and the statistics tables, to watch what it's doing that \nwere added in 8.3.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 09 Apr 2010 00:16:16 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "2010/4/9 Greg Smith <[email protected]>:\n> Merlin Moncure wrote:\n>>\n>> postgresql 8.2: autovacuum enabled by default\n>> postgresql 8.3: HOT (reduces update penalty -- zabbix does a lot of\n>> updates)\n>>\n>\n> autovacuum wasn't enabled by default until 8.3. It didn't really work all\n> that well out of the box until the support for multiple workers was added in\n> that version, along with some tweaking to its default parameters. There's\n> also a lot more logging information available, both the server logs and the\n> statistics tables, to watch what it's doing that were added in 8.3.\n\nyou're right! iirc it was changed at the last minute...\n\nmerlin\n",
"msg_date": "Fri, 9 Apr 2010 11:06:07 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "The OP is using:\n\nautovacuum_vacuum_threshold | 100000\n\nThat means that vacuum won't consider a table to be 'vacuum-able' until\nafter 100k changes.... that's nowhere near aggressive enough. Probably\nwhat's happening is that when autovacuum finally DOES start on a table, it\njust takes forever.\n\n--Scott\n\n\n\n2010/4/9 Merlin Moncure <[email protected]>\n\n> 2010/4/9 Greg Smith <[email protected]>:\n> > Merlin Moncure wrote:\n> >>\n> >> postgresql 8.2: autovacuum enabled by default\n> >> postgresql 8.3: HOT (reduces update penalty -- zabbix does a lot of\n> >> updates)\n> >>\n> >\n> > autovacuum wasn't enabled by default until 8.3. It didn't really work\n> all\n> > that well out of the box until the support for multiple workers was added\n> in\n> > that version, along with some tweaking to its default parameters.\n> There's\n> > also a lot more logging information available, both the server logs and\n> the\n> > statistics tables, to watch what it's doing that were added in 8.3.\n>\n> you're right! iirc it was changed at the last minute...\n>\n> merlin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThe OP is using: autovacuum_vacuum_threshold | 100000\nThat means that vacuum won't consider a table to be 'vacuum-able' until after 100k changes.... that's nowhere near aggressive enough. Probably what's happening is that when autovacuum finally DOES start on a table, it just takes forever. \n--Scott\n\n2010/4/9 Merlin Moncure <[email protected]>\n2010/4/9 Greg Smith <[email protected]>:\n> Merlin Moncure wrote:\n>>\n>> postgresql 8.2: autovacuum enabled by default\n>> postgresql 8.3: HOT (reduces update penalty -- zabbix does a lot of\n>> updates)\n>>\n>\n> autovacuum wasn't enabled by default until 8.3. It didn't really work all\n> that well out of the box until the support for multiple workers was added in\n> that version, along with some tweaking to its default parameters. There's\n> also a lot more logging information available, both the server logs and the\n> statistics tables, to watch what it's doing that were added in 8.3.\n\nyou're right! iirc it was changed at the last minute...\n\nmerlin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 9 Apr 2010 11:13:39 -0400",
"msg_from": "Scott Mead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "Off-list message that should have made it onto here, from Krzysztof:\n\nI have changed PostgreSQL to 8.3. I think that the database is really working faster. New settings:\n\n name | unit | current_setting\n---------------------------------+------+-------------------\n autovacuum | | on\n autovacuum_analyze_scale_factor | | 0.1\n autovacuum_analyze_threshold | | 5000\n autovacuum_freeze_max_age | | 200000000\n autovacuum_max_workers | | 3\n autovacuum_naptime | s | 1min\n autovacuum_vacuum_cost_delay | ms | 20ms\n autovacuum_vacuum_cost_limit | | -1\n autovacuum_vacuum_scale_factor | | 0.2\n autovacuum_vacuum_threshold | | 5000\n checkpoint_segments | | 32\n constraint_exclusion | | off\n deadlock_timeout | ms | 1min\n default_statistics_target | | 100\n from_collapse_limit | | 8\n join_collapse_limit | | 8\n log_autovacuum_min_duration | ms | 0\n maintenance_work_mem | kB | 256MB\n max_connections | | 400\n max_fsm_pages | | 2048000\n max_locks_per_transaction | | 64\n max_prepared_transactions | | 100\n max_stack_depth | kB | 20MB\n random_page_cost | | 4\n shared_buffers | 8kB | 760MB\n statement_timeout | ms | 0\n temp_buffers | 8kB | 32768\n vacuum_cost_delay | ms | 0\n vacuum_cost_limit | | 200\n vacuum_cost_page_dirty | | 20\n vacuum_cost_page_hit | | 1\n vacuum_cost_page_miss | | 10\n wal_buffers | 8kB | 16MB\n work_mem | kB | 1600MB\n\n\nI trimmed the above a bit to focus on the performance related \nparameters. Just doing the 8.3 upgrade has switched over to sane \nautovacuum settings now, which should improve things significantly.\n\nThe main problem with this configuration is that work_mem is set to an \nunsafe value--1.6GB. With potentially 400 connections and about 2GB of \nRAM free after starting the server, work_mem='4MB' is as large as you \ncan safely set this.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Fri, 09 Apr 2010 12:03:41 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "On Fri, Apr 9, 2010 at 10:03 AM, Greg Smith <[email protected]> wrote:\n\n> The main problem with this configuration is that work_mem is set to an\n> unsafe value--1.6GB. With potentially 400 connections and about 2GB of RAM\n> free after starting the server, work_mem='4MB' is as large as you can safely\n> set this.\n\n> maintenance_work_mem | kB | 256MB\n\nNote that 256MB maintenance_work_mem on a machine with 3 autovac\nthreads and only 2 Gig free is kinda high too.\n",
"msg_date": "Fri, 9 Apr 2010 10:28:40 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "On Fri, Apr 9, 2010 at 12:03 PM, Greg Smith <[email protected]> wrote:\n> The main problem with this configuration is that work_mem is set to an\n> unsafe value--1.6GB. With potentially 400 connections and about 2GB of RAM\n> free after starting the server, work_mem='4MB' is as large as you can safely\n> set this.\n\nif you need more work_mem for this or that and also need to serve a\nlot of connections, you can always set it locally (1.6GB is still too\nhigh though -- maybe 64mb if you need to do a big sort or something\nlike that).\n\nAnother path to take is to install pgbouncer, which at 400 connections\nis worth considering -- but only if your client stack doesn't use\ncertain features that require a private database session. zabbix will\n_probably_ work because it is db portable software (still should check\nhowever).\n\nmerlin\n",
"msg_date": "Fri, 9 Apr 2010 12:30:26 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "On Fri, Apr 9, 2010 at 10:30 AM, Merlin Moncure <[email protected]> wrote:\n> On Fri, Apr 9, 2010 at 12:03 PM, Greg Smith <[email protected]> wrote:\n>> The main problem with this configuration is that work_mem is set to an\n>> unsafe value--1.6GB. With potentially 400 connections and about 2GB of RAM\n>> free after starting the server, work_mem='4MB' is as large as you can safely\n>> set this.\n>\n> if you need more work_mem for this or that and also need to serve a\n> lot of connections, you can always set it locally (1.6GB is still too\n> high though -- maybe 64mb if you need to do a big sort or something\n> like that).\n>\n> Another path to take is to install pgbouncer, which at 400 connections\n> is worth considering -- but only if your client stack doesn't use\n> certain features that require a private database session. zabbix will\n> _probably_ work because it is db portable software (still should check\n> however).\n\nAlso remember you can set it by user or by db, depending on your\nneeds. I had a server that had a reporting db and an app db. The app\ndb was set to 1 or 2 Meg work_mem, and the reporting db that had only\none or two threads ever run at once was set to 128Meg. Worked\nperfectly for what we needed.\n",
"msg_date": "Fri, 9 Apr 2010 10:44:59 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "2010/4/9 Scott Marlowe <[email protected]>:\n> On Fri, Apr 9, 2010 at 10:30 AM, Merlin Moncure <[email protected]> wrote:\n>> On Fri, Apr 9, 2010 at 12:03 PM, Greg Smith <[email protected]> wrote:\n>>> The main problem with this configuration is that work_mem is set to an\n>>> unsafe value--1.6GB. With potentially 400 connections and about 2GB of RAM\n>>> free after starting the server, work_mem='4MB' is as large as you can safely\n>>> set this.\n>>\n>> if you need more work_mem for this or that and also need to serve a\n>> lot of connections, you can always set it locally (1.6GB is still too\n>> high though -- maybe 64mb if you need to do a big sort or something\n>> like that).\n>>\n>> Another path to take is to install pgbouncer, which at 400 connections\n>> is worth considering -- but only if your client stack doesn't use\n>> certain features that require a private database session. zabbix will\n>> _probably_ work because it is db portable software (still should check\n>> however).\n>\n> Also remember you can set it by user or by db, depending on your\n> needs. I had a server that had a reporting db and an app db. The app\n> db was set to 1 or 2 Meg work_mem, and the reporting db that had only\n> one or two threads ever run at once was set to 128Meg. Worked\n> perfectly for what we needed.\n>\n\nThanks for all Your advices. I will set up new parameters on Monday\nmorning and see how it perform.\n\nGreetings for all PostgreSQL Team\n\n-- \nKrzysztof Kardas\n",
"msg_date": "Sat, 10 Apr 2010 20:54:04 +0200",
"msg_from": "Krzysztof Kardas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "<cut>\nHi all.\n\nWell I have used all Your recomendations but I still have no luck with\nperformance tunning. The machine has a moments thas was utilized in\n100%. The problem was I/O on disks. CPU's were busy on system\ninterrupts.\n\nI have started again to look of I/O performance tunning and I have changed a\n\nsynchronous_commit = off\n\nOfcourse with risk that if there will be a power failure I will lose\nsome data. But this is acceptable.\n\nThis caused a monumental performance jump. From a machine that is\nutilized on 100%, machine is now sleeping and doing nothing. I have\nexecuted some sqls on huge tables like history and all has executed\nlike lightning. Comparing to MySQL, PostgreSQL in this configuration\nis about 30 - 40% faster in serving data. Housekeeper is about 2 to 3\ntimes faster!!!!\n\nMany thanks to all helpers and all PostgreSQL team.\n\n-- \nGreeting\nKrzysztof Kardas\n",
"msg_date": "Wed, 14 Apr 2010 15:21:41 +0200",
"msg_from": "Krzysztof Kardas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "That really sounds like hardware issue. The I/O causes the system to freeze\nbasically.\nHappens sometimes on cheaper hardware.\n\nThat really sounds like hardware issue. The I/O causes the system to freeze basically. \nHappens sometimes on cheaper hardware.",
"msg_date": "Wed, 14 Apr 2010 14:30:46 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "Krzysztof Kardas <[email protected]> wrote:\n \n> synchronous_commit = off\n \n> This caused a monumental performance jump. From a machine that is\n> utilized on 100%, machine is now sleeping and doing nothing. I\n> have executed some sqls on huge tables like history and all has\n> executed like lightning. Comparing to MySQL, PostgreSQL in this\n> configuration is about 30 - 40% faster in serving data.\n> Housekeeper is about 2 to 3 times faster!!!!\n \nIf you have a good RAID controller with battery backup for the\ncache, and it's configured to write-back, this setting shouldn't\nmake very much difference. Next time you're looking at hardware for\na database server, I strongly recommend you get such a RAID\ncontroller and make sure it is configured to write-back.\n \nAnyway, I'm glad to hear that things are working well for you now!\n \n-Kevin\n",
"msg_date": "Wed, 14 Apr 2010 08:50:10 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
},
{
"msg_contents": "W dniu 14 kwietnia 2010 15:30 użytkownik Grzegorz Jaśkiewicz\n<[email protected]> napisał:\n> That really sounds like hardware issue. The I/O causes the system to freeze\n> basically.\n> Happens sometimes on cheaper hardware.\n>\n\nProbably You have right because this is HS21 Blade Server. And as You\nknow blades are cheap and good. Why blades are good - because they are\ncheap (quoting IBM salesman). I know this hardware is not made for\ndatabases but for now I do not have any other server. Firmware on this\ncurrent server is very old and it should be upgraded and there are\nmany other things to do. VMWare machines (currently I have ESX 3.5,\nvSphere 4 is based od 64bit RedHat5 system and is much faster that 3.5\nbut migration process is not even planned) has still to low\nperformance for database solutions (of course in using vmdk, not RAW\ndevice mapping or Virtual WWN solution for accessing LUN-s).\n\nAs more I am reading than more I see that the file system is wrong\npartitioned. For example - all logs and database files are on the same\nvolume, and that is not right.\nKevin Grittner also mentioned about write back function on the\ncontroller. LSI controllers for blades has that function as far as I\nknow. I have to check it if that option is turned on.\n\nAs I mentioned - I am not familiar with databases so I have made some\nmistakes but I am very happy for the effects how fast now Zabbix\nworks, and how easy PostgreSQL reclaims space. I think it was a good\ndecision and maybe I will try to interest some people in my company in\nPostgreSQL instate of Oracle XE.\n\nOnce more time - many thanks to all :)\n\n-- \nGreetings\nKrzysztof Kardas\n",
"msg_date": "Wed, 14 Apr 2010 21:02:01 +0200",
"msg_from": "Krzysztof Kardas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL with Zabbix - problem of newbe"
}
] |
[
{
"msg_contents": "Hi,\nCan anyone suggest why this query so slow.\n\nSELECT version();\n version\n--------------------------------------------------------------------------------------------------------- \n\nPostgreSQL 8.4.2 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD], 32-bit\n(1 row)\nexplain analyze SELECT DT.value,\n DT.meassure_date,\n DT.ms_status_id as status_id,\n S.descr_bg as status_bg,\n S.descr_en as status_en,\n VT.id as value_type_id,\n VT.descr_en as value_type_en,\n VT.descr_bg as value_type_bg,\n T.unit as value_type_unit,\n T.name as general_value_type,\n T.ms_db_type_id\n FROM\n ms_data AS DT,\n ms_statuses AS S,\n ms_value_types AS VT,\n ms_types AS T,\n ms_commands_history AS CH\n WHERE DT.ms_value_type_id = 88 AND\n DT.meassure_date >= '2010-04-01 1:00:00' AND\n DT.meassure_date <= '2010-04-01 1:10:00' AND\n DT.ms_command_history_id = CH.id AND\n CH.ms_device_id = 7 AND\n DT.ms_value_type_id = VT.id AND\n VT.ms_type_id = T.id AND\n DT.ms_status_id = S.id\n GROUP BY value,\n meassure_date,\n status_id,\n status_bg,\n status_en,\n value_type_id,\n value_type_en,\n value_type_bg,\n value_type_unit,\n general_value_type,\n ms_db_type_id\n ORDER BY meassure_date DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nGroup (cost=23.93..23.96 rows=1 width=229) (actual \ntime=63274.021..63274.021 rows=0 \nloops=1) \n-> Sort (cost=23.93..23.94 rows=1 width=229) (actual \ntime=63274.016..63274.016 rows=0 loops=1)\n Sort Key: dt.meassure_date, dt.value, dt.ms_status_id, s.descr_bg, \ns.descr_en, vt.id, vt.descr_en, vt.descr_bg, t.unit, t.name, \nt.ms_db_type_id \n Sort Method: quicksort Memory: 17kB\n -> Nested Loop (cost=0.00..23.92 rows=1 width=229) (actual \ntime=63273.982..63273.982 rows=0 \nloops=1) \n -> Nested Loop (cost=0.00..19.64 rows=1 width=165) (actual \ntime=63273.977..63273.977 rows=0 \nloops=1) \n -> Nested Loop (cost=0.00..15.36 rows=1 width=101) \n(actual time=63273.974..63273.974 rows=0 \nloops=1) \n -> Nested Loop (cost=0.00..11.08 rows=1 width=23) \n(actual time=63273.970..63273.970 rows=0 \nloops=1) \n -> Index Scan using \nms_commands_history_ms_device_id_idx on ms_commands_history ch \n(cost=0.00..4.33 rows=1 width=8) (actual time=0.163..25254.004 rows=9807 \nloops=1)\n Index Cond: (ms_device_id = 7)\n -> Index Scan using \nms_data_ms_command_history_id_idx on ms_data dt (cost=0.00..6.74 rows=1 \nwidth=31) (actual time=3.868..3.868 rows=0 loops=9807)\n Index Cond: \n(dt.ms_command_history_id = ch.id)\n Filter: ((dt.meassure_date >= \n'2010-04-01 01:00:00'::timestamp without time zone) AND \n(dt.meassure_date <= '2010-04-01 01:10:00'::timestamp without time zone) \nAND (dt.ms_value_type_id = 88))\n -> Index Scan using ms_value_types_pkey on \nms_value_types vt (cost=0.00..4.27 rows=1 width=82) (never \nexecuted) Index Cond: \n(vt.id = 88)\n -> Index Scan using ms_types_pkey on ms_types t \n(cost=0.00..4.27 rows=1 width=72) (never \nexecuted) \nIndex Cond: (t.id = vt.ms_type_id)\n -> Index Scan using ms_statuses_pkey on ms_statuses s \n(cost=0.00..4.27 rows=1 width=68) (never \nexecuted) Index \nCond: (s.id = dt.ms_status_id)\nTotal runtime: 63274.256 ms \n\nThanks in advance.\n\n Kaloyan Iliev\n",
"msg_date": "Thu, 08 Apr 2010 15:58:27 +0300",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Optimization"
},
{
"msg_contents": "Sorry for the noise.\nI make vacuum analyze and the problem is solved.\nKaloyan Iliev\n\nKaloyan Iliev Iliev wrote:\n> Hi,\n> Can anyone suggest why this query so slow.\n>\n> SELECT version();\n> version\n> --------------------------------------------------------------------------------------------------------- \n>\n> PostgreSQL 8.4.2 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC) \n> 4.2.1 20070719 [FreeBSD], 32-bit\n> (1 row)\n> explain analyze SELECT DT.value,\n> DT.meassure_date,\n> DT.ms_status_id as status_id,\n> S.descr_bg as status_bg,\n> S.descr_en as status_en,\n> VT.id as value_type_id,\n> VT.descr_en as value_type_en,\n> VT.descr_bg as value_type_bg,\n> T.unit as value_type_unit,\n> T.name as general_value_type,\n> T.ms_db_type_id\n> FROM\n> ms_data AS DT,\n> ms_statuses AS S,\n> ms_value_types AS VT,\n> ms_types AS T,\n> ms_commands_history AS CH\n> WHERE DT.ms_value_type_id = 88 AND\n> DT.meassure_date >= '2010-04-01 1:00:00' AND\n> DT.meassure_date <= '2010-04-01 1:10:00' AND\n> DT.ms_command_history_id = CH.id AND\n> CH.ms_device_id = 7 AND\n> DT.ms_value_type_id = VT.id AND\n> VT.ms_type_id = T.id AND\n> DT.ms_status_id = S.id\n> GROUP BY value,\n> meassure_date,\n> status_id,\n> status_bg,\n> status_en,\n> value_type_id,\n> value_type_en,\n> value_type_bg,\n> value_type_unit,\n> general_value_type,\n> ms_db_type_id\n> ORDER BY meassure_date DESC;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n>\n> Group (cost=23.93..23.96 rows=1 width=229) (actual \n> time=63274.021..63274.021 rows=0 \n> loops=1) \n> -> Sort (cost=23.93..23.94 rows=1 width=229) (actual \n> time=63274.016..63274.016 rows=0 loops=1)\n> Sort Key: dt.meassure_date, dt.value, dt.ms_status_id, s.descr_bg, \n> s.descr_en, vt.id, vt.descr_en, vt.descr_bg, t.unit, t.name, \n> t.ms_db_type_id Sort Method: quicksort Memory: 17kB\n> -> Nested Loop (cost=0.00..23.92 rows=1 width=229) (actual \n> time=63273.982..63273.982 rows=0 \n> loops=1) \n> -> Nested Loop (cost=0.00..19.64 rows=1 width=165) \n> (actual time=63273.977..63273.977 rows=0 \n> loops=1) \n> -> Nested Loop (cost=0.00..15.36 rows=1 width=101) \n> (actual time=63273.974..63273.974 rows=0 \n> loops=1) \n> -> Nested Loop (cost=0.00..11.08 rows=1 width=23) \n> (actual time=63273.970..63273.970 rows=0 \n> loops=1) \n> -> Index Scan using \n> ms_commands_history_ms_device_id_idx on ms_commands_history ch \n> (cost=0.00..4.33 rows=1 width=8) (actual time=0.163..25254.004 \n> rows=9807 loops=1)\n> Index Cond: (ms_device_id = 7)\n> -> Index Scan using \n> ms_data_ms_command_history_id_idx on ms_data dt (cost=0.00..6.74 \n> rows=1 width=31) (actual time=3.868..3.868 rows=0 loops=9807)\n> Index Cond: \n> (dt.ms_command_history_id = ch.id)\n> Filter: ((dt.meassure_date >= \n> '2010-04-01 01:00:00'::timestamp without time zone) AND \n> (dt.meassure_date <= '2010-04-01 01:10:00'::timestamp without time \n> zone) AND (dt.ms_value_type_id = 88))\n> -> Index Scan using ms_value_types_pkey on \n> ms_value_types vt (cost=0.00..4.27 rows=1 width=82) (never \n> executed) Index Cond: \n> (vt.id = 88)\n> -> Index Scan using ms_types_pkey on ms_types t \n> (cost=0.00..4.27 rows=1 width=72) (never \n> executed) \n> Index Cond: (t.id = vt.ms_type_id)\n> -> Index Scan using ms_statuses_pkey on ms_statuses s \n> (cost=0.00..4.27 rows=1 width=68) (never \n> 
executed) Index \n> Cond: (s.id = dt.ms_status_id)\n> Total runtime: 63274.256 ms \n> Thanks in advance.\n>\n> Kaloyan Iliev\n>\n",
"msg_date": "Thu, 08 Apr 2010 16:41:16 +0300",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Optimization"
},
{
"msg_contents": "In response to Kaloyan Iliev Iliev :\n> Hi,\n> Can anyone suggest why this query so slow.\n\n\n> -> Index Scan using \n> ms_commands_history_ms_device_id_idx on ms_commands_history ch \n> (cost=0.00..4.33 rows=1 width=8) (actual time=0.163..25254.004 rows=9807 \n> loops=1)\n\nEstimated rows: 1, actual rows: 9807, that's a BIG difference and,\nmaybe, your problem.\n\nBtw.: your explain is hard to read (line-wrapping). It's better to\nattach the explain as an own file...\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 8 Apr 2010 16:03:29 +0200",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Optimization"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nAnybody have the test case of “ context-switching issue on Xeon” from Tm lane ?\n\n \n\nBest regards,\n\nRay Huang\n\n \n\n\n\n\n\n\n\n\n\n\n \nHi,\n \nAnybody have the test case of “ context-switching issue on Xeon” from\n Tm lane ?\n \nBest regards,\nRay Huang",
"msg_date": "Fri, 9 Apr 2010 15:23:02 +0800",
"msg_from": "=?gb2312?B?UkS7xtPAzsA=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?gb2312?B?QWJvdXQgobBjb250ZXh0LXN3aXRjaGluZyBpc3N1ZSBvbiBYZW9uobEgdGU=?=\n\t=?gb2312?B?c3QgY2FzZSCjvyA=?="
},
{
"msg_contents": "RD锟斤拷锟斤拷锟斤拷 wrote:\n>\n> Anybody have the test case of 锟斤拷 context-switching issue on Xeon锟斤拷 from\n> Tm lane 锟斤拷\n>\n\nThat takes me back:\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n\nThat's a problem seen on 2004 era Xeon processors, and with PostgreSQL\n7.4. I doubt it has much relevance nowadays, given a) that whole area of\nthe code was rewritten for PostgreSQL 8.1, and b) today's Xeons are\nnothing like 2004's Xeons.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Sat, 10 Apr 2010 00:02:35 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About =?GB2312?B?obBjb250ZXh0LXN3aXRjaGluZyBpc3N1ZQ==?=\n\t=?GB2312?B?IG9uIFhlb26hsSB0ZXN0IGNhc2Ugo78g?="
},
{
"msg_contents": "2010/4/9 Greg Smith <[email protected]>:\n> RD黄永卫 wrote:\n>>\n>> Anybody have the test case of “ context-switching issue on Xeon” from\n>> Tm lane ?\n>>\n>\n> That takes me back:\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n>\n> That's a problem seen on 2004 era Xeon processors, and with PostgreSQL\n> 7.4. I doubt it has much relevance nowadays, given a) that whole area of\n> the code was rewritten for PostgreSQL 8.1, and b) today's Xeons are\n> nothing like 2004's Xeons.\n\nIt's important to appreciate that all improvements in scalability for\nxeons, opterons, and everything else has mostly just moved further\nalong to the right on the graph where you start doing more context\nswitching than work, and the performance falls off. The same way that\n(sometimes) throwing more cores at a problem can help. For most\noffice sized pgsql servers there's still a real possibility of having\na machine getting slammed and one of the indicators of that is that\ncontext switches per second will start to jump up and the machine gets\nsluggish.\n\nFor 2 sockets Intel rules the roost. I'd imagine AMD's much faster\nbus architecture for >2 sockets would make them the winner, but I\nhaven't had a system like that to test, either Intel or AMD.\n",
"msg_date": "Fri, 9 Apr 2010 23:05:00 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?GB2312?Q?Re=3A_=5BPERFORM=5D_About_=A1=B0context=2Dswitching_issue_on_Xe?=\n\t=?GB2312?Q?on=A1=B1_test_case_=A3=BF?="
},
{
"msg_contents": "2010/4/9 Greg Smith <[email protected]>:\n> RD黄永卫 wrote:\n>>\n>> Anybody have the test case of “ context-switching issue on Xeon” from\n>> Tm lane ?\n>>\n>\n> That takes me back:\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n>\n> That's a problem seen on 2004 era Xeon processors, and with PostgreSQL\n> 7.4. I doubt it has much relevance nowadays, given a) that whole area of\n> the code was rewritten for PostgreSQL 8.1, and b) today's Xeons are\n> nothing like 2004's Xeons.\n\nNote that I found this comment by some guy named Greg from almost a\nyear ago that I thought was relevant. (I'd recommend reading the\nwhole thread.)\n\nhttp://archives.postgresql.org/pgsql-performance/2009-06/msg00097.php\n",
"msg_date": "Fri, 9 Apr 2010 23:07:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?GB2312?Q?Re=3A_=5BPERFORM=5D_About_=A1=B0context=2Dswitching_issue_on_Xe?=\n\t=?GB2312?Q?on=A1=B1_test_case_=A3=BF?="
},
{
"msg_contents": "Scott Marlowe wrote:\n> For 2 sockets Intel rules the roost. I'd imagine AMD's much faster\n> bus architecture for >2 sockets would make them the winner, but I\n> haven't had a system like that to test, either Intel or AMD.\n> \n\nAMD has been getting such poor performance due to the RAM they've been\nusing (DDR2-800) that it really doesn't matter--Intel has been thrashing\nthem across the board continuously since the \"Nehalem\" processors became\navailable, which started in volume in 2009. Intel systems with 3\nchannels of DDR3-1066 or faster outperform any scale of AMD deployment\non DDR2, and nowadays even Intel's cheaper desktop processors have 2\nchannels of DDR3-1600 in them.\n\nThat's been the situation for almost 18 months now anyway. AMD's new\n\"Magny-Cours\" Opterons have finally adopted DDR3-1333, and closed the\nmain performance gap with Intel again. Recently I've found the \"Oracle\nCalling Circle\" benchmarking numbers that Anand runs seem to match what\nI see in terms of CPU-bound PostgreSQL database workloads, and the latest\nset at\nhttp://it.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/8\nshow how the market now fits together. AMD had a clear lead when it was\nXeon E5450 vs. Opteron 2389, Intel pulled way ahead with the X5570 and\nlater processors. Only this month did the Opteron 6174 finally become\ncompetitive again. They're back to being only a little slower at two\nsockets, instead of not even close. A 4 socket version of the latest\nOpterons with DDR3 might even unseat Intel on some workloads, it's at\nleast possible again.\n\nAnyway, returning to \"context switching on Xeon\", there were some\nspecific issues with the older PostgreSQL code that conflicted badly\nwith the Xeons of the time, and the test case Tom put together was good\nat inflicting the issue. There certainly are still potential ways to\nhave the current processors and database code run into context switching\nissues. I wouldn't expect that particular test case would be the best\nway to go looking for them though, which is the reason I highlighted its\nage and general obsolescence.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Sat, 10 Apr 2010 22:22:50 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About =?GB2312?B?obBjb250ZXh0LXN3aXRjaGluZyBpc3N1ZQ==?=\n\t=?GB2312?B?IG9uIFhlb26hsSB0ZXN0IGNhc2Ugo78=?="
},
{
"msg_contents": "2010/4/10 Greg Smith <[email protected]>:\n> Scott Marlowe wrote:\n>> For 2 sockets Intel rules the roost. I'd imagine AMD's much faster\n>> bus architecture for >2 sockets would make them the winner, but I\n>> haven't had a system like that to test, either Intel or AMD.\n>>\n>\n> AMD has been getting such poor performance due to the RAM they've been\n> using (DDR2-800) that it really doesn't matter--Intel has been thrashing\n> them across the board continuously since the \"Nehalem\" processors became\n> available, which started in volume in 2009.\n\nConsidering the nehalems are only available (or were at least) in two\nsocket varieties, and that opterons have two channels per socket\nwouldn't the aggregate performance of 8 sockets x 2 channels each beat\nthe 2 sockets / 3 channels each?\n",
"msg_date": "Sun, 11 Apr 2010 09:43:17 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?GB2312?Q?Re=3A_=5BPERFORM=5D_About_=A1=B0context=2Dswitching_issue_on_Xe?=\n\t=?GB2312?Q?on=A1=B1_test_case_=A3=BF?="
},
{
"msg_contents": "Thank you for you reply!\n\n \n\n “one of the indicators of that is that context switches per second will start to jump up and the machine gets\n\nSluggish”\n\n \n\n--> Here is my database server indicator: \n\n \n\nThese is ther VMSTAT log of my database server as below:\n\n \n\n2010-04-07 04:03:15 procs memory swap io system cpu\n\n2010-04-07 04:03:15 r b swpd free buff cache si so bi bo in cs us sy id wa\n\n2010-04-07 14:04:27 3 0 0 2361272 272684 3096148 0 0 3 1445 973 14230 7 8 84 0\n\n2010-04-07 14:05:27 2 0 0 2361092 272684 3096220 0 0 3 1804 1029 31852 8 10 81 1\n\n2010-04-07 14:06:27 1 0 0 2362236 272684 3096564 0 0 3 1865 1135 19689 9 9 81 0\n\n2010-04-07 14:07:27 1 0 0 2348400 272720 3101836 0 0 3 1582 1182 149461 15 17 67 0\n\n2010-04-07 14:08:27 3 0 0 2392028 272840 3107600 0 0 3 3093 1275 203196 24 23 53 1\n\n2010-04-07 14:09:27 3 1 0 2386224 272916 3107960 0 0 3 2486 1331 193299 26 22 52 0\n\n2010-04-07 14:10:27 34 0 0 2332320 272980 3107944 0 0 3 1692 1082 214309 24 22 54 0\n\n2010-04-07 14:11:27 1 0 0 2407432 273028 3108092 0 0 6 2770 1540 76643 29 13 57 1\n\n2010-04-07 14:12:27 9 0 0 2358968 273104 3108388 0 0 7 2639 1466 10603 22 6 72 1\n\n \n\n My postgres version: 8.1.3; \n\nMy OS version: Linux version 2.4.21-47.Elsmp((Red Hat Linux 3.2.3-54)\n\nMy CPU:\n\nprocessor : 7\n\nvendor_id : GenuineIntel\n\ncpu family : 15\n\nmodel : 6\n\nmodel name : Intel(R) Xeon(TM) CPU 3.40GHz\n\nstepping : 8\n\ncpu MHz : 3400.262\n\ncache size : 1024 KB\n\nphysical id : 1\n\n \n\n \n\nI donnt know what make the “context-switching” storm ? \n\nHow should I investigate <javascript:;> the real reason ?\n\nCould you please give me some advice ? \n\n \n\nThank you !\n\n \n\n-----邮件原件-----\n发件人: Scott Marlowe [mailto:[email protected]] \n发送时间: 2010年4月10日 13:05\n收件人: Greg Smith\n抄送: RD黄永卫; [email protected]\n主题: Re: [PERFORM] About “context-switching issue on Xeon” test case ?\n\n \n\n2010/4/9 Greg Smith <[email protected]>:\n\n> RD黄永卫 wrote:\n\n>> \n\n>> Anybody have the test case of “ context-switching issue on Xeon” from\n\n>> Tm lane ?\n\n>> \n\n> \n\n> That takes me back:\n\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n\n> \n\n> That's a problem seen on 2004 era Xeon processors, and with PostgreSQL\n\n> 7.4. I doubt it has much relevance nowadays, given a) that whole area of\n\n> the code was rewritten for PostgreSQL 8.1, and b) today's Xeons are\n\n> nothing like 2004's Xeons.\n\n \n\nIt's important to appreciate that all improvements in scalability for\n\nxeons, opterons, and everything else has mostly just moved further\n\nalong to the right on the graph where you start doing more context\n\nswitching than work, and the performance falls off. The same way that\n\n(sometimes) throwing more cores at a problem can help. For most\n\noffice sized pgsql servers there's still a real possibility of having\n\na machine getting slammed and one of the indicators of that is that\n\ncontext switches per second will start to jump up and the machine gets\n\nsluggish.\n\n \n\nFor 2 sockets Intel rules the roost. 
I'd imagine AMD's much faster\n\nbus architecture for >2 sockets would make them the winner, but I\n\nhaven't had a system like that to test, either Intel or AMD.",
"msg_date": "Mon, 12 Apr 2010 09:10:28 +0800",
"msg_from": "=?gb2312?B?UkS7xtPAzsA=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?gb2312?B?tPC4tDogW1BFUkZPUk1dIEFib3V0IKGwY29udGV4dC1zd2l0Y2hpbg==?=\n\t=?gb2312?B?ZyBpc3N1ZSBvbiBYZW9uobEgdGVzdCBjYXNlIKO/?="
}
] |
[
{
"msg_contents": "I saw this in the postgres log. Anyone know what would cause this?\nThanks, Brian\n\npostgres 8.3.5 on RHEL4 update 6\n\n[3358-cemdb-admin-2010-04-09 04:00:19.029 PDT]ERROR: could not open \nrelation with OID 170592\n[3358-cemdb-admin-2010-04-09 04:00:19.029 PDT]STATEMENT: select \nlm.ts_login_name,sm.ts_session_id from ts_user_logins_map lm join \nts_user_sessions_map sm on sm.ts_user_id=lm.ts_user_id where not \nsm.ts_soft_delete and not lm.ts_soft_delete and lm.ts_user_id != 1 and \nlm.ts_app_id in (600000000000000001) order by sm.ts_creation_time\n",
"msg_date": "Fri, 09 Apr 2010 12:43:19 -0700",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"could not open relation...\""
},
{
"msg_contents": "Brian Cox <[email protected]> writes:\n> I saw this in the postgres log. Anyone know what would cause this?\n> Thanks, Brian\n\n> postgres 8.3.5 on RHEL4 update 6\n\n> [3358-cemdb-admin-2010-04-09 04:00:19.029 PDT]ERROR: could not open \n> relation with OID 170592\n\nSeems a bit off-topic for pgsql-performance, but anyway: the main\nknown cause for that is if one of the tables used in the query got\ndropped (by another session) just after the query started. Could\nthat have happened to you?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 10 Apr 2010 00:29:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"could not open relation...\" "
}
] |
[
{
"msg_contents": "I've got a sql language function which does a fairly simple select from a table. If I give it one value, it performs quickly (half a ms). If I give it another value, it does not (1.1 seconds). When I run the equivalent select outside of the function, both values perform roughly the same (even though one value returns 140k more rows, as expected). \n\nMy understanding is that this generally happens because the plan should be different for the different values, but the first time the function is run it caches the plan for one of the values and will never use the appropriate plan for the second value. However, when I do an explain analyze of the raw sql for both values, I get the same plan. So my understanding must be wrong?\n\nI suppose the other possibility is that the slower value is slower in a function because it's returning 140k more rows and the function has to deal with that additional data..... but that seems far-fetched, given that each row is just an int.",
"msg_date": "Sat, 10 Apr 2010 13:47:58 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": true,
"msg_subject": "function performs differently with different values"
},
{
"msg_contents": "On Sat, Apr 10, 2010 at 4:47 PM, Ben Chobot <[email protected]> wrote:\n> My understanding is that this generally happens because the plan should be different for the different values, but the first time the function is run it caches the plan for one of the values and will never use the appropriate plan for the second value.\n\nNo, it plans based on a sort of \"generic value\", not the first one you\nsupply. The way to get at that plan is:\n\nPREPARE foo AS <query>;\nEXPLAIN EXECUTE foo (parameters);\n\n...Robert\n",
"msg_date": "Mon, 12 Apr 2010 09:21:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function performs differently with different values"
}
] |
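A hedged sketch of the check Robert describes; the statement, table and column names below are made up for illustration — substitute the query from inside the actual SQL-language function:

    -- stand-in for the parameterized query inside the function
    PREPARE fn_body(int) AS
        SELECT id FROM example_table WHERE group_id = $1;

    -- shows the plan built for a generic parameter value, which per Robert's
    -- note is what the function's planning is based on
    EXPLAIN ANALYZE EXECUTE fn_body(42);

    DEALLOCATE fn_body;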
[
{
"msg_contents": "On 04/10/2010 12:29 AM, Tom Lane [[email protected]] wrote:\n> Seems a bit off-topic for pgsql-performance,\nWhat would be the appropriate forum?\n\n> but anyway: the main\n> known cause for that is if one of the tables used in the query got\n> dropped (by another session) just after the query started. Could\n> that have happened to you?\ninteresting and possible; but why doesn't locking prevent this?\n\nBrian\n",
"msg_date": "Sat, 10 Apr 2010 21:02:37 -0700",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"could not open relation...\""
},
{
"msg_contents": "Brian Cox <[email protected]> writes:\n> On 04/10/2010 12:29 AM, Tom Lane [[email protected]] wrote:\n>> but anyway: the main\n>> known cause for that is if one of the tables used in the query got\n>> dropped (by another session) just after the query started. Could\n>> that have happened to you?\n\n> interesting and possible; but why doesn't locking prevent this?\n\nLocking prevents you from getting some random internal failure\nthat would happen if the table disappeared mid-query. It cannot\neliminate the problem altogether, because the issue here is that\nthe dropping transaction got there first; there is no reason not\nto allow it to proceed. So the reading transaction is going to fail.\nThe most we could change is the spelling of the error message,\nbut I'm not sure that masking the fact that something odd happened\nis a good idea. (From the reader's viewpoint, it did see a table\nby that name in the catalogs, but by the time it acquired lock on\nthe table, the catalog entry was gone. Playing dumb and just saying\n\"table does not exist\" could be even more confusing than the current\nbehavior.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Apr 2010 15:25:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"could not open relation...\" "
}
] |
[
{
"msg_contents": "Ľubomír Varga wrote:\n \n> SELECT * FROM t_route\n> WHERE t_route.route_type_fk = 1\n> limit 4;\n \nThis one scanned the t_route table until it found four rows that\nmatched. It apparently didn't need to look at very many rows to find\nthe four matches, so it was fast.\n \n> SELECT * FROM t_route\n> WHERE t_route.route_type_fk =\n> (SELECT id FROM t_route_type WHERE type = 2)\n> limit 4;\n \nThis one came up with an id for a route type that didn't have any\nmatches in the t_route table, so it had to scan the entire t_route\ntable. (Based on your next query, the subquery probably returned\nNULL, so there might be room for some optimization here.) If you had\nchosen a route type with at least four matches near the start of the\nroute table, this query would have completed quickly.\n \n> SELECT * FROM t_route, t_route_type\n> WHERE t_route.route_type_fk = t_route_type.id\n> AND type = 2\n> limit 4;\n \nSince it didn't find any t_route_type row which matched, it knew\nthere couldn't be any output from the JOIN, so it skipped the scan of\nthe t_route table entirely.\n \n-Kevin\n\n\n",
"msg_date": "Sun, 11 Apr 2010 07:44:25 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some question"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm having a query where the planer chooses a very bad plan.\n\nexplain analyze SELECT * FROM \"telegrams\" WHERE ((recipient_id=508933 \nAND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE)) \nORDER BY id DESC LIMIT 10 OFFSET 0\n\n\"Limit (cost=0.00..1557.67 rows=10 width=78) (actual \ntime=0.096..2750.058 rows=5 loops=1)\"\n\" -> Index Scan Backward using telegrams_pkey on telegrams \n(cost=0.00..156545.47 rows=1005 width=78) (actual time=0.093..2750.052 \nrows=5 loops=1)\"\n\" Filter: (((recipient_id = 508933) AND (NOT recipient_deleted)) \nOR ((user_id = 508933) AND (NOT user_deleted)))\"\n\"Total runtime: 2750.124 ms\"\n\n\nWhen I force the planer not use do index scans, the plans looks MUCH \nbetter (10.000x faster):\n\nset enable_indexscan = false;\nexplain analyze SELECT * FROM \"telegrams\" WHERE ((recipient_id=508933 \nAND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE)) \nORDER BY id DESC LIMIT 10 OFFSET 0\n\n\"Limit (cost=2547.16..2547.16 rows=10 width=78) (actual \ntime=0.179..0.185 rows=5 loops=1)\"\n\" -> Sort (cost=2547.16..2547.41 rows=1005 width=78) (actual \ntime=0.177..0.178 rows=5 loops=1)\"\n\" Sort Key: id\"\n\" Sort Method: quicksort Memory: 26kB\"\n\" -> Bitmap Heap Scan on telegrams (cost=17.39..2544.98 \nrows=1005 width=78) (actual time=0.124..0.158 rows=5 loops=1)\"\n\" Recheck Cond: ((recipient_id = 508933) OR (user_id = \n508933))\"\n\" Filter: (((recipient_id = 508933) AND (NOT \nrecipient_deleted)) OR ((user_id = 508933) AND (NOT user_deleted)))\"\n\" -> BitmapOr (cost=17.39..17.39 rows=1085 width=0) \n(actual time=0.104..0.104 rows=0 loops=1)\"\n\" -> Bitmap Index Scan on telegrams_recipient \n(cost=0.00..8.67 rows=536 width=0) (actual time=0.033..0.033 rows=1 \nloops=1)\"\n\" Index Cond: (recipient_id = 508933)\"\n\" -> Bitmap Index Scan on telegrams_user \n(cost=0.00..8.67 rows=549 width=0) (actual time=0.069..0.069 rows=4 \nloops=1)\"\n\" Index Cond: (user_id = 508933)\"\n\"Total runtime: 0.276 ms\"\n\n\nThe table contains several millions records and it's just be \nreindexed/analyzed.\n\nAre there any parameters I can tune so that pgsql itself chooses the \nbest plan? :)\n\n# - Memory -\nshared_buffers = 256MB\ntemp_buffers = 32MB\nwork_mem = 4MB\nmaintenance_work_mem = 32MB\n\n# - Planner Cost Constants -\nseq_page_cost = 1.0\nrandom_page_cost = 2.5\ncpu_tuple_cost = 0.001\ncpu_index_tuple_cost = 0.0005\ncpu_operator_cost = 0.00025\neffective_cache_size = 20GB\n\n# - Genetic Query Optimizer -\ngeqo = on\n\nThanks,\nCorin\n\n",
"msg_date": "Sun, 11 Apr 2010 23:12:30 +0200",
"msg_from": "Corin <[email protected]>",
"msg_from_op": true,
"msg_subject": "planer chooses very bad plan"
},
{
"msg_contents": "On Sun, Apr 11, 2010 at 3:12 PM, Corin <[email protected]> wrote:\n> Hi,\n>\n> I'm having a query where the planer chooses a very bad plan.\n\nIn both instances your number of rows estimated is WAY higher than the\nactual number of rows returned. Perhaps if you increased\ndefault_statistics_target to 100, 200, 500 etc., re-analyzed, and then\nrerun explain analyze again.\n\nAlso increasing work_mem might encourage the bitmap index scans to occur.\n",
"msg_date": "Sun, 11 Apr 2010 15:18:53 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planer chooses very bad plan"
},
{
"msg_contents": "On Sun, 2010-04-11 at 23:12 +0200, Corin wrote:\n> Hi,\n> \n> I'm having a query where the planer chooses a very bad plan.\n> \n> explain analyze SELECT * FROM \"telegrams\" WHERE ((recipient_id=508933 \n> AND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE)) \n> ORDER BY id DESC LIMIT 10 OFFSET 0\n> \n> \"Limit (cost=0.00..1557.67 rows=10 width=78) (actual \n> time=0.096..2750.058 rows=5 loops=1)\"\n> \" -> Index Scan Backward using telegrams_pkey on telegrams \n> (cost=0.00..156545.47 rows=1005 width=78) (actual time=0.093..2750.052 \n> rows=5 loops=1)\"\n> \" Filter: (((recipient_id = 508933) AND (NOT recipient_deleted)) \n> OR ((user_id = 508933) AND (NOT user_deleted)))\"\n> \"Total runtime: 2750.124 ms\"\n\nYou could check if creating special deleted_x indexes helps\n\ndo\n\nCREATE INDEX tgrm_deleted_recipent_index ON telegrams(recipient_id)\n WHERE recipient_deleted=FALSE;\n\nCREATE INDEX tgrm_deleted_user_index ON telegrams(user_id) \n WHERE user_deleted=FALSE;\n\n(if on live system, use \"CREATE INDEX CONCURRENTLY ...\")\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n",
"msg_date": "Mon, 12 Apr 2010 00:25:12 +0300",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planer chooses very bad plan"
},
{
"msg_contents": "On 11.04.2010 23:18, Scott Marlowe wrote:\n> In both instances your number of rows estimated is WAY higher than the\n> actual number of rows returned. Perhaps if you increased\n> default_statistics_target to 100, 200, 500 etc. re-analyzed, and then\n> reun explain analyze again.\n>\n> Also increasing work_mem might encourage the bitmap index scans to occur.\n> \nIncreasing the statistics >= 500 indeed helped a lot and causes the \nplanner to choose a good plan. :)\n\nI'm now thinking about increasing the default_statistics_target of the \nwhole server from the default (100) to 1000, because I have many tables \nwith similar data. As the size of the table index seems not change at \nall, I wonder how much additional storage is needed? I only care about \nruntime performance: are inserts/updates affected by this change? Or is \nonly analyze affected (only run once during the night)?\n\nThanks,\nCorin\n\n",
"msg_date": "Mon, 12 Apr 2010 00:41:19 +0200",
"msg_from": "Corin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planer chooses very bad plan"
},
{
"msg_contents": "On Sun, Apr 11, 2010 at 4:41 PM, Corin <[email protected]> wrote:\n> On 11.04.2010 23:18, Scott Marlowe wrote:\n>>\n>> In both instances your number of rows estimated is WAY higher than the\n>> actual number of rows returned. Perhaps if you increased\n>> default_statistics_target to 100, 200, 500 etc. re-analyzed, and then\n>> reun explain analyze again.\n>>\n>> Also increasing work_mem might encourage the bitmap index scans to occur.\n>>\n>\n> Increasing the statistics >= 500 indeed helped a lot and causes the planner\n> to choose a good plan. :)\n>\n> I'm now thinking about increasing the default_statistics_target of the whole\n> server from the default (100) to 1000, because I have many tables with\n> similar data. As the size of the table index seems not change at all, I\n> wonder how much additional storage is needed? I only care about runtime\n> performance: are inserts/updates affected by this change? Or is only analyze\n> affected (only run once during the night)?\n\ndefault stats target has more to do with how many distinct values /\nranges of values you have. If your data has a nice smooth curve of\ndistribution smaller values are ok. Large datasets with very weird\ndata distributions can throw off the planner.\n\nThere's a cost for both analyzing and for query planning. If 500\nfixes this table, and all the other tables are fine at 100 then it\nmight be worth doing an alter table alter column for just this column.\n However, then you've got to worry about time spent monitoring and\nanalyzing queries in the database for if / when they need a higher\nstats target.\n\nAlso, look at increasing effective cache size if the db fits into\nmemory. Lowering random page cost helps too.\n",
"msg_date": "Sun, 11 Apr 2010 16:49:49 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planer chooses very bad plan"
},
{
"msg_contents": "\n> explain analyze SELECT * FROM \"telegrams\" WHERE ((recipient_id=508933 \n> AND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE)) \n> ORDER BY id DESC LIMIT 10 OFFSET 0\n\nIf you need very fast performance on this query, you need to be able to \nuse the index for ordering.\n\nNote that the following query will only optimize the first page of results \nin the case you want to display BOTH sent and received telegrams.\n\n\n- Create an index on (recipient_id, id) WHERE NOT recipient_deleted\n- Create an index on (user_id, id) WHERE NOT user_deleted\n- Drop redundant indexes (recipient_id) and (user_id)\n\nSELECT * FROM (\nSELECT * FROM \"telegrams\" WHERE recipient_id=508933 AND \nrecipient_deleted=FALSE ORDER BY id DESC LIMIT 10\nUNION ALL\nSELECT * FROM \"telegrams\" WHERE user_id=508933 AND user_deleted=FALSE \nORDER BY id DESC LIMIT 10\n) AS foo ORDER BY id DESC LIMIT 10;\n\nThese indexes will also optimize the queries where you only display the \ninbox and outbox, in which case it will be able to use the index for \nordering on any page, because there will be no UNION.\n\n",
"msg_date": "Mon, 12 Apr 2010 12:06:41 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planer chooses very bad plan"
}
] |
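A minimal sketch of the narrower alternative Scott mentions above (raising the statistics target only for the skewed columns of this table rather than server-wide); 500 is the value the poster found sufficient, and the table/column names are the ones from this thread:

    ALTER TABLE telegrams ALTER COLUMN recipient_id SET STATISTICS 500;
    ALTER TABLE telegrams ALTER COLUMN user_id SET STATISTICS 500;
    ANALYZE telegrams;   -- the new target only takes effect after an ANALYZE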
[
{
"msg_contents": "Try random_page_cost=100 \r\n\r\n- Luke\r\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Sun Apr 11 14:12:30 2010\nSubject: [PERFORM] planer chooses very bad plan\r\n\r\nHi,\r\n\r\nI'm having a query where the planer chooses a very bad plan.\r\n\r\nexplain analyze SELECT * FROM \"telegrams\" WHERE ((recipient_id=508933 \r\nAND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE)) \r\nORDER BY id DESC LIMIT 10 OFFSET 0\r\n\r\n\"Limit (cost=0.00..1557.67 rows=10 width=78) (actual \r\ntime=0.096..2750.058 rows=5 loops=1)\"\r\n\" -> Index Scan Backward using telegrams_pkey on telegrams \r\n(cost=0.00..156545.47 rows=1005 width=78) (actual time=0.093..2750.052 \r\nrows=5 loops=1)\"\r\n\" Filter: (((recipient_id = 508933) AND (NOT recipient_deleted)) \r\nOR ((user_id = 508933) AND (NOT user_deleted)))\"\r\n\"Total runtime: 2750.124 ms\"\r\n\r\n\r\nWhen I force the planer not use do index scans, the plans looks MUCH \r\nbetter (10.000x faster):\r\n\r\nset enable_indexscan = false;\r\nexplain analyze SELECT * FROM \"telegrams\" WHERE ((recipient_id=508933 \r\nAND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE)) \r\nORDER BY id DESC LIMIT 10 OFFSET 0\r\n\r\n\"Limit (cost=2547.16..2547.16 rows=10 width=78) (actual \r\ntime=0.179..0.185 rows=5 loops=1)\"\r\n\" -> Sort (cost=2547.16..2547.41 rows=1005 width=78) (actual \r\ntime=0.177..0.178 rows=5 loops=1)\"\r\n\" Sort Key: id\"\r\n\" Sort Method: quicksort Memory: 26kB\"\r\n\" -> Bitmap Heap Scan on telegrams (cost=17.39..2544.98 \r\nrows=1005 width=78) (actual time=0.124..0.158 rows=5 loops=1)\"\r\n\" Recheck Cond: ((recipient_id = 508933) OR (user_id = \r\n508933))\"\r\n\" Filter: (((recipient_id = 508933) AND (NOT \r\nrecipient_deleted)) OR ((user_id = 508933) AND (NOT user_deleted)))\"\r\n\" -> BitmapOr (cost=17.39..17.39 rows=1085 width=0) \r\n(actual time=0.104..0.104 rows=0 loops=1)\"\r\n\" -> Bitmap Index Scan on telegrams_recipient \r\n(cost=0.00..8.67 rows=536 width=0) (actual time=0.033..0.033 rows=1 \r\nloops=1)\"\r\n\" Index Cond: (recipient_id = 508933)\"\r\n\" -> Bitmap Index Scan on telegrams_user \r\n(cost=0.00..8.67 rows=549 width=0) (actual time=0.069..0.069 rows=4 \r\nloops=1)\"\r\n\" Index Cond: (user_id = 508933)\"\r\n\"Total runtime: 0.276 ms\"\r\n\r\n\r\nThe table contains several millions records and it's just be \r\nreindexed/analyzed.\r\n\r\nAre there any parameters I can tune so that pgsql itself chooses the \r\nbest plan? :)\r\n\r\n# - Memory -\r\nshared_buffers = 256MB\r\ntemp_buffers = 32MB\r\nwork_mem = 4MB\r\nmaintenance_work_mem = 32MB\r\n\r\n# - Planner Cost Constants -\r\nseq_page_cost = 1.0\r\nrandom_page_cost = 2.5\r\ncpu_tuple_cost = 0.001\r\ncpu_index_tuple_cost = 0.0005\r\ncpu_operator_cost = 0.00025\r\neffective_cache_size = 20GB\r\n\r\n# - Genetic Query Optimizer -\r\ngeqo = on\r\n\r\nThanks,\r\nCorin\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n",
"msg_date": "Sun, 11 Apr 2010 14:22:33 -0700",
"msg_from": "Luke Lonergan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planer chooses very bad plan"
},
{
"msg_contents": "On 11.04.2010 23:22, Luke Lonergan wrote:\n> Try random_page_cost=100 \n> \nIncreasing random_page_cost to 100 (it was 2.5 before) did not help, \nbut lowering it to <=1.5 helped.\n\nAs almost the whole dataset fits into memory, I think I'll change it \npermanently to 1.5 (seq_page_cost is 1.0).\n\nI'll also increase the default_statistics_target to 1000, because this also \nseems to help a lot.\n\nThanks,\nCorin\n\n",
"msg_date": "Mon, 12 Apr 2010 00:44:01 +0200",
"msg_from": "Corin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planer chooses very bad plan"
}
] |
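A small sketch of how the cost settings discussed here can be trialled per session before being made permanent in postgresql.conf (the values and the query are the ones from this thread):

    SET seq_page_cost = 1.0;
    SET random_page_cost = 1.5;

    EXPLAIN ANALYZE
    SELECT * FROM telegrams
    WHERE (recipient_id = 508933 AND recipient_deleted = FALSE)
       OR (user_id = 508933 AND user_deleted = FALSE)
    ORDER BY id DESC LIMIT 10 OFFSET 0;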
[
{
"msg_contents": "Hi,\n\n My database server got sluggish suddenly, so I checked the vmstat as below:\n\n \n\n2010-04-07 04:03:15 procs memory swap io system cpu\n\n2010-04-07 04:03:15 r b swpd free buff cache si so bi bo in cs us sy id wa\n\n2010-04-07 14:04:27 3 0 0 2361272 272684 3096148 0 0 3 1445 973 14230 7 8 84 0\n\n2010-04-07 14:05:27 2 0 0 2361092 272684 3096220 0 0 3 1804 1029 31852 8 10 81 1\n\n2010-04-07 14:06:27 1 0 0 2362236 272684 3096564 0 0 3 1865 1135 19689 9 9 81 0\n\n2010-04-07 14:07:27 1 0 0 2348400 272720 3101836 0 0 3 1582 1182 149461 15 17 67 0\n\n2010-04-07 14:08:27 3 0 0 2392028 272840 3107600 0 0 3 3093 1275 203196 24 23 53 1\n\n2010-04-07 14:09:27 3 1 0 2386224 272916 3107960 0 0 3 2486 1331 193299 26 22 52 0\n\n2010-04-07 14:10:27 34 0 0 2332320 272980 3107944 0 0 3 1692 1082 214309 24 22 54 0\n\n2010-04-07 14:11:27 1 0 0 2407432 273028 3108092 0 0 6 2770 1540 76643 29 13 57 1\n\n2010-04-07 14:12:27 9 0 0 2358968 273104 3108388 0 0 7 2639 1466 10603 22 6 72 1\n\n \n\n My postgres version: 8.1.3; \n\nMy OS version: Linux version 2.4.21-47.Elsmp (Red Hat Linux 3.2.3-54)\n\nMy CPU:\n\nprocessor : 7\n\nvendor_id : GenuineIntel\n\ncpu family : 15\n\nmodel : 6\n\nmodel name : Intel(R) Xeon(TM) CPU 3.40GHz\n\nstepping : 8\n\ncpu MHz : 3400.262\n\ncache size : 1024 KB\n\nphysical id : 1\n\n \n\n \n\nI don't know what caused the “context-switching” storm. \n\nHow should I investigate the real reason?\n\nCould you please give me some advice? \n\n \n\nBest regards,\n\nRay Huang",
"msg_date": "Mon, 12 Apr 2010 14:35:32 +0800",
"msg_from": "=?gb2312?B?UkS7xtPAzsA=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?gb2312?B?SG93IHRvIGRpYWdub3NlIGEgobBjb250ZXh0LXN3aXRjaGluZyChsQ==?=\n\t=?gb2312?B?IHN0b3JtIHByb2JsZW0gPw==?="
},
{
"msg_contents": "2010/4/12 RD黄永卫 <[email protected]>:\n> I donnt know what make the \"context-switching\" storm ?\n>\n> How should I investigate the real reason ?\n>\n> Could you please give me some advice ?\n\nIt might be because of cascading locks so try to monitor them when it happens.\n\nYou may find this query useful:\n\nSELECT\n granted,\n count(1) AS locks,\n pid,\n now() - xact_start AS xact_age,\n now() - query_start AS query_age,\n current_query\nFROM\n pg_locks AS l\n LEFT JOIN pg_stat_activity AS a ON\n pid = procpid\nGROUP BY 1, 3, 4, 5, 6\nORDER BY 1 DESC, 2 DESC\n-- ORDER BY 4 DESC\nLIMIT 100;\n\n\n-- \nSergey Konoplev\n\nBlog: http://gray-hemp.blogspot.com /\nLinkedin: http://ru.linkedin.com/in/grayhemp /\nJID/GTalk: [email protected] / Skype: gray-hemp / ICQ: 29353802\n",
"msg_date": "Tue, 13 Apr 2010 01:27:15 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?windows-1252?Q?Re=3A_=5BPERFORM=5D_How_to_diagnose_a_=93context=2Dswitching?=\n\t=?windows-1252?Q?_=94_storm_problem_=3F?="
},
{
"msg_contents": "RD黄永卫<[email protected]> wrote: \n \n> My database server get sluggish suddenly\n \n> [vmstat output showing over 200,000 context switches per second]\n \n> My postgres version: 8.1.3;\n \nUpgrading should help. Later releases are less vulnerable to this.\n \n> Could you please give me some advice ?\n \nA connection pooler can often help. (e.g., pgpool or pgbouncer)\n \n-Kevin\n",
"msg_date": "Mon, 12 Apr 2010 16:39:48 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?ISO-8859-15?Q?Re:=20=20How=20to=20diagnose=20a=20*con?=\n\t=?ISO-8859-15?Q?text-switching=20*=20storm=20problem=20=3F?="
}
] |
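Sergey's monitoring query above relies on pg_stat_activity columns (such as xact_start) that only appeared in releases newer than the poster's 8.1.3, so a stripped-down check along these lines — a sketch, not a drop-in — can still show blocked backends during a storm:

    SELECT l.pid, l.relation::regclass AS relation, l.mode, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;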
[
{
"msg_contents": "Andrey,\n\n - Another idea for your problem is the one Kevin gave in the message following:\n\n##########################################################################################################################\n\n> > SELECT * FROM t_route\n> > WHERE t_route.route_type_fk = 1\n> > limit 4;\n> \n \nThis one scanned the t_route table until it found four rows that\nmatched. It apparently didn't need to look at very many rows to find\nthe four matches, so it was fast.\n \n\n> > SELECT * FROM t_route\n> > WHERE t_route.route_type_fk =\n> > (SELECT id FROM t_route_type WHERE type = 2)\n> > limit 4;\n> \n \nThis one came up with an id for a route type that didn't have any\nmatches in the t_route table, so it had to scan the entire t_route\ntable. (Based on your next query, the subquery probably returned\nNULL, so there might be room for some optimization here.) If you had\nchosen a route type with at least four matches near the start of the\nroute table, this query would have completed quickly.\n \n\n> > SELECT * FROM t_route, t_route_type\n> > WHERE t_route.route_type_fk = t_route_type.id\n> > AND type = 2\n> > limit 4;\n> \n \nSince it didn't find any t_route_type row which matched, it knew\nthere couldn't be any output from the JOIN, so it skipped the scan of\nthe t_route table entirely.\n \n-Kevin\n##############################################################################################################\n\n\nRegards....\n\n-------- Original Message --------\nFrom: \tnorn <[email protected]>\nSubject: \tRe: [PERFORM] significant slow down with various LIMIT\nDate: \tThu, 8 Apr 2010 18:13:33 -0700 (PDT)\nTo: \[email protected]\n\nKevin, thanks for your attention!\nI've read SlowQueryQuestions, but anyway can't find bottleneck...\n\nHere requested information:\nOS: Ubuntu 9.10 64bit, Postgresql 8.4.2 with Postgis\nHardware: AMD Phenom(tm) II X4 945, 8GB RAM, 2 SATA 750GB (pg db\ninstalled in software RAID 0)\nPlease also note that this hardware isn't dedicated DB server, but\nalso serve as web server and file server.\n\nI have about 3 million rows in core_object, 1.5 million in\nplugin_plugin_addr and 1.5 million in plugins_guide_address.\nWhen there were 300 000+ objects queries works perfectly, but as db\nenlarge things go worse...\n\n# select version();\nPostgreSQL 8.4.2 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real\n(Ubuntu 4.4.1-4ubuntu8) 4.4.1, 64-bit\n---postgresql.conf---\ndata_directory = '/mnt/fast/postgresql/8.4/main'\nhba_file = '/etc/postgresql/8.4/main/pg_hba.conf'\nident_file = '/etc/postgresql/8.4/main/pg_ident.conf'\nexternal_pid_file = '/var/run/postgresql/8.4-main.pid'\nlisten_addresses = 'localhost'\nport = 5432\nmax_connections = 250\nunix_socket_directory = '/var/run/postgresql'\nssl = true\nshared_buffers = 1024MB\ntemp_buffers = 16MB\nwork_mem = 128MB\nmaintenance_work_mem = 512MB\nfsync = off\nwal_buffers = 4MB\ncheckpoint_segments = 16\neffective_cache_size = 1536MB\nlog_min_duration_statement = 8000\nlog_line_prefix = '%t '\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\nstandard_conforming_strings = on\nescape_string_warning = off\nconstraint_exclusion = on\ncheckpoint_completion_target = 0.9\n---end postgresql.conf---\n\nI hope this help!\nAny ideas are appreciated!\n\n\nOn Apr 9, 12:44 am, [email protected] (\"Kevin Grittner\")\nwrote:\n>\n> Could you show us the output from \"select version();\", describe your\n> hardware and OS, and show us the contents of your postgresql.conf\n> file (with all comments removed)? We can then give more concrete\n> advice than is possible with the information provided so far.\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 12 Apr 2010 07:23:20 -0300",
"msg_from": "Helio Campos Mello de Andrade <[email protected]>",
"msg_from_op": true,
"msg_subject": "significant slow down with various LIMIT"
}
] |
[
{
"msg_contents": "[rearranging to put related information together]\n \nnorn \n \nSince the LIMIT 3 and LIMIT 4 queries generated exactly the same\nplan, the increased time for LIMIT 4 suggests that there are 3\nmatching rows which are near the end of the index it is scanning, but\nthe fourth one is much farther in.\n \nSince what you're showing suggests that the active portion of your\ndata is heavily cached, you might benefit from decreasing\nrandom_page_cost, and possibly also seq_page_cost.\n \n> 8GB RAM\n \n> effective_cache_size = 1536MB\n \n> Please also note that this hardware isn't dedicated DB server, but\n> also serve as web server and file server.\n \nEven with those other uses, you're likely to actually be using 6 GB\nor 7 GB for cache. I'd set effective_cache_size in that range.\n \n> max_connections = 250\n> work_mem = 128MB\n \nWhile probably not related to this problem, that's a dangerous\ncombination. What if all 250 connections are active with a query\nwhich uses work_mem memory? A single connection can actually be\nusing several work_mem allocations at once.\n \n> 2 SATA 750GB (pg db installed in software RAID 0)\n \nYou do realize that if either drive dies you lose all your data on\nthat pair of drives, right? I hope the value of the data and well\ntested backup procedures keeps the loss to something which is\nacceptable.\n \n> I have about 3 million rows in core_object, 1.5 million in\n> plugin_plugin_addr and 1.5 million in plugins_guide_address.\n> When there were 300 000+ objects queries works perfectly, but as db\n> enlarge things go worse...\n \nWith a relational database, it's not unusual for the most efficient\nplan to depend on the quantity of data in the tables. It is\nimportant that your statistics are kept up-to-date so that plans can\nadapt to the changing table sizes or data distributions. The\neffective_cache_size and cost parameters are also used to calculate\nthe costs of various plans, so adjusting those may help the optimizer\nmake good choices.\n \n-Kevin\n\n",
"msg_date": "Mon, 12 Apr 2010 07:09:08 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "Kevin,\nI appreciate your help very much!\n\n> Since the LIMIT 3 and LIMIT 4 queries generated exactly the same\n> plan, the increased time for LIMIT 4 suggests that there are 3\n> matching rows which are near the end of the index it is scanning, but\n> the fourth one is much farther in.\nYes, you are right, I checked id of the rows and found that problem\noccurred when one of id is 1377077 and next one is 132604.\nThis table was modified with several new rows and the problem now\nhappens between limit 4 and 5, this is another evidence of your\nrightness!\n\nFollowed by your advices I set the following:\neffective_cache_size=6144MB\nrandom_page_cost=0.25\nseq_page_cost = 0.25\nmax_connections = 50 # thanks for pointing! I don't really need so\nmuch\n\n> > 2 SATA 750GB (pg db installed in software RAID 0)\n>\n> You do realize that if either drive dies you lose all your data on\n> that pair of drives, right? I hope the value of the data and well\n> tested backup procedures keeps the loss to something which is\n> acceptable.\nThanks for attention! I have some regular backup procedures already,\nso there are no extra risk related to drive failure...\n\nI restarted Postgresql with new settings and got no performance\nimprovements in this particular query...\nDo you have ideas how much random_page_cost and seq_page_cost should\nbe decreased?\nAlso I wish to notice, that I made REINDEX DATABASE while tried to\nsolve the problem by myself.. this doesn't help...\n\n",
"msg_date": "Mon, 12 Apr 2010 06:32:02 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "norn <[email protected]> wrote:\n \n> I restarted Postgresql with new settings and got no performance\n> improvements in this particular query...\n \nThe cost settings help the optimizer make good decisions about plan\nchoice. I guess I don't have much reason to believe, at this point,\nthat there is a better plan for it to choose for this query. Do you\nthink you see one? What would that be? (We might be able to force\nthat plan and find out if you're right, which can be a valuable\ndiagnostic step, even if the way it gets forced isn't a\nproduction-quality solution.)\n \nAre you able to share the table descriptions? (That might help us\nsuggest an index or some such which might help.)\n \n> Do you have ideas how much random_page_cost and seq_page_cost\n> should be decreased?\n \nIt really depends on how much of your active data set is cached. If\nit is effectively fully cached, you might want to go to 0.01 for\nboth (or even lower). Many of our databases perform best with\nseq_page_cost = 1 and random_page_cost = 2. With some, either of\nthose \"extremes\" causes some queries to optimize poorly, and we've\nhad luck with 0.3 and 0.5. This is one worth testing with your\nworkload, because you can make some queries faster at the expense of\nothers; sometimes it comes down to which needs better response time\nto keep your users happy.\n \n-Kevin\n",
"msg_date": "Mon, 12 Apr 2010 16:28:36 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "On Apr 13, 5:28 am, [email protected] (\"Kevin Grittner\")\nwrote:\n> The cost settings help the optimizer make good decisions about plan\n> choice. I guess I don't have much reason to believe, at this point,\n> that there is a better plan for it to choose for this query. Do you\n> think you see one? What would that be? (We might be able to force\n> that plan and find out if you're right, which can be a valuable\n> diagnostic step, even if the way it gets forced isn't a\n> production-quality solution.)\nI have no deep knowledge of Postgresql, so I've no idea which plan is\nthe best, but I am wondering why there are so big gap between two\nlimits and how to avoid this...\n\n> Are you able to share the table descriptions? (That might help us\n> suggest an index or some such which might help.)\nsure, here it is\n\n# \\d core_object\n Table \"public.core_object\"\n Column | Type |\nModifiers\n-----------+---------\n+----------------------------------------------------------\n id | integer | not null default\nnextval('core_object_id_seq'::regclass)\n typeid_id | integer | not\nnull\nIndexes:\n \"core_object_pkey\" PRIMARY KEY, btree\n(id)\n \"core_object_pkey_desc\" btree (id\nDESC)\n \"core_object_typeid_id\" btree\n(typeid_id)\nForeign-key\nconstraints:\n \"core_object_typeid_id_fkey\" FOREIGN KEY (typeid_id) REFERENCES\ncore_obj_typeset(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n TABLE \"plugins_plugin_addr\" CONSTRAINT\n\"plugins_plugin_addr_oid_id_fkey\" FOREIGN KEY (oid_id) REFERENCES\ncore_object(id) DEFERRABLE INITIALLY\nDEFERRED\n...and many others, so I skipped as irrelevant....\n\n# \\d plugins_plugin_addr\n Table \"public.plugins_plugin_addr\"\n Column | Type | Modifiers\n---------------+---------\n+------------------------------------------------------------------\n id | integer | not null default\nnextval('plugins_plugin_addr_id_seq'::regclass)\n oid_id | integer | not null\n sub_attrib_id | integer | not null\n address_id | integer | not null\nIndexes:\n \"plugins_plugin_addr_pkey\" PRIMARY KEY, btree (id)\n \"plugins_plugin_addr_sub_attrib_id_key\" UNIQUE, btree\n(sub_attrib_id)\n \"plugins_plugin_addr_address_id\" btree (address_id)\n \"plugins_plugin_addr_oid_id\" btree (oid_id)\nForeign-key constraints:\n \"plugins_plugin_addr_address_id_fkey\" FOREIGN KEY (address_id)\nREFERENCES plugins_guide_address(id) DEFERRABLE INITIALLY DEFERRED\n \"plugins_plugin_addr_oid_id_fkey\" FOREIGN KEY (oid_id) REFERENCES\ncore_object(id) DEFERRABLE INITIALLY DEFERRED\n \"plugins_plugin_addr_sub_attrib_id_fkey\" FOREIGN KEY\n(sub_attrib_id) REFERENCES plugins_sub_attrib(id) DEFERRABLE INITIALLY\nDEFERRED\n\n# \\d plugins_guide_address\n Table\n\"public.plugins_guide_address\"\n Column | Type |\nModifiers\n--------------+------------------------\n+--------------------------------------------------------------------\n id | integer | not null default\nnextval('plugins_guide_address_id_seq'::regclass)\n country_id | integer |\n region_id | integer |\n city_id | integer |\n zip_id | integer |\n street_id | integer |\n house | character varying(20) |\n district_id | integer |\n code | character varying(23) |\n significance | smallint |\n alias_fr | character varying(300) |\n alias_ru | character varying(300) |\n alias_en | character varying(300) |\n alias_de | character varying(300) |\n alias_it | character varying(300) |\n alias_len | smallint |\nIndexes:\n \"plugins_guide_address_pkey\" PRIMARY KEY, btree (id)\n \"plugins_guide_address_uniq\" UNIQUE, btree (country_id, 
region_id,\ndistrict_id, city_id, street_id, house)\n \"plugins_guide_address_alias_ru\" btree (alias_ru)\n \"plugins_guide_address_city_id\" btree (city_id)\n \"plugins_guide_address_code\" btree (code)\n \"plugins_guide_address_country_id\" btree (country_id)\n \"plugins_guide_address_district_id\" btree (district_id)\n \"plugins_guide_address_house\" btree (house)\n \"plugins_guide_address_house_upper\" btree (upper(house::text))\n \"plugins_guide_address_region_id\" btree (region_id)\n \"plugins_guide_address_significance\" btree (significance)\n \"plugins_guide_address_street_id\" btree (street_id)\n \"plugins_guide_address_zip_id\" btree (zip_id)\nForeign-key constraints:\n \"plugins_guide_address_city_id_fkey\" FOREIGN KEY (city_id)\nREFERENCES plugins_guide_city(id) DEFERRABLE INITIALLY DEFERRED\n \"plugins_guide_address_country_id_fkey\" FOREIGN KEY (country_id)\nREFERENCES plugins_guide_country(id) DEFERRABLE INITIALLY DEFERRED\n \"plugins_guide_address_district_id_fkey\" FOREIGN KEY (district_id)\nREFERENCES plugins_guide_district(id) DEFERRABLE INITIALLY DEFERRED\n \"plugins_guide_address_region_id_fkey\" FOREIGN KEY (region_id)\nREFERENCES plugins_guide_region(id) DEFERRABLE INITIALLY DEFERRED\n \"plugins_guide_address_street_id_fkey\" FOREIGN KEY (street_id)\nREFERENCES plugins_guide_street(id) DEFERRABLE INITIALLY DEFERRED\n \"plugins_guide_address_zip_id_fkey\" FOREIGN KEY (zip_id)\nREFERENCES plugins_guide_zip(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n TABLE \"plugins_guide_ziphelper\" CONSTRAINT\n\"plugins_guide_ziphelper_address_id_fkey\" FOREIGN KEY (address_id)\nREFERENCES plugins_guide_address(id) DEFERRABLE INITIALLY DEFERRED\n TABLE \"plugins_plugin_addr\" CONSTRAINT\n\"plugins_plugin_addr_address_id_fkey\" FOREIGN KEY (address_id)\nREFERENCES plugins_guide_address(id) DEFERRABLE INITIALLY DEFERRED\n\n------------end---------------\n",
"msg_date": "Mon, 12 Apr 2010 23:07:19 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "norn <[email protected]> wrote:\n \n> I am wondering why there are so big gap between two limits and how\n> to avoid this...\n \nI think we've already established that it is because of the\npercentage of the table which must be scanned to get to the desired\nnumber of rows. The problem is exacerbated by the fact that it's a\n\"backward\" scan on the index, which is slower than a forward scan --\nmainly because disks spin in one direction, and the spacing of the\nsectors is optimized for forward scans.\n \nThere are a couple things to try which will give a more complete\npicture of what might work to make the run time more predictable. \nPlease try these, and run EXPLAIN ANALYZE of your problem query each\nway.\n \n(1) Try it without the ORDER BY clause and the LIMIT.\n \n(2) Temporarily take that top index out of consideration. (Don't\nworry, it'll come back when you issue the ROLLBACK -- just don't\nforget the BEGIN statement.)\n \nBEGIN;\nDROP INDEX plugins_plugin_addr_oid_id;\nexplain analyze <your query>\nROLLBACK;\n \n(3) Try it like this (untested, so you may need to fix it up):\n \nexplain analyze\nSELECT core_object.id\n from (SELECT id, city_id FROM \"plugins_guide_address\")\n \"plugins_guide_address\"\n JOIN \"plugins_plugin_addr\"\n ON (\"plugins_plugin_addr\".\"address_id\"\n = \"plugins_guide_address\".\"id\")\n JOIN \"core_object\"\n ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n WHERE \"plugins_guide_address\".\"city_id\" = 4535\n ORDER BY \"core_object\".\"id\" DESC\n LIMIT 4 -- or whatever it normally takes to cause the problem\n;\n \n-Kevin\n",
"msg_date": "Tue, 13 Apr 2010 12:24:18 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "I'm also wondering if a re-clustering of the table would work based on\nthe index that's used.\n\nsuch that:\n\nCLUSTER core_object USING plugins_plugin_addr_oid_id;\n\nand see if that makes any change in the differences that your seeing.\n\nOn 04/13/2010 02:24 PM, Kevin Grittner wrote:\n> norn <[email protected]> wrote:\n> \n> \n>> I am wondering why there are so big gap between two limits and how\n>> to avoid this...\n>> \n> \n> I think we've already established that it is because of the\n> percentage of the table which must be scanned to get to the desired\n> number of rows. The problem is exacerbated by the fact that it's a\n> \"backward\" scan on the index, which is slower than a forward scan --\n> mainly because disks spin in one direction, and the spacing of the\n> sectors is optimized for forward scans.\n> \n> There are a couple things to try which will give a more complete\n> picture of what might work to make the run time more predictable. \n> Please try these, and run EXPLAIN ANALYZE of your problem query each\n> way.\n> \n> (1) Try it without the ORDER BY clause and the LIMIT.\n> \n> (2) Temporarily take that top index out of consideration. (Don't\n> worry, it'll come back when you issue the ROLLBACK -- just don't\n> forget the BEGIN statement.)\n> \n> BEGIN;\n> DROP INDEX plugins_plugin_addr_oid_id;\n> explain analyze <your query>\n> ROLLBACK;\n> \n> (3) Try it like this (untested, so you may need to fix it up):\n> \n> explain analyze\n> SELECT core_object.id\n> from (SELECT id, city_id FROM \"plugins_guide_address\")\n> \"plugins_guide_address\"\n> JOIN \"plugins_plugin_addr\"\n> ON (\"plugins_plugin_addr\".\"address_id\"\n> = \"plugins_guide_address\".\"id\")\n> JOIN \"core_object\"\n> ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n> WHERE \"plugins_guide_address\".\"city_id\" = 4535\n> ORDER BY \"core_object\".\"id\" DESC\n> LIMIT 4 -- or whatever it normally takes to cause the problem\n> ;\n> \n> -Kevin\n>\n> \n\n",
"msg_date": "Tue, 13 Apr 2010 14:59:55 -0300",
"msg_from": "Chris Bowlby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> (3) Try it like this (untested, so you may need to fix it up):\n> \n> explain analyze\n> SELECT core_object.id\n> from (SELECT id, city_id FROM \"plugins_guide_address\")\n> \"plugins_guide_address\"\n> JOIN \"plugins_plugin_addr\"\n> ON (\"plugins_plugin_addr\".\"address_id\"\n> = \"plugins_guide_address\".\"id\")\n> JOIN \"core_object\"\n> ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n> WHERE \"plugins_guide_address\".\"city_id\" = 4535\n> ORDER BY \"core_object\".\"id\" DESC\n> LIMIT 4 -- or whatever it normally takes to cause the problem\n> ;\n \nHmph. I see I didn't take that quite where I intended.\nForget the above and try this:\n \nexplain analyze\nSELECT core_object.id\n from (SELECT id, city_id FROM \"plugins_guide_address\"\n WHERE \"city_id\" = 4535) \"plugins_guide_address\"\n JOIN \"plugins_plugin_addr\"\n ON (\"plugins_plugin_addr\".\"address_id\"\n = \"plugins_guide_address\".\"id\")\n JOIN \"core_object\"\n ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n ORDER BY \"core_object\".\"id\" DESC\n LIMIT 4 -- or whatever it normally takes to cause the problem\n;\n \n-Kevin\n",
"msg_date": "Wed, 14 Apr 2010 09:31:33 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "Kevin,\nthanks for your time!\nHere the requested tests.\n\n> (1) Try it without the ORDER BY clause and the LIMIT.\nW/o the 'order by' it works instantly (about 1ms!)\n Limit (cost=0.00..3.59 rows=5 width=4) (actual time=0.127..0.229\nrows=5 loops=1)\n -> Nested Loop (cost=0.00..277863.53 rows=386544 width=4) (actual\ntime=0.125..0.224 rows=5 loops=1)\n -> Nested Loop (cost=0.00..91136.78 rows=386544 width=4)\n(actual time=0.106..0.154 rows=5 loops=1)\n -> Index Scan using plugins_guide_address_city_id on\nplugins_guide_address (cost=0.00..41109.07 rows=27673 width=4)\n(actual time=0.068..0.080 rows=5 loops=1)\n Index Cond: (city_id = 4535)\n -> Index Scan using plugins_plugin_addr_address_id on\nplugins_plugin_addr (cost=0.00..1.63 rows=14 width=8) (actual\ntime=0.011..0.012 rows=1 loops=5)\n Index Cond: (plugins_plugin_addr.address_id =\nplugins_guide_address.id)\n -> Index Scan using core_object_pkey on core_object\n(cost=0.00..0.47 rows=1 width=4) (actual time=0.011..0.012 rows=1\nloops=5)\n Index Cond: (core_object.id =\nplugins_plugin_addr.oid_id)\n Total runtime: 0.328 ms\n(10 rows)\n\n\nW/o the limit it takes 1.4 seconds, which is anyway better than...\n Sort (cost=199651.74..200618.10 rows=386544 width=4) (actual\ntime=1153.167..1157.841 rows=43898 loops=1)\n Sort Key: core_object.id\n Sort Method: quicksort Memory: 3594kB\n -> Hash Join (cost=81234.35..163779.93 rows=386544 width=4)\n(actual time=122.050..1128.909 rows=43898 loops=1)\n Hash Cond: (core_object.id = plugins_plugin_addr.oid_id)\n -> Seq Scan on core_object (cost=0.00..46467.07\nrows=3221307 width=4) (actual time=0.011..378.677 rows=3221349\nloops=1)\n -> Hash (cost=76402.55..76402.55 rows=386544 width=4)\n(actual time=121.170..121.170 rows=43898 loops=1)\n -> Nested Loop (cost=368.81..76402.55 rows=386544\nwidth=4) (actual time=8.645..104.842 rows=43898 loops=1)\n -> Bitmap Heap Scan on plugins_guide_address\n(cost=368.81..26374.83 rows=27673 width=4) (actual time=8.599..15.590\nrows=26583 loops=1)\n Recheck Cond: (city_id = 4535)\n -> Bitmap Index Scan on\nplugins_guide_address_city_id (cost=0.00..361.89 rows=27673 width=0)\n(actual time=7.856..7.856 rows=26583 loops=1)\n Index Cond: (city_id = 4535)\n -> Index Scan using\nplugins_plugin_addr_address_id on plugins_plugin_addr\n(cost=0.00..1.63 rows=14 width=8) (actual time=0.002..0.003 rows=2\nloops=26583)\n Index Cond: (plugins_plugin_addr.address_id\n= plugins_guide_address.id)\n Total runtime: 1162.193 ms\n(15 rows)\n\n>(2) Temporarily take that top index out of consideration\nIt works nice! 
Query takes about 0.6 seconds as expected!\n\nexplain analyze SELECT core_object.id from \"core_object\" INNER JOIN\n\"plugins_plugin_addr\" ON (\"core_object\".\"id\" =\n\"plugins_plugin_addr\".\"oid_id\") INNER JOIN \"plugins_guide_address\" ON\n(\"plugins_plugin_addr\".\"address_id\" = \"plugins_guide_address\".\"id\")\nWHERE \"plugins_guide_address\".\"city_id\" = 4535 ORDER BY\n\"core_object\".\"id\" DESC;\n\n Limit (cost=112274.36..112275.66 rows=5 width=4) (actual\ntime=200.758..637.039 rows=5 loops=1)\n -> Merge Join (cost=112274.36..213042.22 rows=386544 width=4)\n(actual time=200.754..637.035 rows=5 loops=1)\n Merge Cond: (core_object.id = plugins_plugin_addr.oid_id)\n -> Index Scan Backward using core_object_pkey on\ncore_object (cost=0.00..86916.44 rows=3221307 width=4) (actual\ntime=0.115..302.512 rows=1374693 loops=1)\n -> Sort (cost=112274.36..113240.72 rows=386544 width=4)\n(actual time=154.635..154.635 rows=5 loops=1)\n Sort Key: plugins_plugin_addr.oid_id\n Sort Method: quicksort Memory: 3594kB\n -> Nested Loop (cost=368.81..76402.55 rows=386544\nwidth=4) (actual time=9.522..126.206 rows=43898 loops=1)\n -> Bitmap Heap Scan on plugins_guide_address\n(cost=368.81..26374.83 rows=27673 width=4) (actual time=9.367..21.311\nrows=26583 loops=1)\n Recheck Cond: (city_id = 4535)\n -> Bitmap Index Scan on\nplugins_guide_address_city_id (cost=0.00..361.89 rows=27673 width=0)\n(actual time=8.577..8.577 rows=26583 loops=1)\n Index Cond: (city_id = 4535)\n -> Index Scan using\nplugins_plugin_addr_address_id on plugins_plugin_addr\n(cost=0.00..1.63 rows=14 width=8) (actual time=0.002..0.003 rows=2\nloops=26583)\n Index Cond: (plugins_plugin_addr.address_id\n= plugins_guide_address.id)\n Total runtime: 637.620 ms\n(15 rows)\n\n\n> (3) Try it like this (untested, so you may need to fix it up):\nexplain analyze\nSELECT core_object.id\n from (SELECT id, city_id FROM \"plugins_guide_address\"\n WHERE \"city_id\" = 4535) \"plugins_guide_address\"\n JOIN \"plugins_plugin_addr\"\n ON (\"plugins_plugin_addr\".\"address_id\"\n = \"plugins_guide_address\".\"id\")\n JOIN \"core_object\"\n ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n ORDER BY \"core_object\".\"id\" DESC\n LIMIT 5;\n Limit (cost=0.00..11.51 rows=5 width=4) (actual\ntime=494.600..4737.867 rows=5 loops=1)\n -> Merge Join (cost=0.00..889724.50 rows=386544 width=4) (actual\ntime=494.599..4737.862 rows=5 loops=1)\n Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n -> Nested Loop (cost=0.00..789923.00 rows=386544 width=4)\n(actual time=450.359..4269.608 rows=5 loops=1)\n -> Index Scan Backward using\nplugins_plugin_addr_oid_id on plugins_plugin_addr\n(cost=0.00..45740.51 rows=1751340 width=8) (actual time=0.038..321.285\nrows=1374690 loops=1)\n -> Index Scan using plugins_guide_address_pkey on\nplugins_guide_address (cost=0.00..0.41 rows=1 width=4) (actual\ntime=0.003..0.003 rows=0 loops=1374690)\n Index Cond: (public.plugins_guide_address.id =\nplugins_plugin_addr.address_id)\n Filter: (public.plugins_guide_address.city_id =\n4535)\n -> Index Scan Backward using core_object_pkey on\ncore_object (cost=0.00..86916.44 rows=3221307 width=4) (actual\ntime=0.008..288.625 rows=1374693 loops=1)\n Total runtime: 4737.964 ms\n(10 rows)\n\nSo, as we can see, dropping index may help, but why? What shall I do\nin my particular situation? Probably analyzing my tests help you\ngiving some recommendations, I hope so! 
:)\n\nThanks again for your time!\n\nOn Apr 14, 10:31 pm, [email protected] (\"Kevin Grittner\")\nwrote:\n> \"Kevin Grittner\" <[email protected]> wrote:\n> > (3) Try it like this (untested, so you may need to fix it up):\n>\n> > explain analyze\n> > SELECT core_object.id\n> > from (SELECT id, city_id FROM \"plugins_guide_address\")\n> > \"plugins_guide_address\"\n> > JOIN \"plugins_plugin_addr\"\n> > ON (\"plugins_plugin_addr\".\"address_id\"\n> > = \"plugins_guide_address\".\"id\")\n> > JOIN \"core_object\"\n> > ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n> > WHERE \"plugins_guide_address\".\"city_id\" = 4535\n> > ORDER BY \"core_object\".\"id\" DESC\n> > LIMIT 4 -- or whatever it normally takes to cause the problem\n> > ;\n>\n> Hmph. I see I didn't take that quite where I intended.\n> Forget the above and try this:\n>\n> explain analyze\n> SELECT core_object.id\n> from (SELECT id, city_id FROM \"plugins_guide_address\"\n> WHERE \"city_id\" = 4535) \"plugins_guide_address\"\n> JOIN \"plugins_plugin_addr\"\n> ON (\"plugins_plugin_addr\".\"address_id\"\n> = \"plugins_guide_address\".\"id\")\n> JOIN \"core_object\"\n> ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n> ORDER BY \"core_object\".\"id\" DESC\n> LIMIT 4 -- or whatever it normally takes to cause the problem\n> ;\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 15 Apr 2010 07:23:28 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "norn <[email protected]> wrote:\n \n>> (1) Try it without the ORDER BY clause and the LIMIT.\n> W/o the 'order by' it works instantly (about 1ms!)\n \n> W/o the limit it takes 1.4 seconds\n \n>>(2) Temporarily take that top index out of consideration\n> It works nice! Query takes about 0.6 seconds as expected!\n \n> So, as we can see, dropping index may help, but why? What shall I\n> do in my particular situation? Probably analyzing my tests help\n> you giving some recommendations, I hope so! :)\n \nThe combination of the ORDER BY DESC and the LIMIT causes it to\nthink it can get the right data most quickly by scanning backwards\non the index. It's wrong about that. With the information from the\nadditional plans, it seems that this bad estimate might be why it's\nnot recognizing the plan which is actually four orders of magnitude\nfaster:\n \nIndex Scan using plugins_guide_address_city_id\n on plugins_guide_address\n Index Cond: (city_id = 4535)\n estimated rows=27673\n actual rows=5\n \nTry this:\n \nALTER TABLE ALTER plugins_guide_address\n ALTER COLUMN city_id SET STATISTICS 1000;\nANALYZE plugins_guide_address;\n \nThen try your query.\n \nI have one more diagnostic query to test, if the above doesn't work:\n \nexplain analyze\nSELECT id FROM\n (\n SELECT core_object.id\n FROM \"core_object\"\n JOIN \"plugins_plugin_addr\"\n ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n JOIN \"plugins_guide_address\"\n ON (\"plugins_plugin_addr\".\"address_id\" =\n \"plugins_guide_address\".\"id\")\n WHERE \"plugins_guide_address\".\"city_id\" = 4535\n ) x\n ORDER BY id DESC\n LIMIT 4;\n \n-Kevin\n",
"msg_date": "Tue, 20 Apr 2010 16:57:11 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "> Try this:\n>\n> ALTER TABLE ALTER plugins_guide_address\n> ALTER COLUMN city_id SET STATISTICS 1000;\n> ANALYZE plugins_guide_address;\n>\n> Then try your query.\nNo luck... The same query time...\n\n> I have one more diagnostic query to test, if the above doesn't work:\n>\n> explain analyze\n> SELECT id FROM\n> (\n> SELECT core_object.id\n> FROM \"core_object\"\n> JOIN \"plugins_plugin_addr\"\n> ON (\"core_object\".\"id\" = \"plugins_plugin_addr\".\"oid_id\")\n> JOIN \"plugins_guide_address\"\n> ON (\"plugins_plugin_addr\".\"address_id\" =\n> \"plugins_guide_address\".\"id\")\n> WHERE \"plugins_guide_address\".\"city_id\" = 4535\n> ) x\n> ORDER BY id DESC\n> LIMIT 4;\n\nLimit (cost=0.00..8.29 rows=4 width=4) (actual time=0.284..1322.792\nrows=4 loops=1)\n -> Merge Join (cost=0.00..993770.68 rows=479473 width=4) (actual\ntime=0.281..1322.787 rows=4 loops=1)\n Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)\n -> Nested Loop (cost=0.00..887841.46 rows=479473 width=4)\n(actual time=0.194..1201.318 rows=4 loops=1)\n -> Index Scan Backward using\nplugins_plugin_addr_oid_id on plugins_plugin_addr\n(cost=0.00..51546.26 rows=1980627 width=8) (actual time=0.117..87.035\nrows=359525 loops=1)\n -> Index Scan using plugins_guide_address_pkey on\nplugins_guide_address (cost=0.00..0.41 rows=1 width=4) (actual\ntime=0.003..0.003 rows=0 loops=359525)\n Index Cond: (plugins_guide_address.id =\nplugins_plugin_addr.address_id)\n Filter: (plugins_guide_address.city_id = 4535)\n -> Index Scan Backward using core_object_pkey on\ncore_object (cost=0.00..91309.16 rows=3450658 width=4) (actual\ntime=0.079..73.071 rows=359525 loops=1)\n Total runtime: 1323.065 ms\n(10 rows)\n\n",
"msg_date": "Wed, 21 Apr 2010 01:24:08 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "I wrote:\n \n> ALTER TABLE ALTER plugins_guide_address\n> ALTER COLUMN city_id SET STATISTICS 1000;\n \nOne too many ALTERs in there. Should be:\n \nALTER TABLE plugins_guide_address\n ALTER COLUMN city_id SET STATISTICS 1000;\n \n-Kevin\n",
"msg_date": "Wed, 21 Apr 2010 08:52:55 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: significant slow down with various LIMIT"
},
{
"msg_contents": "On Apr 21, 9:52 pm, [email protected] (\"Kevin Grittner\")\nwrote:\n> I wrote:\n> > ALTER TABLE ALTER plugins_guide_address\n> > ALTER COLUMN city_id SET STATISTICS 1000;\n>\n> One too many ALTERs in there. Should be:\n>\n> ALTER TABLE plugins_guide_address\n> ALTER COLUMN city_id SET STATISTICS 1000;\n\n\nYeah, I noticed it and ran correctly.\n",
"msg_date": "Wed, 21 Apr 2010 08:40:42 -0700 (PDT)",
"msg_from": "norn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: significant slow down with various LIMIT"
}
]