[ { "msg_contents": "Jean-Michel Pourᅵ<[email protected]> wrote:\n \n> [no postgresql.conf changes except]\n> shared_buffer 24M.\n \nThat's part of your problem. (Well, that's understating it; we don't\nhave any real evidence that you have any performance problems *not*\nresulting from failure to do normal configuration.) If you download\nthe configurator tool I referenced in another email you can get a\nstart on fixing this. Or search the lists -- this stuff had been\ndiscussed many, many times. Or do a google search; I typed\n \npostgresql performance configuration\n \nand got many hits. If you spend even one hour closely reviewing any\nof the top few hits and testing a more optimal configuration for your\nhardware and workload, most of your performance problems would\nprobably go away.\n \nThen, if you *still* have any performance problems, people could help\nyou diagnose and fine-tune from a base which should be much closer.\n \nThe truth is, with a proper configuration your \"biggest problem\",\nwhich you've misdiagnosed as the result of casting, would probably go\naway. I will take another look at it now that you have the results of\nEXPLAIN ANALYZE posted, but I seriously doubt that it's going to do\nwell without tuning the configuration.\n \n-Kevin\n", "msg_date": "Wed, 26 Aug 2009 18:03:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\ta domain derived from int" }, { "msg_contents": "I wrote:\n \n> I will take another look at it now that you have the results of\n> EXPLAIN ANALYZE posted\n \nCould you run this?:\n \nset work_mem = '50MB';\nset effective_cache_size = '3GB';\nEXPLAIN ANALYZE <your query>\nbegin transaction;\ndrop index node_comment_statistics_node_comment_timestamp_idx;\nEXPLAIN ANALYZE <your query>\nrollback transaction;\n \nThe BEGIN TRANSACTION and ROLLBACK TRANSACTION will prevent the index\nfrom actually being dropped; it just won't be visible to your query\nduring that second run. I'm kinda curious what plan it chooses\nwithout it.\n \nSome configuration options can be dynamically overridden for a\nparticular connection. This is not a complete list of what you might\nwant to use in your postgresql.conf file, but it might turn up an\ninteresting plan for diagnostic purposes.\n \n-Kevin\n\n", "msg_date": "Wed, 26 Aug 2009 18:27:29 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\ta domain derived from int" }, { "msg_contents": "Le mercredi 26 août 2009 à 18:03 -0500, Kevin Grittner a écrit :\n> That's part of your problem. \n\nSorry, I wrote that too quickly.\n\nMy configuration is (Quad core, 8Gb):\nshared_buffers = 2GB (WAS 1GB)\ntemp_buffers = 128MB (modified after reading your message)\nwork_mem = 512MB (modified after reading your message)\n\nStill casting.\n\n> If you download\n> the configurator tool I referenced in another email you can get a\n> start on fixing this.\n\nI do that immediately. 
\nThen I get back to you.\n\nWhy not include a configurator in PostgreSQL core?\n\nKind regards,\nJean-Michel", "msg_date": "Thu, 27 Aug 2009 15:36:10 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a\n\tdomain derived from int" }, { "msg_contents": "Dear Kevin,\n\nThanks for help!\n\nCould you run this?:\n> \n> set work_mem = '50MB';\n> set effective_cache_size = '3GB';\n> EXPLAIN ANALYZE <your query>\n> begin transaction;\n> drop index node_comment_statistics_node_comment_timestamp_idx;\n> EXPLAIN ANALYZE <your query>\n> rollback transaction;\n\nset work_mem = '50MB';\nset effective_cache_size = '1GB';\n\nEXPLAIN ANALYSE\nSELECT ncs.last_comment_timestamp, IF (ncs.last_comment_uid != 0,\nu2.name, ncs.last_comment_name) AS last_comment_name,\nncs.last_comment_uid\nFROM node n\nINNER JOIN users u1 ON n.uid = u1.uid\nINNER JOIN term_node tn ON n.vid = tn.vid\nINNER JOIN node_comment_statistics ncs ON n.nid = ncs.nid\nINNER JOIN users u2 ON ncs.last_comment_uid=u2.uid\nWHERE n.status = 1 AND tn.tid = 3\nORDER BY ncs.last_comment_timestamp DESC LIMIT 1 OFFSET 0 ;\n\n\"Limit (cost=0.00..544.67 rows=1 width=17) (actual\ntime=455.234..455.234 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49565.19 rows=91 width=17) (actual\ntime=455.232..455.232 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49538.56 rows=91 width=21) (actual\ntime=455.232..455.232 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49512.17 rows=91 width=13)\n(actual time=455.232..455.232 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..27734.58 rows=67486\nwidth=17) (actual time=0.027..264.540 rows=67486 loops=1)\"\n\" -> Index Scan Backward using\nnode_comment_statistics_node_comment_timestamp_idx on\nnode_comment_statistics ncs (cost=0.00..3160.99 rows=67486 width=13)\n(actual time=0.014..40.618 rows=67486 loops=1)\"\n\" -> Index Scan using node_pkey on node n\n(cost=0.00..0.35 rows=1 width=12) (actual time=0.002..0.003 rows=1\nloops=67486)\"\n\" Index Cond: (n.nid =\n(ncs.nid)::integer)\"\n\" Filter: (n.status = 1)\"\n\" -> Index Scan using term_node_vid_idx on term_node\ntn (cost=0.00..0.31 rows=1 width=4) (actual time=0.002..0.002 rows=0\nloops=67486)\"\n\" Index Cond: ((tn.vid)::integer =\n(n.vid)::integer)\"\n\" Filter: ((tn.tid)::integer = 3)\"\n\" -> Index Scan using users_pkey on users u2\n(cost=0.00..0.28 rows=1 width=12) (never executed)\"\n\" Index Cond: (u2.uid = ncs.last_comment_uid)\"\n\" -> Index Scan using users_pkey on users u1 (cost=0.00..0.28\nrows=1 width=4) (never executed)\"\n\" Index Cond: (u1.uid = n.uid)\"\n\"Total runtime: 455.311 ms\"\n\n> begin transaction;\n> drop index node_comment_statistics_node_comment_timestamp_idx;\n> EXPLAIN ANALYZE <your query>\n> rollback transaction;\n\nbegin transaction;\ndrop index node_comment_statistics_node_comment_timestamp_idx;\n\nEXPLAIN ANALYSE\nSELECT ncs.last_comment_timestamp, IF (ncs.last_comment_uid != 0, u2.name, ncs.last_comment_name) AS last_comment_name, ncs.last_comment_uid\nFROM node n\nINNER JOIN users u1 ON n.uid = u1.uid\nINNER JOIN term_node tn ON n.vid = tn.vid\nINNER JOIN node_comment_statistics ncs ON n.nid = ncs.nid\nINNER JOIN users u2 ON ncs.last_comment_uid=u2.uid\nWHERE n.status = 1 AND tn.tid = 3\nORDER BY ncs.last_comment_timestamp DESC LIMIT 1 OFFSET 0 ;\n\nrollback transaction;\n\nDoes not show any result because of ROLLBACK;\nThe query executes in 89 ms.\n\n> Some configuration options can be dynamically 
overridden for a\n> particular connection. This is not a complete list of what you might\n> want to use in your postgresql.conf file, but it might turn up an\n> interesting plan for diagnostic purposes.\n\nI am turning to configurator, stay tuned. \nAgain,thanks for your help.\n\nBye, Jean-Michel", "msg_date": "Thu, 27 Aug 2009 15:52:33 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a\n\tdomain derived from int" }, { "msg_contents": "Jean-Michel Pourᅵ<[email protected]> wrote:\n> Still casting.\n \nFor about the tenth time on the topic -- YOUR PROBLEM HAS NOTHING\nWHATSOEVER TO DO WITH CASTING! Let that go so you can look for the\nreal problem.\n \nJust as an example, look at this closely:\n \ntest=# create table t2 (c1 int not null primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"t2_pkey\" for table \"t2\"\nCREATE TABLE\ntest=# insert into t2 select * from generate_series(1,10000);\nINSERT 0 10000\ntest=# vacuum analyze;\nVACUUM\ntest=# explain analyze select count(*) from t1 where c1 between 200\nand 400;\n QUERY PLAN\n----------------------------------------------------------------------\n-------------------------------------------------\n Aggregate (cost=12.75..12.76 rows=1 width=0) (actual\ntime=0.764..0.765 rows=1 loops=1)\n -> Index Scan using t1_pkey on t1 (cost=0.00..12.25 rows=200\nwidth=0) (actual time=0.095..0.470 rows=201 loops=1)\n Index Cond: (((c1)::integer >= 200) AND ((c1)::integer <=\n400))\n Total runtime: 0.827 ms\n(4 rows)\n \nThe type is always put in there so that you can see what it's doing;\nit doesn't reflect anything which is actually taking any time.\n \nLet it go.\n \n-Kevin\n", "msg_date": "Thu, 27 Aug 2009 09:01:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\tadomain derived from int" }, { "msg_contents": "Le jeudi 27 août 2009 à 09:01 -0500, Kevin Grittner a écrit :\n> The type is always put in there so that you can see what it's doing;\n> it doesn't reflect anything which is actually taking any time.\n\nMy query plan for the same query is:\n\n\"Aggregate (cost=12.75..12.76 rows=1 width=0) (actual time=0.094..0.094\nrows=1 loops=1)\"\n\" -> Index Scan using t2_pkey on t2 (cost=0.00..12.25 rows=200\nwidth=0) (actual time=0.016..0.068 rows=201 loops=1)\"\n\" Index Cond: ((c1 >= 200) AND (c1 <= 400))\"\n\"Total runtime: 0.142 ms\"\n\nSo I don't see any :: in my results.\n\nIn my various query plans on my database, the :: is only displayed when\ncomparing int and int_unsigned. So I interpreted the :: as a cast.\n\nAre you sure that :: does not inform of a cast? Do we have documentation\nabout that?\n\nKind regards,\nJean-Michel", "msg_date": "Thu, 27 Aug 2009 16:14:39 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\tadomain derived from int" }, { "msg_contents": ">Jean-Michel Pourᅵ<[email protected]> wrote: \n> Does not show any result because of ROLLBACK;\n \nThen you need to use a better tool to run it. 
For example, in psql:\n \ntest=# create table t2 (c1 int not null);\nCREATE TABLE\ntest=# insert into t2 select * from generate_series(1,10000);\nINSERT 0 10000\ntest=# create unique index t2_c1 on t2 (c1);\nCREATE INDEX\ntest=# explain analyze select count(*) from t2 where c1 between 200 and\n400;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=47.77..47.78 rows=1 width=0) (actual\ntime=0.668..0.669 rows=1 loops=1)\n -> Bitmap Heap Scan on t2 (cost=4.76..47.64 rows=50 width=0)\n(actual time=0.091..0.380 rows=201 loops=1)\n Recheck Cond: ((c1 >= 200) AND (c1 <= 400))\n -> Bitmap Index Scan on t2_c1 (cost=0.00..4.75 rows=50\nwidth=0) (actual time=0.080..0.080 rows=201 loops=1)\n Index Cond: ((c1 >= 200) AND (c1 <= 400))\n Total runtime: 0.722 ms\n(6 rows)\n\ntest=# begin transaction;\nBEGIN\ntest=# drop index t2_c1;\nDROP INDEX\ntest=# explain analyze select count(*) from t2 where c1 between 200 and\n400;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Aggregate (cost=190.50..190.51 rows=1 width=0) (actual\ntime=3.324..3.325 rows=1 loops=1)\n -> Seq Scan on t2 (cost=0.00..190.00 rows=200 width=0) (actual\ntime=0.053..3.036 rows=201 loops=1)\n Filter: ((c1 >= 200) AND (c1 <= 400))\n Total runtime: 3.366 ms\n(4 rows)\n\ntest=# rollback transaction;\nROLLBACK\n \n-Kevin\n", "msg_date": "Thu, 27 Aug 2009 09:16:17 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\tadomain derived from int" }, { "msg_contents": "Le jeudi 27 août 2009 à 09:16 -0500, Kevin Grittner a écrit :\n> Then you need to use a better tool to run it. 
\n\nUnderstood, thanks.\n\ncms=# set work_mem = '50MB';\nSET\ncms=# set effective_cache_size = '1GB';\nSET\ncms=# begin transaction;\nBEGIN\ncms=# drop index node_comment_statistics_node_comment_timestamp_idx;\nDROP INDEX\ncms=# \ncms=# EXPLAIN ANALYSE\ncms-# SELECT ncs.last_comment_timestamp, IF (ncs.last_comment_uid != 0,\nu2.name, ncs.last_comment_name) AS last_comment_name,\nncs.last_comment_uid\ncms-# FROM node n\ncms-# INNER JOIN users u1 ON n.uid = u1.uid\ncms-# INNER JOIN term_node tn ON n.vid = tn.vid\ncms-# INNER JOIN node_comment_statistics ncs ON n.nid = ncs.nid\ncms-# INNER JOIN users u2 ON ncs.last_comment_uid=u2.uid\ncms-# WHERE n.status = 1 AND tn.tid = 3\ncms-# ORDER BY ncs.last_comment_timestamp DESC LIMIT 1 OFFSET 0 ;\ncms=# rollback transaction;\n\n\nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=972.82..972.82 rows=1 width=17) (actual time=0.018..0.018\nrows=0 loops=1)\n -> Sort (cost=972.82..973.04 rows=91 width=17) (actual\ntime=0.018..0.018 rows=0 loops=1)\n Sort Key: ncs.last_comment_timestamp\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=4.96..972.36 rows=91 width=17) (actual\ntime=0.010..0.010 rows=0 loops=1)\n -> Nested Loop (cost=4.96..945.74 rows=91 width=21)\n(actual time=0.010..0.010 rows=0 loops=1)\n -> Nested Loop (cost=4.96..919.34 rows=91\nwidth=13) (actual time=0.010..0.010 rows=0 loops=1)\n -> Nested Loop (cost=4.96..890.02 rows=91\nwidth=8) (actual time=0.009..0.009 rows=0 loops=1)\n -> Bitmap Heap Scan on term_node tn\n(cost=4.96..215.63 rows=91 width=4) (actual time=0.009..0.009 rows=0\nloops=1)\n Recheck Cond: ((tid)::integer =\n3)\n -> Bitmap Index Scan on\nterm_node_tid_idx (cost=0.00..4.94 rows=91 width=0) (actual\ntime=0.008..0.008 rows=0 loops=1)\n Index Cond: ((tid)::integer\n= 3)\n -> Index Scan using node_vid_idx on\nnode n (cost=0.00..7.40 rows=1 width=12) (never executed)\n Index Cond: ((n.vid)::integer =\n(tn.vid)::integer)\n Filter: (n.status = 1)\n -> Index Scan using\nnode_comment_statistics_pkey on node_comment_statistics ncs\n(cost=0.00..0.31 rows=1 width=13) (never executed)\n Index Cond: ((ncs.nid)::integer =\nn.nid)\n -> Index Scan using users_pkey on users u2\n(cost=0.00..0.28 rows=1 width=12) (never executed)\n Index Cond: (u2.uid = ncs.last_comment_uid)\n -> Index Scan using users_pkey on users u1\n(cost=0.00..0.28 rows=1 width=4) (never executed)\n Index Cond: (u1.uid = n.uid)\n Total runtime: 0.092 ms\n(22 lignes)\n\n\nDoes it mean my index is broken and should be rebuilt?\n\nKind regards,\nJean-Michel", "msg_date": "Thu, 27 Aug 2009 16:37:42 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\tadomain derived from int" }, { "msg_contents": "Jean-Michel Pourᅵ<[email protected]> wrote:\n\n> ... Index Cond: ((tid)::integer = 3)\n \n> ... Index Cond: ((n.vid)::integer = (tn.vid)::integer)\n \n> ... Index Cond: ((ncs.nid)::integer = n.nid)\n \n> Total runtime: 0.092 ms\n \nSorry, but I just had to point that out.\nI feel much better now. ;-)\n \n> Does it mean my index is broken and should be rebuilt? \n \nNo, probably not.\n \nJust to get another data point, what happens if you run the same query\nwithout taking the index out of the picture, but without the LIMIT or\nOFFSET clauses? 
An EXPLAIN ANALYZE of that would help understand it\nmore fully.\n \n-Kevin\n", "msg_date": "Thu, 27 Aug 2009 09:52:28 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does CAST implicitely between int\n\tandadomain derived from int" }, { "msg_contents": "Le jeudi 27 août 2009 à 09:52 -0500, Kevin Grittner a écrit :\n> Just to get another data point, what happens if you run the same query\n> without taking the index out of the picture, but without the LIMIT or\n> OFFSET clauses? An EXPLAIN ANALYZE of that would help understand it\n> more fully.\n\nAlso, just a short notice that this SELECT returns no result. \n\nYou were right: adding LIMIT 1 changes speed from O.090 ms to 420 ms.\nThis has nothing to do with casting.\n\nEXPLAIN ANALYSE \nSELECT ncs.last_comment_timestamp, IF (ncs.last_comment_uid != 0,\nu2.name, ncs.last_comment_name) AS last_comment_name,\nncs.last_comment_uid\nFROM node n\nINNER JOIN users u1 ON n.uid = u1.uid\nINNER JOIN term_node tn ON n.vid = tn.vid\nINNER JOIN node_comment_statistics ncs ON n.nid = ncs.nid\nINNER JOIN users u2 ON ncs.last_comment_uid=u2.uid\nWHERE n.status = 1 AND tn.tid = 3\nORDER BY ncs.last_comment_timestamp DESC \n\n\"Sort (cost=975.32..975.55 rows=91 width=17) (actual time=0.021..0.021\nrows=0 loops=1)\"\n\" Sort Key: ncs.last_comment_timestamp\"\n\" Sort Method: quicksort Memory: 25kB\"\n\" -> Nested Loop (cost=4.96..972.36 rows=91 width=17) (actual\ntime=0.016..0.016 rows=0 loops=1)\"\n\" -> Nested Loop (cost=4.96..945.74 rows=91 width=21) (actual\ntime=0.016..0.016 rows=0 loops=1)\"\n\" -> Nested Loop (cost=4.96..919.34 rows=91 width=13)\n(actual time=0.016..0.016 rows=0 loops=1)\"\n\" -> Nested Loop (cost=4.96..890.02 rows=91\nwidth=8) (actual time=0.016..0.016 rows=0 loops=1)\"\n\" -> Bitmap Heap Scan on term_node tn\n(cost=4.96..215.63 rows=91 width=4) (actual time=0.016..0.016 rows=0\nloops=1)\"\n\" Recheck Cond: ((tid)::integer = 3)\"\n\" -> Bitmap Index Scan on\nterm_node_tid_idx (cost=0.00..4.94 rows=91 width=0) (actual\ntime=0.014..0.014 rows=0 loops=1)\"\n\" Index Cond: ((tid)::integer = 3)\"\n\" -> Index Scan using node_vid_idx on node n\n(cost=0.00..7.40 rows=1 width=12) (never executed)\"\n\" Index Cond: ((n.vid)::integer =\n(tn.vid)::integer)\"\n\" Filter: (n.status = 1)\"\n\" -> Index Scan using node_comment_statistics_pkey\non node_comment_statistics ncs (cost=0.00..0.31 rows=1 width=13) (never\nexecuted)\"\n\" Index Cond: ((ncs.nid)::integer = n.nid)\"\n\" -> Index Scan using users_pkey on users u2\n(cost=0.00..0.28 rows=1 width=12) (never executed)\"\n\" Index Cond: (u2.uid = ncs.last_comment_uid)\"\n\" -> Index Scan using users_pkey on users u1 (cost=0.00..0.28\nrows=1 width=4) (never executed)\"\n\" Index Cond: (u1.uid = n.uid)\"\n\"Total runtime: 0.090 ms\"\n\nEXPLAIN ANALYSE \nSELECT ncs.last_comment_timestamp, IF (ncs.last_comment_uid != 0,\nu2.name, ncs.last_comment_name) AS last_comment_name,\nncs.last_comment_uid\nFROM node n\nINNER JOIN users u1 ON n.uid = u1.uid\nINNER JOIN term_node tn ON n.vid = tn.vid\nINNER JOIN node_comment_statistics ncs ON n.nid = ncs.nid\nINNER JOIN users u2 ON ncs.last_comment_uid=u2.uid\nWHERE n.status = 1 AND tn.tid = 3\nORDER BY ncs.last_comment_timestamp DESC \nLIMIT 1\n\n\"Limit (cost=0.00..544.67 rows=1 width=17) (actual\ntime=435.715..435.715 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49565.19 rows=91 width=17) (actual\ntime=435.713..435.713 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49538.56 rows=91 
width=21) (actual\ntime=435.713..435.713 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49512.17 rows=91 width=13)\n(actual time=435.713..435.713 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..27734.58 rows=67486\nwidth=17) (actual time=0.029..252.443 rows=67486 loops=1)\"\n\" -> Index Scan Backward using\nnode_comment_statistics_node_comment_timestamp_idx on\nnode_comment_statistics ncs (cost=0.00..3160.99 rows=67486 width=13)\n(actual time=0.014..40.583 rows=67486 loops=1)\"\n\" -> Index Scan using node_pkey on node n\n(cost=0.00..0.35 rows=1 width=12) (actual time=0.002..0.003 rows=1\nloops=67486)\"\n\" Index Cond: (n.nid =\n(ncs.nid)::integer)\"\n\" Filter: (n.status = 1)\"\n\" -> Index Scan using term_node_vid_idx on term_node\ntn (cost=0.00..0.31 rows=1 width=4) (actual time=0.002..0.002 rows=0\nloops=67486)\"\n\" Index Cond: ((tn.vid)::integer =\n(n.vid)::integer)\"\n\" Filter: ((tn.tid)::integer = 3)\"\n\" -> Index Scan using users_pkey on users u2\n(cost=0.00..0.28 rows=1 width=12) (never executed)\"\n\" Index Cond: (u2.uid = ncs.last_comment_uid)\"\n\" -> Index Scan using users_pkey on users u1 (cost=0.00..0.28\nrows=1 width=4) (never executed)\"\n\" Index Cond: (u1.uid = n.uid)\"\n\"Total runtime: 435.788 ms\"\n\nEXPLAIN ANALYSE \nSELECT ncs.last_comment_timestamp, IF (ncs.last_comment_uid != 0,\nu2.name, ncs.last_comment_name) AS last_comment_name,\nncs.last_comment_uid\nFROM node n\nINNER JOIN users u1 ON n.uid = u1.uid\nINNER JOIN term_node tn ON n.vid = tn.vid\nINNER JOIN node_comment_statistics ncs ON n.nid = ncs.nid\nINNER JOIN users u2 ON ncs.last_comment_uid=u2.uid\nWHERE n.status = 1 AND tn.tid = 3\nORDER BY ncs.last_comment_timestamp DESC LIMIT 1 OFFSET 0\n\n\"Limit (cost=0.00..544.67 rows=1 width=17) (actual\ntime=541.488..541.488 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49565.19 rows=91 width=17) (actual\ntime=541.486..541.486 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49538.56 rows=91 width=21) (actual\ntime=541.485..541.485 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..49512.17 rows=91 width=13)\n(actual time=541.485..541.485 rows=0 loops=1)\"\n\" -> Nested Loop (cost=0.00..27734.58 rows=67486\nwidth=17) (actual time=0.024..307.341 rows=67486 loops=1)\"\n\" -> Index Scan Backward using\nnode_comment_statistics_node_comment_timestamp_idx on\nnode_comment_statistics ncs (cost=0.00..3160.99 rows=67486 width=13)\n(actual time=0.012..62.504 rows=67486 loops=1)\"\n\" -> Index Scan using node_pkey on node n\n(cost=0.00..0.35 rows=1 width=12) (actual time=0.003..0.003 rows=1\nloops=67486)\"\n\" Index Cond: (n.nid =\n(ncs.nid)::integer)\"\n\" Filter: (n.status = 1)\"\n\" -> Index Scan using term_node_vid_idx on term_node\ntn (cost=0.00..0.31 rows=1 width=4) (actual time=0.003..0.003 rows=0\nloops=67486)\"\n\" Index Cond: ((tn.vid)::integer =\n(n.vid)::integer)\"\n\" Filter: ((tn.tid)::integer = 3)\"\n\" -> Index Scan using users_pkey on users u2\n(cost=0.00..0.28 rows=1 width=12) (never executed)\"\n\" Index Cond: (u2.uid = ncs.last_comment_uid)\"\n\" -> Index Scan using users_pkey on users u1 (cost=0.00..0.28\nrows=1 width=4) (never executed)\"\n\" Index Cond: (u1.uid = n.uid)\"\n\"Total runtime: 541.568 ms\"", "msg_date": "Thu, 27 Aug 2009 17:10:22 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int\n\tandadomain derived from int" }, { "msg_contents": "Jean-Michel Pourᅵ<[email protected]> wrote:\n \n> Also, just a short notice that this 
SELECT returns no result.\n \nOnce you posted EXPLAIN ANALYZE results, that was clear because actual\nrows on the top line is zero.\n \n> You were right: adding LIMIT 1 changes speed from O.090 ms to 420\n> ms.\n \nIn summary, what's happening is that when the LIMIT 1 is there, the\noptimizer sees that the index will return rows in the order you\nrequested, and thinks that it won't have to read very far to get a\nmatch, at which point it would be able to stop. There are no matches,\nbut it has to read all the way through the index, pulling related rows\nto check for matches, before it can know that. Without the limit, it\noptimizes for the fastest plan which will scan all the rows. The\nfirst test returns nothing, so all the joins become very cheap -- they\nare never exercised.\n \nThis is related to a topic recently discussed on the hackers list --\nwhether the optimizer should be modified to recognize \"risky\" plans,\nand try to avoid them. This is another example of a query which might\nbenefit from such work.\n \nIt's also possible that this is another manifestation of an issue\nabout which there has been some dispute -- the decision to always\nround up any fraction on expected rows to the next whole number. I\ndon't know without doing more research, but it wouldn't shock me if\nthis rounding contributed to the optimizer's expectations that it\nwould get a match soon enough to make the problem plan a good one.\n \nIt is *possible* that if you boost your default_statistics_target and\nrun ANALYZE (or VACUUM ANALYZE), it will recognize that it isn't a\ngood idea to read backwards on that index. I would try it and see, if\nthat's practical for you. If not, you might be able to limit the\nplans that the optimizer considers using various techniques, but\nthat's a bit of a kludge; I'd save it for a last resort.\n \n> This has nothing to do with casting.\n \nYeah, that much was pretty apparent to many people from the start. It\nwas rather frustrating that you weren't listening on that point; I\nthink that resulted in you wasting time focusing on the wrong things\nand not moving in a productive direction sooner. As has been\nsuggested by someone else, you'll get better results presenting your\nproblem with as much relevant detail as possible and asking for help\nsorting it out, rather than putting too much emphasis on your\npreliminary guess as to the cause.\n \n-Kevin\n", "msg_date": "Thu, 27 Aug 2009 11:36:01 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\ta domain derived from int" }, { "msg_contents": "Le jeudi 27 août 2009 à 11:36 -0500, Kevin Grittner a écrit :\n> It is *possible* that if you boost your default_statistics_target and\n> run ANALYZE (or VACUUM ANALYZE), it will recognize that it isn't a\n> good idea to read backwards on that index. I would try it and see, if\n> that's practical for you. If not, you might be able to limit the\n> plans that the optimizer considers using various techniques, but\n> that's a bit of a kludge; I'd save it for a last resort.\n\nI will try that.\n\n> Yeah, that much was pretty apparent to many people from the start. It\n> was rather frustrating that you weren't listening on that point; I\n> think that resulted in you wasting time focusing on the wrong things\n> and not moving in a productive direction sooner. 
As has been\n> suggested by someone else, you'll get better results presenting your\n> problem with as much relevant detail as possible and asking for help\n> sorting it out, rather than putting too much emphasis on your\n> preliminary guess as to the cause.\n\nYeah. I will keep that in mind, don't worry.\n\nThis kind of slow queries on LIMIT seems to happen all the time on\nDrupal. Maybe it is because the site is not yet going live.\n\nAlso this means that Drupal on PostgreSQL could rock completely if/when\nthe optimizer has enough information to find the correct plan.\n\nIf you are interested, I can post on performance ML strange queries with\nLIMIT that may be interesting after we go life and have enough\nstatistics.\n\nMany thanks and bye,\nJean-Michel", "msg_date": "Thu, 27 Aug 2009 19:01:41 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL" }, { "msg_contents": "2009/8/27 Kevin Grittner <[email protected]>:\n> It is *possible* that if you boost your default_statistics_target and\n> run ANALYZE (or VACUUM ANALYZE), it will recognize that it isn't a\n> good idea to read backwards on that index.  I would try it and see, if\n> that's practical for you.\n\nI notice this in one of the plans:\n\n-> Bitmap Index Scan on term_node_tid_idx (cost=0.00..4.94 rows=91\nwidth=0) (actual time=0.014..0.014 rows=0 loops=1)\n Index Cond: ((tid)::integer = 3)\n\nThat's a pretty bad estimate for a scan of a single relation with a\nfilter on one column.\n\nI'd like to see the output of:\n\nSELECT MIN(tid), MAX(tid), SUM(1) FROM term_node;\nSHOW default_statistics_target;\n\nBy the way, why does EXPLAIN not display the name of the table as well\nas the index when it performs a bitmap index scan? It does do so for\na regular index scan.\n\nWhat version of PG is this again?\n\n...Robert\n", "msg_date": "Thu, 27 Aug 2009 13:35:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a\n\tdomain derived from int" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> By the way, why does EXPLAIN not display the name of the table as well\n> as the index when it performs a bitmap index scan?\n\nBecause that plan node is not in fact touching the table. 
The table\nname is shown in the BitmapHeapScan node that *does* touch the table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Aug 2009 14:05:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a domain derived\n\tfrom int" }, { "msg_contents": "Le jeudi 27 août 2009 à 13:35 -0400, Robert Haas a écrit :\n> SELECT MIN(tid), MAX(tid), SUM(1) FROM term_node;\n> SHOW default_statistics_target;\n\nSELECT MIN(tid), MAX(tid), SUM(1) FROM term_node;\n6;56;67479\n\nSHOW default_statistics_target;\n100\n\nFor information, if some hackers are interested and they belong to the\ncommunity for a long time, I can provide tables with data.\n\nKind regards,\nJean-Michel", "msg_date": "Fri, 28 Aug 2009 00:30:32 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a\n\tdomain derived from int" }, { "msg_contents": "Le jeudi 27 août 2009 à 14:05 -0400, Tom Lane a écrit :\n> tom lane\n\nDear Tom, \n\nWhy is the query planner displaying ::integer\nWhat does it mean?\n\nKind regards,\nJean-Michel", "msg_date": "Fri, 28 Aug 2009 00:31:51 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a\n\tdomain derived from int" }, { "msg_contents": "Jean-Michel Pour� wrote:\n-- Start of PGP signed section.\n> Le jeudi 27 ao?t 2009 ? 14:05 -0400, Tom Lane a ?crit :\n> > tom lane\n> \n> Dear Tom, \n> \n> Why is the query planner displaying ::integer\n> What does it mean?\n\n::integer casts a data type to INTEGER. It is the same as CAST().\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sat, 29 Aug 2009 11:16:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int\n\tand a domain derived from int" }, { "msg_contents": "Le samedi 29 août 2009 à 11:16 -0400, Bruce Momjian a écrit :\n> > Why is the query planner displaying ::integer\n> > What does it mean?\n> \n> ::integer casts a data type to INTEGER. It is the same as CAST().\n\nIn Drupal database, we have two types:\n\ninteger\nint_unsigned\n\nCREATE DOMAIN int_unsigned\n AS integer\nCONSTRAINT int_unsigned_check CHECK ((VALUE >= 0));\n\nWhy do queries cast between integer and int_unsigned?\n\nKind regards,\nJean-Michel", "msg_date": "Sat, 29 Aug 2009 19:38:41 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a\n\tdomain derived from int" }, { "msg_contents": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]> writes:\n> In Drupal database, we have two types:\n> CREATE DOMAIN int_unsigned\n> AS integer\n> CONSTRAINT int_unsigned_check CHECK ((VALUE >= 0));\n\n> Why do queries cast between integer and int_unsigned?\n\nThat domain doesn't have any operators of its own. To compare to\nanother value, or use an index, you have to cast it to integer which\ndoes have operators. 
It's a no-op cast, but logically necessary.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Aug 2009 13:44:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a domain derived\n\tfrom int" }, { "msg_contents": "Le samedi 29 août 2009 à 13:44 -0400, Tom Lane a écrit :\n> That domain doesn't have any operators of its own. To compare to\n> another value, or use an index, you have to cast it to integer which\n> does have operators. It's a no-op cast, but logically necessary.\n\nDear Tom,\n\nThanks for answering. On more question:\n\nDrupal makes use these no-op CREATE DOMAINs in the database schema :\n\nCREATE DOMAIN int_unsigned\n AS integer\n CONSTRAINT int_unsigned_check CHECK ((VALUE >= 0));\n\nCREATE DOMAIN bigint_unsigned\n AS bigint\n CONSTRAINT bigint_unsigned_check CHECK ((VALUE >= 0));\n\nCREATE DOMAIN smallint_unsigned\n AS smallint\n CONSTRAINT smallint_unsigned_check CHECK ((VALUE >= 0));\n\nCREATE DOMAIN varchar_ci\n AS character varying(255)\n DEFAULT ''::character varying\n NOT NULL;\n\nIn my slow queries, I can notice excessive no-op casts. Do you think\nthis could lead to excessive sequential scans? \n\nWhat do you recommend: using normal types and moving constraints in the\nDrupal database? Is PostgreSQL domain broken as it forces casting or is\nthis a no-op for performance?\n\nWhat do you recommend?\n\nKind regards,\nJean-Michel", "msg_date": "Sat, 29 Aug 2009 22:59:46 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a\n\tdomain derived from int" }, { "msg_contents": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]> writes:\n> What do you recommend: using normal types and moving constraints in the\n> Drupal database? Is PostgreSQL domain broken as it forces casting or is\n> this a no-op for performance?\n\nIn principle it should be an unnoticeable slowdown. In the past we've\nhad some issues with the planner failing to recognize possible\noptimizations when there was a cast in the way, but I'm not aware of\nany such bugs at the moment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Aug 2009 17:45:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a domain derived\n\tfrom int" }, { "msg_contents": "On Sat, Aug 29, 2009 at 10:45 PM, Tom Lane<[email protected]> wrote:\n> Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]> writes:\n>> What do you recommend: using normal types and moving constraints in the\n>> Drupal database? Is PostgreSQL domain broken as it forces casting or is\n>> this a no-op for performance?\n>\n> In principle it should be an unnoticeable slowdown.  In the past we've\n> had some issues with the planner failing to recognize possible\n> optimizations when there was a cast in the way, but I'm not aware of\n> any such bugs at the moment.\n\nIn particular since your plan nodes are all index scans it's clear\nthat the casts are not getting in the way. 
The symptom when there were\nproblems was that the planner was forced to use sequential scans\nbecause it couldn't match the casted expressionto the index expression\nor some variant of that.\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Sat, 29 Aug 2009 22:48:31 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and a domain derived\n\tfrom int" } ]
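A minimal, self-contained sketch of the behaviour established in this thread (the domain and table names are made up for illustration; this is not Drupal's actual schema). It shows that a comparison on a domain column is displayed with a ::integer cast in EXPLAIN output while the index on that column is still used:

CREATE DOMAIN int_unsigned AS integer
  CONSTRAINT int_unsigned_check CHECK (VALUE >= 0);

CREATE TABLE t_domain (id int_unsigned NOT NULL PRIMARY KEY);
INSERT INTO t_domain SELECT * FROM generate_series(1, 10000);
VACUUM ANALYZE t_domain;

-- The condition is shown as ((id)::integer >= 200) AND ((id)::integer <= 400),
-- yet the primary-key index is still usable; the cast is a no-op at run time.
EXPLAIN ANALYZE SELECT count(*) FROM t_domain WHERE id BETWEEN 200 AND 400;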
[ { "msg_contents": "Hi, I have a question about a db-wide vacuum that I am running that is\ntaking a much longer time than normal. We switched over to our warm standby\nserver today -- which is virtually identical to the source db server -- and\nI initiated a \"vacuum analyze verbose\". Normally this process wouldn't take\nmore than 6 hours, but so far we are well over 9 hours. I seem to recall\nreading in one of these pg lists that a db-wide vacuum of a new (?) database\nwould go about setting the hint bits and cause a lot more disk IO than is\nnormal. Is that a possible cause.\n\nPostgresql 8.2.13, RHEL5\n\nSorry, I realize I'm short on details, but I'm just running out the door and\nthe thought about hint bits struck me.\n\nCheers!\n\nHi, I have a question about a db-wide vacuum that I am running that is taking a much longer time than normal. We switched over to our warm standby server today -- which is virtually identical to the source db server -- and I initiated a \"vacuum analyze verbose\". Normally this process wouldn't take more than 6 hours, but so far we are well over 9 hours. I seem to recall reading in one of these pg lists that a db-wide vacuum of a new (?) database would go about setting the hint bits and cause a lot more disk IO than is normal. Is that a possible cause.\nPostgresql 8.2.13, RHEL5Sorry, I realize I'm short on details, but I'm just running out the door and the thought about hint bits struck me.Cheers!", "msg_date": "Thu, 27 Aug 2009 16:02:15 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum duration + hint bits?" }, { "msg_contents": "bricklen <[email protected]> writes:\n> Hi, I have a question about a db-wide vacuum that I am running that is\n> taking a much longer time than normal. We switched over to our warm standby\n> server today -- which is virtually identical to the source db server -- and\n> I initiated a \"vacuum analyze verbose\". Normally this process wouldn't take\n> more than 6 hours, but so far we are well over 9 hours. I seem to recall\n> reading in one of these pg lists that a db-wide vacuum of a new (?) database\n> would go about setting the hint bits and cause a lot more disk IO than is\n> normal. Is that a possible cause.\n\nYeah, it seems possible. You could look at vmstat to see if there's\ntons of write activity ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Aug 2009 19:05:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum duration + hint bits? " }, { "msg_contents": "Yeah, there's a lot. Way more than I am accustomed to seeing from the same\ncommand on the previous server.\n\nOn Thu, Aug 27, 2009 at 4:05 PM, Tom Lane <[email protected]> wrote:\n\n> bricklen <[email protected]> writes:\n> > Hi, I have a question about a db-wide vacuum that I am running that is\n> > taking a much longer time than normal. We switched over to our warm\n> standby\n> > server today -- which is virtually identical to the source db server --\n> and\n> > I initiated a \"vacuum analyze verbose\". Normally this process wouldn't\n> take\n> > more than 6 hours, but so far we are well over 9 hours. I seem to recall\n> > reading in one of these pg lists that a db-wide vacuum of a new (?)\n> database\n> > would go about setting the hint bits and cause a lot more disk IO than is\n> > normal. Is that a possible cause.\n>\n> Yeah, it seems possible. 
You could look at vmstat to see if there's\n> tons of write activity ...\n>\n> regards, tom lane\n>\n\nYeah, there's a lot. Way more than I am accustomed to seeing from the same command on the previous server.On Thu, Aug 27, 2009 at 4:05 PM, Tom Lane <[email protected]> wrote:\nbricklen <[email protected]> writes:\n\n> Hi, I have a question about a db-wide vacuum that I am running that is\n> taking a much longer time than normal. We switched over to our warm standby\n> server today -- which is virtually identical to the source db server -- and\n> I initiated a \"vacuum analyze verbose\". Normally this process wouldn't take\n> more than 6 hours, but so far we are well over 9 hours. I seem to recall\n> reading in one of these pg lists that a db-wide vacuum of a new (?) database\n> would go about setting the hint bits and cause a lot more disk IO than is\n> normal. Is that a possible cause.\n\nYeah, it seems possible.  You could look at vmstat to see if there's\ntons of write activity ...\n\n                        regards, tom lane", "msg_date": "Thu, 27 Aug 2009 17:19:42 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum duration + hint bits?" } ]
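One hedged way to apply the observation above (the table name is a placeholder): after switching over to a standby or loading a database, deliberately reading the large tables during a quiet window sets the hint bits and dirties those pages then, so the write burst does not land on the first user queries or on the next database-wide VACUUM.

-- Reading every row sets hint bits as a side effect; the dirtied pages are
-- then flushed by the background writer and checkpoints.
SELECT count(*) FROM big_table;
-- A later vacuum of the same table has far less page rewriting left to do.
VACUUM ANALYZE big_table;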
[ { "msg_contents": "If I run \" dd if=/dev/zero bs=1024k of=file count=1000 \" iostat shows me:\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 671.50 88.00 113496.00 176 226992\n\n\nHowever postgres 8.3.7 doing a bulk data write (a slony slave, doing \ninserts and updates) doesn't go nearly as fast:\n\n Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 418.41 648.76 7052.74 1304 14176\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 237.50 44.00 3668.00 88 7336\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 221.50 444.00 3832.00 888 7664\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 166.00 248.00 3360.00 496 6720\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 163.00 480.00 3184.00 960 6368\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 102.50 724.00 1736.00 1448 3472\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 295.50 712.00 6004.00 1424 12008\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 109.45 433.83 2260.70 872 4544\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 180.00 640.00 3512.00 1280 7024\n\ntop shows the cpu usage of the pg process ranges from zero to never more \nthan ten percent of a cpu, and that one cpu is always ninety some odd \npercent in iowait. So what is postgres doing (with fsync off) that \ncauses the cpu to spend so much time in iowait?\n\nThis is a 64 bit amd linux system with ext3 filesystem. free shows:\n\n total used free shared buffers cached\nMem: 8116992 8085848 31144 0 103016 3098568\n-/+ buffers/cache: 4884264 3232728\nSwap: 6697296 2035508 4661788\n", "msg_date": "Fri, 28 Aug 2009 02:56:23 -0400", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "\n> top shows the cpu usage of the pg process ranges from zero to never more \n> than ten percent of a cpu, and that one cpu is always ninety some odd \n> percent in iowait. So what is postgres doing (with fsync off) that \n> causes the cpu to spend so much time in iowait?\n\n\tUpdating indexes ?\n", "msg_date": "Fri, 28 Aug 2009 09:52:33 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Fri, 28 Aug 2009, Joseph S wrote:\n\n> If I run \" dd if=/dev/zero bs=1024k of=file count=1000 \" iostat shows me:\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 671.50 88.00 113496.00 176 226992\n\nThat's the sequential transfer rate of your drive. It's easier to present \nthese numbers if you use \"vmstat 1\" instead; that shows the I/O in more \nuseful units, and with the CPU stats on the same line.\n\n> However postgres 8.3.7 doing a bulk data write (a slony slave, doing inserts \n> and updates) doesn't go nearly as fast:\n\nIn PostgreSQL, an update is:\n\n1) A read of the old data\n2) Writing out the updated data\n3) Marking the original data as dead\n4) Updating any indexes involved\n5) Later cleaning up after the now dead row\n\nOn top of that Slony may need to do its own metadata updates.\n\nThis sort of workload involves random I/O rather than sequential. On \nregular hard drives this normally happens at a tiny fraction of the speed \nbecause of how the disk has to seek around. Typically a single drive \ncapable of 50-100MB/s on sequential I/O will only do 1-2MB/s on a \ncompletely random workload. 
You look like you're getting somewhere in the \nmiddle there, on the low side which doesn't surprise me.\n\nThe main two things you can do to improve this on the database side:\n\n-Increase checkpoint_segments, which reduces how often updated data has to \nbe flushed to disk\n\n-Increase shared_buffers in order to hold more of the working set of data \nin RAM, so that more reads are satisfied by the database cache and less \ndata gets evicted to disk.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 28 Aug 2009 04:08:15 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE\n ?" }, { "msg_contents": "On Fri, Aug 28, 2009 at 2:08 AM, Greg Smith<[email protected]> wrote:\n>\n> This sort of workload involves random I/O rather than sequential.  On\n> regular hard drives this normally happens at a tiny fraction of the speed\n> because of how the disk has to seek around.  Typically a single drive\n> capable of 50-100MB/s on sequential I/O will only do 1-2MB/s on a completely\n> random workload.  You look like you're getting somewhere in the middle\n> there, on the low side which doesn't surprise me.\n>\n> The main two things you can do to improve this on the database side:\n>\n> -Increase checkpoint_segments, which reduces how often updated data has to\n> be flushed to disk\n>\n> -Increase shared_buffers in order to hold more of the working set of data in\n> RAM, so that more reads are satisfied by the database cache and less data\n> gets evicted to disk.\n\nAfter that you have to start looking at hardware. Soimething as\nsimple as a different drive for indexes and another for WAL, and\nanother for the base tables can make a big difference.\n", "msg_date": "Fri, 28 Aug 2009 02:29:17 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "Greg Smith wrote:\n\n> The main two things you can do to improve this on the database side:\n> \n> -Increase checkpoint_segments, which reduces how often updated data has \n> to be flushed to disk\n\nIt fsync is turned off, does this matter so much?\n> \n", "msg_date": "Fri, 28 Aug 2009 10:25:10 -0400", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE\n ?" }, { "msg_contents": "Scott Marlowe wrote:\n\n> After that you have to start looking at hardware. Soimething as\n> simple as a different drive for indexes and another for WAL, and\n> another for the base tables can make a big difference.\n> \nIf I have 14 drives in a RAID 10 to split between data tables and \nindexes what would be the best way to allocate the drives for performance?\n", "msg_date": "Fri, 28 Aug 2009 10:28:51 -0400", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE\n ?" }, { "msg_contents": "Joseph S Wrote\n> If I have 14 drives in a RAID 10 to split between data tables\n> and indexes what would be the best way to allocate the drives\n> for performance?\n\nRAID-5 can be much faster than RAID-10 for random reads and writes. It is much slower than RAID-10 for sequential writes, but about the same for sequential reads. 
For typical access patterns, I would put the data and indexes on RAID-5 unless you expect there to be lots of sequential scans.\n\nIf you do this, you can drop the random_page_cost from the default 4.0 to 1.0. That should also encourage postgres to use the index more often. I think the default costs for postgres assume that the data is on a RAID-1 array. Either that, or they are a compromise that isn't quite right for any system. On a plain old disk the random_page_cost should be 8.0 or 10.0.\n\nThe division of the drives into two arrays would depend on how much space will be occupied by the tables vs the indexes. This is very specific to your database. For example, if indexes take half as much space as tables, then you want 2/3rds for tables and 1/3rd for indexes. 8 drives for tables, 5 drives for indexes, and 1 for a hot standby. The smaller array may be a bit slower for some operations due to reduced parallelism. This also depends on the intelligence of your RAID controller.\n\nAlways put the transaction logs (WAL Files) on RAID-10 (or RAID-1 if you don't want to dedicate so many drives to the logs). The only significant performance difference between RAID-10 and RAID-1 is that RAID-1 is much slower (factor of 4 or 5) for random reads. I think the ratio of random reads from the transaction logs would typically be quite low. They are written sequentially and during checkpoint they are read sequentially. In the interim, the data is probably still in shared memory if it needs to be read.\n\nYou don't want your transaction logs or any swapfiles on RAID-5. The slow sequential write performance can be a killer.\n\n-Luke\n", "msg_date": "Sat, 29 Aug 2009 00:20:30 -0400", "msg_from": "Luke Koops <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sat, Aug 29, 2009 at 5:20 AM, Luke Koops<[email protected]> wrote:\n> Joseph S Wrote\n>> If I have 14 drives in a RAID 10 to split between data tables\n>> and indexes what would be the best way to allocate the drives\n>> for performance?\n>\n> RAID-5 can be much faster than RAID-10 for random reads and writes.  It is much slower than RAID-10 for sequential writes, but about the same for sequential reads.  For typical access patterns, I would put the data and indexes on RAID-5 unless you expect there to be lots of sequential scans.\n\nThat's pretty much exactly backwards. RAID-5 will at best slightly\nslower than RAID-0 or RAID-10 for sequential reads or random reads.\nFor sequential writes it performs *terribly*, especially for random\nwrites. The only write pattern where it performs ok sometimes is\nsequential writes of large chunks.\n\n> Always put the transaction logs (WAL Files) on RAID-10 (or RAID-1 if you don't want to dedicate so many drives to the logs).  The only significant performance difference between RAID-10 and RAID-1 is that RAID-1 is much slower (factor of 4 or 5) for random reads.\n\nno, RAID-10 and RAID-1 should perform the same for reads. RAID-10 will\nbe slower at writes by about a factor equal to the number of mirror\nsides.\n\n> I think the ratio of random reads from the transaction logs would typically be quite low.\n\nDuring normal operation the logs are *never* read, neither randomly\nnor sequentially.\n\n> You don't want your transaction logs or any swapfiles on RAID-5.  The slow sequential write performance can be a killer.\n\nAs i mentioned sequential writes are the only write case when RAID-5\nsometimes ok. 
However the picture is complicated by transaction\nsyncing which would make RAID-5 see it more as random i/o. In any case\nwal normally doesn't take much disk space so there's not much reason\nto use anything but RAID-1.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Sat, 29 Aug 2009 09:46:15 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sat, Aug 29, 2009 at 2:46 AM, Greg Stark<[email protected]> wrote:\n> On Sat, Aug 29, 2009 at 5:20 AM, Luke Koops<[email protected]> wrote:\n>> Joseph S Wrote\n>>> If I have 14 drives in a RAID 10 to split between data tables\n>>> and indexes what would be the best way to allocate the drives\n>>> for performance?\n>>\n>> RAID-5 can be much faster than RAID-10 for random reads and writes.  It is much slower than RAID-10 for sequential writes, but about the same for sequential reads.  For typical access patterns, I would put the data and indexes on RAID-5 unless you expect there to be lots of sequential scans.\n>\n> That's pretty much exactly backwards. RAID-5 will at best slightly\n> slower than RAID-0 or RAID-10 for sequential reads or random reads.\n> For sequential writes it performs *terribly*, especially for random\n> writes. The only write pattern where it performs ok sometimes is\n> sequential writes of large chunks.\n\nNote that while RAID-10 is theoretically always better than RAID-5,\nI've run into quite a few cheapie controllers that were heavily\noptimised for RAID-5 and de-optimised for RAID-10. However, if it's\ngot battery backed cache and can run in JBOD mode, linux software\nRAID-10 or hybrid RAID-1 in hardware RAID-0 in software will almost\nalways beat hardware RAID-5 on the same controller.\n", "msg_date": "Sat, 29 Aug 2009 07:59:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sat, Aug 29, 2009 at 1:46 AM, Greg Stark<[email protected]> wrote:\n> On Sat, Aug 29, 2009 at 5:20 AM, Luke Koops<[email protected]> wrote:\n>> RAID-5 can be much faster than RAID-10 for random reads and writes.  It is much slower than\n>> RAID-10 for sequential writes, but about the same for sequential reads.  For typical access\n>> patterns, I would put the data and indexes on RAID-5 unless you expect there to be lots of\n>> sequential scans.\n>\n> That's pretty much exactly backwards. RAID-5 will at best slightly\n> slower than RAID-0 or RAID-10 for sequential reads or random reads.\n> For sequential writes it performs *terribly*, especially for random\n> writes. The only write pattern where it performs ok sometimes is\n> sequential writes of large chunks.\n\nAlso note that how terribly RAID5 performs on those small random\nwrites depends on a LOT on the implementation. A good controller with\na large BBU cache will be able to mitigate the performance penalty of\nhaving to read stripes before small writes to calculate parity (of\ncourse, if the writes are really random enough, it's still not going\nto help much).\n\n>> Always put the transaction logs (WAL Files) on RAID-10 (or RAID-1 if you don't want to dedicate\n>> so many drives to the logs).  The only significant performance difference between RAID-10 and\n>> RAID-1 is that RAID-1 is much slower (factor of 4 or 5) for random reads.\n>\n> no, RAID-10 and RAID-1 should perform the same for reads. 
RAID-10 will\n> be slower at writes by about a factor equal to the number of mirror\n> sides.\n\nLet's keep in mind that a 2-disk RAID-10 is really the same as a\n2-disk RAID-1, it just doesn't have any mirrors to stripe over. So\nsince you really need 4-disks for a \"true\" RAID-10, the performance of\na RAID-10 array compared to a RAID1 array is pretty much proportional\nto the number of disks in the array (more disks = more performance).\n\nThe \"far\" RAID-10 layout that is available when using Linux software\nraid is interesting. It will lay the data out on the disks so that\nyou can get the streaming read performance of a RAID-0 array, but\nstreaming write performance will suffer a bit since now the disk will\nhave to seek to perform those writes. You can also use this layout\nwith just 2 disks instead of RAID1. Some claim that the performance\nhit isn't noticeable due to write caching/IO ordering, but I have not\ntested it's performance using PostgreSQL. Might be a nice thing for\nsomeone to try.\n\nhttp://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10\n\n-Dave\n", "msg_date": "Sat, 29 Aug 2009 12:52:07 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sat, Aug 29, 2009 at 9:59 AM, Scott Marlowe<[email protected]> wrote:\n> On Sat, Aug 29, 2009 at 2:46 AM, Greg Stark<[email protected]> wrote:\n>> On Sat, Aug 29, 2009 at 5:20 AM, Luke Koops<[email protected]> wrote:\n>>> Joseph S Wrote\n>>>> If I have 14 drives in a RAID 10 to split between data tables\n>>>> and indexes what would be the best way to allocate the drives\n>>>> for performance?\n>>>\n>>> RAID-5 can be much faster than RAID-10 for random reads and writes.  It is much slower than RAID-10 for sequential writes, but about the same for sequential reads.  For typical access patterns, I would put the data and indexes on RAID-5 unless you expect there to be lots of sequential scans.\n>>\n>> That's pretty much exactly backwards. RAID-5 will at best slightly\n>> slower than RAID-0 or RAID-10 for sequential reads or random reads.\n>> For sequential writes it performs *terribly*, especially for random\n>> writes. The only write pattern where it performs ok sometimes is\n>> sequential writes of large chunks.\n>\n> Note that while RAID-10 is theoretically always better than RAID-5,\n> I've run into quite a few cheapie controllers that were heavily\n> optimised for RAID-5 and de-optimised for RAID-10.  However, if it's\n> got battery backed cache and can run in JBOD mode, linux software\n> RAID-10 or hybrid RAID-1 in hardware RAID-0 in software will almost\n> always beat hardware RAID-5 on the same controller.\n\n\nraid 5 can outperform raid 10 on sequential writes in theory. if you\nare writing 100mb of actual data on, say, a 8 drive array, the raid 10\nsystem has to write 200mb data and the raid 5 system has to write 100\n* (8/7) or about 114mb. Of course, the raid 5 system has to do\nparity, etc.\n\nFor random writes, raid 5 has to write a minimum of two drives, the\ndata being written and parity. Raid 10 also has to write two drives\nminimum. A lot of people think parity is a big deal in terms of raid\n5 performance penalty, but I don't -- relative to the what's going on\nin the drive, xor calculation costs (one of the fastest operations in\ncomputing) are basically zero, and off-lined if you have a hardware\nraid controller.\n\nI bet part of the problem with raid 5 is actually contention. 
since\nyour write to a stripe can conflict with other writes to a different\nstripe. The other problem with raid 5 that I see is that you don't\nget very much extra protection -- it's pretty scary doing a rebuild\neven with a hot spare (and then you should probably be doing raid 6).\nOn read performance RAID 10 wins all day long because more drives can\nbe involved.\n\nmerlin\n", "msg_date": "Sun, 30 Aug 2009 11:40:01 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sun, Aug 30, 2009 at 4:40 PM, Merlin Moncure<[email protected]> wrote:\n\n> For random writes, raid 5 has to write a minimum of two drives, the\n> data being written and parity.  Raid 10 also has to write two drives\n> minimum.  A lot of people think parity is a big deal in terms of raid\n> 5 performance penalty, but I don't -- relative to the what's going on\n> in the drive, xor calculation costs (one of the fastest operations in\n> computing) are basically zero, and off-lined if you have a hardware\n> raid controller.\n\nThe cost is that in order to calculate the parity block the RAID\ncontroller has to *read* in either the old data block being\noverwritten and the old parity block or all the other data blocks\nwhich participate in that parity block. So every random write becomes\nnot just two writes but two reads + two writes.\n\nIf you're always writing large sequential hunks at a time then this is\nminimized because the RAID controller can just calculate the new\nparity block for the whole new hunk. But if you often just seek to\nrandom places in the file and overwrite 8k at a time then things go\nvery very poorly.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Sun, 30 Aug 2009 16:52:07 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On 08/30/2009 11:40 AM, Merlin Moncure wrote:\n> For random writes, raid 5 has to write a minimum of two drives, the\n> data being written and parity. Raid 10 also has to write two drives\n> minimum. A lot of people think parity is a big deal in terms of raid\n> 5 performance penalty, but I don't -- relative to the what's going on\n> in the drive, xor calculation costs (one of the fastest operations in\n> computing) are basically zero, and off-lined if you have a hardware\n> raid controller.\n>\n> I bet part of the problem with raid 5 is actually contention. since\n> your write to a stripe can conflict with other writes to a different\n> stripe. The other problem with raid 5 that I see is that you don't\n> get very much extra protection -- it's pretty scary doing a rebuild\n> even with a hot spare (and then you should probably be doing raid 6).\n> On read performance RAID 10 wins all day long because more drives can\n> be involved.\n> \n\nIn real life, with real life writes (i.e. not sequential from the start \nof the disk to the end of the disk), where the stripes on the disk being \nwritten are not already in RAM (to allow for XOR to be cheap), RAID 5 is \nhorrible. I still recall naively playing with software RAID 5 on a three \ndisk system and finding write performance to be 20% - 50% less than a \nsingle drive on its own.\n\nPeople need to realize that the cost of maintaining parity is not the \nXOR itself - XOR is cheap - the cost is having knowledge of all drives \nin the stripe in order to write the parity. 
This implies it is already \nin cache (requires a very large cache, or a very localized load such \nthat the load all fits in cache), or it requires 1 or more reads before \n2 or more writes. Latency is a killer here - latency is already the \nslowest part of the disk, so to effectively multiply latency x 2 has a \nhuge impact.\n\nI will never use RAID 5 again unless I have a huge memory backed cache \nfor it to cache writes against. By huge, I mean something approximately \nthe size of the data normally read and written. Having 1 Gbytes of RAM \ndedicated to RAID 5 for a 1 Tbyte drive may not be enough.\n\nRAID 1+0 on the other hand, has never disappointed me yet. Disks are \ncheap, and paying x2 for single disk redundancy is an acceptable price.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Sun, 30 Aug 2009 13:36:19 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE\n ?" }, { "msg_contents": "I've already learned my lesson and will never use raid 5 again. The \nquestion is what I do with my 14 drives. Should I use only 1 pair for \nindexes or should I use 4 drives? The wal logs are already slated for \nan SSD.\n\nScott Marlowe wrote:\n> On Sat, Aug 29, 2009 at 2:46 AM, Greg Stark<[email protected]> wrote:\n>> On Sat, Aug 29, 2009 at 5:20 AM, Luke Koops<[email protected]> wrote:\n>>> Joseph S Wrote\n>>>> If I have 14 drives in a RAID 10 to split between data tables\n>>>> and indexes what would be the best way to allocate the drives\n>>>> for performance?\n>>> RAID-5 can be much faster than RAID-10 for random reads and writes. It is much slower than RAID-10 for sequential writes, but about the same for sequential reads. For typical access patterns, I would put the data and indexes on RAID-5 unless you expect there to be lots of sequential scans.\n>> That's pretty much exactly backwards. RAID-5 will at best slightly\n>> slower than RAID-0 or RAID-10 for sequential reads or random reads.\n>> For sequential writes it performs *terribly*, especially for random\n>> writes. The only write pattern where it performs ok sometimes is\n>> sequential writes of large chunks.\n> \n> Note that while RAID-10 is theoretically always better than RAID-5,\n> I've run into quite a few cheapie controllers that were heavily\n> optimised for RAID-5 and de-optimised for RAID-10. However, if it's\n> got battery backed cache and can run in JBOD mode, linux software\n> RAID-10 or hybrid RAID-1 in hardware RAID-0 in software will almost\n> always beat hardware RAID-5 on the same controller.\n> \n", "msg_date": "Sun, 30 Aug 2009 16:01:49 -0400", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE\n ?" }, { "msg_contents": "On Sun, Aug 30, 2009 at 1:36 PM, Mark Mielke<[email protected]> wrote:\n> On 08/30/2009 11:40 AM, Merlin Moncure wrote:\n>>\n>> For random writes, raid 5 has to write a minimum of two drives, the\n>> data being written and parity.  Raid 10 also has to write two drives\n>> minimum.  A lot of people think parity is a big deal in terms of raid\n>> 5 performance penalty, but I don't -- relative to the what's going on\n>> in the drive, xor calculation costs (one of the fastest operations in\n>> computing) are basically zero, and off-lined if you have a hardware\n>> raid controller.\n>>\n>> I bet part of the problem with raid 5 is actually contention. 
since\n>> your write to a stripe can conflict with other writes to a different\n>> stripe.  The other problem with raid 5 that I see is that you don't\n>> get very much extra protection -- it's pretty scary doing a rebuild\n>> even with a hot spare (and then you should probably be doing raid 6).\n>> On read performance RAID 10 wins all day long because more drives can\n>> be involved.\n>>\n>\n> In real life, with real life writes (i.e. not sequential from the start of\n> the disk to the end of the disk), where the stripes on the disk being\n> written are not already in RAM (to allow for XOR to be cheap), RAID 5 is\n> horrible. I still recall naively playing with software RAID 5 on a three\n> disk system and finding write performance to be 20% - 50% less than a single\n> drive on its own.\n>\n> People need to realize that the cost of maintaining parity is not the XOR\n> itself - XOR is cheap - the cost is having knowledge of all drives in the\n> stripe in order to write the parity. This implies it is already in cache\n> (requires a very large cache, or a very localized load such that the load\n> all fits in cache), or it requires 1 or more reads before 2 or more writes.\n> Latency is a killer here - latency is already the slowest part of the disk,\n> so to effectively multiply latency x 2 has a huge impact.\n\nThis is not necessarily correct. As long as the data you are writing\nis less than the raid stripe size (say 64kb), then you only need the\nold data for that stripe (which is stored on one disk only), the\nparity (also stored on one disk only), and the data being written to\nrecalculate the parity. A raid stripe is usually on one disk. So a\nraid 5 random write will only involve two drives if it's less than\nstripe size (and three drives if it's up to 2x stripe size, etc).\n\nIOW, if your stripe size is 64k:\n64k written:\n raid 10: two writes\n raid 5: two writes, one read (but the read and one of the writes is\nsame physical location)\n128k written\n raid 10: four writes\n raid 5: three writes, one read (but the read and one of the writes\nis same physical location)\n192k written\n raid 10: six writes\n raid 5: four writes, one read (but the read and one of the writes is\nsame physical location)\n\nnow, by 'same physical' location, that may mean that the drive head\nhas to move if the data is not in cache.\n\nI realize that many raid 5 implementations tend to suck. That said,\nraid 5 should offer higher theoretical performance for writing than\nraid 10, both for sequential and random. (many, many online\ndescriptions of raid get this wrong and stupidly blame the overhead of\nparity calculation). raid 10 wins on read all day long. Of course,\non a typical system with lots of things going on, it gets a lot more\ncomplicated...\n\n(just for the record, I use raid 10 on my databases always) :-)\n\nmerlin\n", "msg_date": "Sun, 30 Aug 2009 18:56:08 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sun, Aug 30, 2009 at 11:56 PM, Merlin Moncure<[email protected]> wrote:\n> 192k written\n>  raid 10: six writes\n>  raid 5: four writes, one read (but the read and one of the writes is\n> same physical location)\n>\n> now, by 'same physical' location, that may mean that the drive head\n> has to move if the data is not in cache.\n>\n> I realize that many raid 5 implementations tend to suck.  
That said,\n> raid 5 should offer higher theoretical performance for writing than\n> raid 10, both for sequential and random.\n\nIn the above there are two problems.\n\n1) 192kB is not a random access pattern. Any time you're writing a\nwhole raid stripe or more then RAID5 can start performing reasonably\nbut that's not random, that's sequential i/o. The relevant random i/o\npattern is writing 8kB chunks at random offsets into a multi-terabyte\nstorage which doesn't fit in cache.\n\n2) It's not clear but I think you're saying \"but the read and one of\nthe writes is same physical location\" on the basis that this mitigates\nthe costs. In fact it's the worst case. It means after doing the read\nand calculating the parity block the drive must then spin a full\nrotation before being able to write it back out. So instead of an\naverage latency of 1/2 of a rotation you have that plus a full\nrotation, or 3x as much latency before the write can be performed as\nwithout raid5.\n\nIt's not a fault of the implementations, it's a fundamental problem\nwith RAId5. Even a spectacular implementation of RAID5 will be awful\nfor random access writes. The only saving grace some hardware\nimplementations have is having huge amounts of battery backed cache\nwhich mean that they can usually buffer all the writes for long enough\nthat the access patterns no longer look random. If you buffer enough\nthen you can hope you'll eventually overwrite the whole stripe and can\nwrite out the new parity without reading the old data. Or failing that\nyou can perform the reads of the old data when it's convenient because\nyou're reading nearby data effectively turning it into sequential i/o.\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Mon, 31 Aug 2009 00:38:33 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sun, Aug 30, 2009 at 7:38 PM, Greg Stark<[email protected]> wrote:\n> On Sun, Aug 30, 2009 at 11:56 PM, Merlin Moncure<[email protected]> wrote:\n>> 192k written\n>>  raid 10: six writes\n>>  raid 5: four writes, one read (but the read and one of the writes is\n>> same physical location)\n>>\n>> now, by 'same physical' location, that may mean that the drive head\n>> has to move if the data is not in cache.\n>>\n>> I realize that many raid 5 implementations tend to suck.  That said,\n>> raid 5 should offer higher theoretical performance for writing than\n>> raid 10, both for sequential and random.\n>\n> In the above there are two problems.\n>\n> 1) 192kB is not a random access pattern. Any time you're writing a\n> whole raid stripe or more then RAID5 can start performing reasonably\n> but that's not random, that's sequential i/o. The relevant random i/o\n> pattern is writing 8kB chunks at random offsets into a multi-terabyte\n> storage which doesn't fit in cache.\n>\n> 2) It's not clear but I think you're saying \"but the read and one of\n> the writes is same physical location\" on the basis that this mitigates\n> the costs. In fact it's the worst case. It means after doing the read\n> and calculating the parity block the drive must then spin a full\n> rotation before being able to write it back out. So instead of an\n> average latency of 1/2 of a rotation you have that plus a full\n> rotation, or 3x as much latency before the write can be performed as\n> without raid5.\n>\n> It's not a fault of the implementations, it's a fundamental problem\n> with RAId5. 
Even a spectacular implementation of RAID5 will be awful\n> for random access writes. The only saving grace some hardware\n> implementations have is having huge amounts of battery backed cache\n> which mean that they can usually buffer all the writes for long enough\n> that the access patterns no longer look random. If you buffer enough\n> then you can hope you'll eventually overwrite the whole stripe and can\n> write out the new parity without reading the old data. Or failing that\n> you can perform the reads of the old data when it's convenient because\n> you're reading nearby data effectively turning it into sequential i/o.\n\nI agree, that's good analysis. The main point I was making was that\nif you have say a 10 disk raid 5, you don't involve 10 disks, only\ntwo...a very common misconception. I made another mistake that you\ndidn't catch: you need to read *both* the data drive and the parity\ndrive before writing, not just the parity drive.\n\nI wonder if flash SSD are a better fit for raid 5 since the reads are\nmuch cheaper than writes and there is no rotational latency. (also,\n$/gb is different, and so are the failure cases).\n\nmerlin\n", "msg_date": "Mon, 31 Aug 2009 10:38:25 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "* Merlin Moncure <[email protected]> [090831 10:38]:\n \n> I agree, that's good analysis. The main point I was making was that\n> if you have say a 10 disk raid 5, you don't involve 10 disks, only\n> two...a very common misconception. I made another mistake that you\n> didn't catch: you need to read *both* the data drive and the parity\n> drive before writing, not just the parity drive.\n> \n> I wonder if flash SSD are a better fit for raid 5 since the reads are\n> much cheaper than writes and there is no rotational latency. (also,\n> $/gb is different, and so are the failure cases).\n\nThe other thing that scares me about raid-5 is the write-hole, and the\npossible delayed inconsistency that brings...\n\nAgain, hopefully mitigated by a dependable controller w/ BBU...\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Mon, 31 Aug 2009 10:48:09 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Sun, Aug 30, 2009 at 1:01 PM, Joseph S <[email protected]> wrote:\n\n> I've already learned my lesson and will never use raid 5 again. The\n> question is what I do with my 14 drives. Should I use only 1 pair for\n> indexes or should I use 4 drives? The wal logs are already slated for an\n> SSD.\n>\n\n\n\nWhy not just spread all your index data over 14 spindles, and do the same\nwith your table data? I haven't encountered this debate in the pgsql\nworld, but from the Oracle world it seems to me the \"Stripe And Mirror\nEverything\" people had the better argument than the \"separate tables and\nindexes\" people.\n\n\nJeff\n
", "msg_date": "Mon, 31 Aug 2009 08:24:01 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "Jeff Janes <[email protected]> wrote:\n> Joseph S <[email protected]> wrote:\n \n>> The question is what I do with my 14 drives. Should I use only 1\n>> pair for indexes or should I use 4 drives? The wal logs are\n>> already slated for an SSD.\n \n> Why not just spread all your index data over 14 spindles, and do the\n> same with your table data?\n \nIf you have the luxury of being able to test more than one\nconfiguration with something resembling your actual workload, I would\nstrongly recommend including this as one of your configurations.\nSpreading everything over the larger number of spindles might well\nout-perform your most carefully hand-crafted tuning of object\nplacement on smaller spindle sets.\n \n-Kevin\n", "msg_date": "Mon, 31 Aug 2009 11:31:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during\n\t INSERT/UPDATE ?" }, { "msg_contents": "On Mon, Aug 31, 2009 at 10:31 AM, Kevin\nGrittner<[email protected]> wrote:\n> Jeff Janes <[email protected]> wrote:\n>> Joseph S <[email protected]> wrote:\n>\n>>> The question is what I do with my 14 drives. Should I use only 1\n>>> pair for indexes or should I use 4 drives? The wal logs are\n>>> already slated for an SSD.\n>\n>> Why not just spread all your index data over 14 spindles, and do the\n>> same with your table data?\n>\n> If you have the luxury of being able to test more than one\n> configuration with something resembling your actual workload, I would\n> strongly recommend including this as one of your configurations.\n> Spreading everything over the larger number of spindles might well\n> out-perform your most carefully hand-crafted tuning of object\n> placement on smaller spindle sets.\n\nThe first thing I'd test would be if having a separate mirror set for\npg_xlog helps. If you have a high write environment moving pg_xlog\noff of the main data set can help a lot.\n", "msg_date": "Mon, 31 Aug 2009 13:15:56 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" } ]
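A minimal sketch of the object-placement approach weighed in this thread is shown below; the mount points and object names are hypothetical, and this only illustrates the mechanism for splitting indexes from heap data, not a recommendation over spreading everything across all 14 spindles:

    -- hypothetical mount points on two separate spindle sets
    CREATE TABLESPACE data_space LOCATION '/mnt/array_data';
    CREATE TABLESPACE idx_space  LOCATION '/mnt/array_idx';
    -- place a table and its index on different arrays
    CREATE TABLE hits (id integer, hit_time integer) TABLESPACE data_space;
    CREATE INDEX hits_time_idx ON hits (hit_time) TABLESPACE idx_space;

Benchmarking this layout against a single large RAID-10 holding both tables and indexes, under a representative workload, is the comparison suggested above.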
[ { "msg_contents": "Dear all,\n\nI am migrating a large PhpBB forum to Drupal. \n\nWhat memory monitoring tool would you recommend? \nI never used one before in PostgreSQL.\n\nThe only figures I read are those from VACUUM VERBOSE ANALYSE to make\nsure I have enough shared memory for indexes.\n\nWhat more complex tools would you recommend to monitor memory usage?\n\nKind regards,\nJean-Michel", "msg_date": "Fri, 28 Aug 2009 23:02:57 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": true, "msg_subject": "Memory monitoring tool" }, { "msg_contents": "\n> What more complex tools would you recommend to monitor memory usage?\n\nvmstat, and sar on Linux. Dtrace on BSD or Solaris, if you're ready to\nwrite some d-script. pg_top for PG memory usage interactively.\n\nSee performance-whack-a-mole: www.pgcon.org/2009/schedule/events/188.en.html\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 28 Aug 2009 16:16:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory monitoring tool" } ]
[ { "msg_contents": ">\n> ---------- Forwarded message ----------\n> From: Joseph S <[email protected]>\n> To: Greg Smith <[email protected]>, [email protected]\n> Date: Fri, 28 Aug 2009 10:25:10 -0400\n> Subject: Re: What exactly is postgres doing during INSERT/UPDATE ?\n> Greg Smith wrote:\n>\n> The main two things you can do to improve this on the database side:\n>>\n>> -Increase checkpoint_segments, which reduces how often updated data has to\n>> be flushed to disk\n>>\n>\n> It fsync is turned off, does this matter so much?\n\n\nIt still matters. The kernel is only willing to have so much dirty data\nsitting in the disk cache. Once it reaches that limit, user processes doing\nwrites start blocking while the kernel flushes stuff on their behalf.\n\nJeff\n\n---------- Forwarded message ----------From: Joseph S <[email protected]>\nTo: Greg Smith <[email protected]>, [email protected]: Fri, 28 Aug 2009 10:25:10 -0400Subject: Re: What exactly is postgres doing during INSERT/UPDATE ?\nGreg Smith wrote:\n\n\nThe main two things you can do to improve this on the database side:\n\n-Increase checkpoint_segments, which reduces how often updated data has to be flushed to disk\n\n\nIt fsync is turned off, does this matter so much?It still matters.  The kernel is only willing to have so much dirty data sitting in the disk cache.  Once it reaches that limit, user processes doing writes start blocking while the kernel flushes stuff on their behalf. \nJeff", "msg_date": "Fri, 28 Aug 2009 17:19:17 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" }, { "msg_contents": "On Fri, Aug 28, 2009 at 8:19 PM, Jeff Janes<[email protected]> wrote:\n>> ---------- Forwarded message ----------\n>> From: Joseph S <[email protected]>\n>> To: Greg Smith <[email protected]>, [email protected]\n>> Date: Fri, 28 Aug 2009 10:25:10 -0400\n>> Subject: Re: What exactly is postgres doing during INSERT/UPDATE ?\n>> Greg Smith wrote:\n>>\n>>> The main two things you can do to improve this on the database side:\n>>>\n>>> -Increase checkpoint_segments, which reduces how often updated data has\n>>> to be flushed to disk\n>>\n>> It fsync is turned off, does this matter so much?\n>\n> It still matters.  The kernel is only willing to have so much dirty data\n> sitting in the disk cache.  Once it reaches that limit, user processes doing\n> writes start blocking while the kernel flushes stuff on their behalf.\n\nit doesn't matter nearly as much though. if you are outrunning the\no/s write cache with fsync off, then it's time to start looking at new\nhardware.\n\nmerlin\n", "msg_date": "Sat, 29 Aug 2009 09:26:27 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" 
}, { "msg_contents": "On Sat, Aug 29, 2009 at 6:26 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Fri, Aug 28, 2009 at 8:19 PM, Jeff Janes<[email protected]> wrote:\n> >> ---------- Forwarded message ----------\n> >> From: Joseph S <[email protected]>\n> >> To: Greg Smith <[email protected]>, [email protected]\n> >> Date: Fri, 28 Aug 2009 10:25:10 -0400\n> >> Subject: Re: What exactly is postgres doing during INSERT/UPDATE ?\n> >> Greg Smith wrote:\n> >>\n> >>> The main two things you can do to improve this on the database side:\n> >>>\n> >>> -Increase checkpoint_segments, which reduces how often updated data has\n> >>> to be flushed to disk\n> >>\n> >> It fsync is turned off, does this matter so much?\n> >\n> > It still matters. The kernel is only willing to have so much dirty data\n> > sitting in the disk cache. Once it reaches that limit, user processes\n> doing\n> > writes start blocking while the kernel flushes stuff on their behalf.\n>\n> it doesn't matter nearly as much though.\n\n\nTrue, but it matters enough that it ought not be ignored. I've run into it\nmore than once, and I haven't been at this very long.\n\n\n> if you are outrunning the\n> o/s write cache with fsync off, then it's time to start looking at new\n> hardware.\n\n\nOr to start looking at tweaking the kernel VM settings. The kernel doesn't\nalways handle these situations as gracefully as it could, and might produce\na practical throughput that is much less than the theoretical one. But\nreducing the frequency of checkpoints is easier than either of these, and\ncheaper than buying new hardware. I don't see why the hardest and most\nexpensive option would be the first choice.\n\n Jeff\n\nOn Sat, Aug 29, 2009 at 6:26 AM, Merlin Moncure <[email protected]> wrote:\nOn Fri, Aug 28, 2009 at 8:19 PM, Jeff Janes<[email protected]> wrote:\n>> ---------- Forwarded message ----------\n>> From: Joseph S <[email protected]>\n>> To: Greg Smith <[email protected]>, [email protected]\n>> Date: Fri, 28 Aug 2009 10:25:10 -0400\n>> Subject: Re: What exactly is postgres doing during INSERT/UPDATE ?\n>> Greg Smith wrote:\n>>\n>>> The main two things you can do to improve this on the database side:\n>>>\n>>> -Increase checkpoint_segments, which reduces how often updated data has\n>>> to be flushed to disk\n>>\n>> It fsync is turned off, does this matter so much?\n>\n> It still matters.  The kernel is only willing to have so much dirty data\n> sitting in the disk cache.  Once it reaches that limit, user processes doing\n> writes start blocking while the kernel flushes stuff on their behalf.\n\nit doesn't matter nearly as much though.  True, but it matters enough that it ought not be ignored.  I've run into it more than once, and I haven't been at this very long.\n if you are outrunning the\no/s write cache with fsync off, then it's time to start looking at new\nhardware.Or to start looking at tweaking the kernel VM settings.  The kernel doesn't always handle these situations as gracefully as it could, and might produce a practical throughput that is much less than the theoretical one.  But reducing the frequency of checkpoints is easier than either of these, and cheaper than buying new hardware.  I don't see why the hardest and most expensive option would be the first choice.\n Jeff", "msg_date": "Sat, 29 Aug 2009 15:01:03 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What exactly is postgres doing during INSERT/UPDATE ?" } ]
[ { "msg_contents": "Hi all;\n\nWe have a table that's > 2billion rows big and growing fast. We've setup \nmonthly partitions for it. Upon running the first of many select * from \nbigTable insert into partition statements (330million rows per month) the \nentire box eventually goes out to lunch.\n\nAny thoughts/suggestions?\n\nThanks in advance \n", "msg_date": "Tue, 1 Sep 2009 02:45:58 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "moving data between tables causes the db to overwhelm the system" }, { "msg_contents": "> Hi all;\n>\n> We have a table that's > 2billion rows big and growing fast. We've setup\n> monthly partitions for it. Upon running the first of many select * from\n> bigTable insert into partition statements (330million rows per month) the\n> entire box eventually goes out to lunch.\n>\n> Any thoughts/suggestions?\n>\n> Thanks in advance\n>\n\nSorry, but your post does not provide enough information, so it's\npractically impossible to give you some suggestions :-(\n\nProvide at least these information:\n\n1) basic info about the hardware (number and type of cpus, amount of RAM,\ncontroller, number of disk drives)\n\n2) more detailed information of the table size and structure (see the\npg_class and pg_stat_* views). Information about indexes and triggers\ncreated on the table\n\n3) explain plan of the problematic queries - in this case the 'select *\nfrom bigtable' etc.\n\n4) detailed description what 'going to lunch' means - does that mean the\nCPU is 100% occupied, or is there a problem with I/O (use vmstat / dstat\nor something like that)\n\nI've probably forgot something, but this might be a good starting point.\n\nregards\nTomas\n\n", "msg_date": "Tue, 1 Sep 2009 10:58:06 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: moving data between tables causes the db to overwhelm the system" }, { "msg_contents": "\n> We have a table that's > 2billion rows big and growing fast. We've setup\n> monthly partitions for it. Upon running the first of many select * from\n> bigTable insert into partition statements (330million rows per month) the\n> entire box eventually goes out to lunch.\n>\n> Any thoughts/suggestions?\n>\n> Thanks in advance\n\n\tDid you create the indexes on the partition before or after inserting the \n330M rows into it ?\n\tWhat is your hardware config, where is xlog ?\n\n", "msg_date": "Tue, 01 Sep 2009 11:26:08 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: moving data between tables causes the db to overwhelm the system" }, { "msg_contents": "On Tuesday 01 September 2009 03:26:08 Pierre Frédéric Caillaud wrote:\n> > We have a table that's > 2billion rows big and growing fast. We've setup\n> > monthly partitions for it. Upon running the first of many select * from\n> > bigTable insert into partition statements (330million rows per month) the\n> > entire box eventually goes out to lunch.\n> >\n> > Any thoughts/suggestions?\n> >\n> > Thanks in advance\n>\n> \tDid you create the indexes on the partition before or after inserting the\n> 330M rows into it ?\n> \tWhat is your hardware config, where is xlog ?\n\n\nIndexes are on the partitions, my bad. 
Also HW is a Dell server with 2 quad \ncores and 32G of ram\n\nwe have a DELL MD3000 disk array with an MD1000 expansion bay, 2 controllers, \n2 HBAs/mount points running RAID 10\n\nThe explain plan looks like this:\nexplain SELECT * from bigTable\nwhere\n\"time\" >= extract ('epoch' from timestamp '2009-08-31 00:00:00')::int4\nand \"time\" <= extract ('epoch' from timestamp '2009-08-31 23:59:59')::int\n;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Index Scan using bigTable_time_index on bigTable (cost=0.00..184.04 rows=1 \nwidth=129)\n Index Cond: ((\"time\" >= 1251676800) AND (\"time\" <= 1251763199))\n(2 rows)\n\n", "msg_date": "Tue, 1 Sep 2009 03:32:32 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: moving data between tables causes the db to overwhelm the system" }, { "msg_contents": "> On Tuesday 01 September 2009 03:26:08 Pierre Frédéric Caillaud wrote:\n>> > We have a table that's > 2billion rows big and growing fast. We've\n>> setup\n>> > monthly partitions for it. Upon running the first of many select *\n>> from\n>> > bigTable insert into partition statements (330million rows per month)\n>> the\n>> > entire box eventually goes out to lunch.\n>> >\n>> > Any thoughts/suggestions?\n>> >\n>> > Thanks in advance\n>>\n>> \tDid you create the indexes on the partition before or after inserting\n>> the\n>> 330M rows into it ?\n>> \tWhat is your hardware config, where is xlog ?\n>\n>\n> Indexes are on the partitions, my bad. Also HW is a Dell server with 2\n> quad\n> cores and 32G of ram\n>\n> we have a DELL MD3000 disk array with an MD1000 expansion bay, 2\n> controllers,\n> 2 HBAs/mount points running RAID 10\n>\n> The explain plan looks like this:\n> explain SELECT * from bigTable\n> where\n> \"time\" >= extract ('epoch' from timestamp '2009-08-31 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2009-08-31 23:59:59')::int\n> ;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------\n> Index Scan using bigTable_time_index on bigTable (cost=0.00..184.04\n> rows=1\n> width=129)\n> Index Cond: ((\"time\" >= 1251676800) AND (\"time\" <= 1251763199))\n> (2 rows)\n\nThis looks like a single row matches your conditions. Have you run ANALYZE\non the table recently? 
Try to run \"ANALYZE BigTable\" and then the explain\nagain.\n\nBTW what version of PostgreSQL are you running?\n\nTomas\n\n", "msg_date": "Tue, 1 Sep 2009 11:54:27 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: moving data between tables causes the db to overwhelm the system" }, { "msg_contents": "\n> Indexes are on the partitions, my bad.\n\nIf you need to insert lots of data, it is faster to create the indexes \nafterwards (and then you can also create them in parallel, since you have \nlots of RAM and cores).\n\n> The explain plan looks like this:\n> explain SELECT * from bigTable\n> where\n> \"time\" >= extract ('epoch' from timestamp '2009-08-31 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2009-08-31 23:59:59')::int\n> ;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------\n> Index Scan using bigTable_time_index on bigTable (cost=0.00..184.04 \n> rows=1\n> width=129)\n> Index Cond: ((\"time\" >= 1251676800) AND (\"time\" <= 1251763199))\n> (2 rows)\n\nWhat is slow, then, is it the insert or is it the select ?\nCan you EXPLAIN ANALYZE the SELECT ?\n\nIf \"bigTable\" is not clustered on \"time\" you'll get lots of random \naccesses, it'll be slow.\n\nIf you want to partition your huge data set by \"time\", and the data isn't \nalready ordered by \"time\" on disk, you could do this :\n\nSET work_mem TO something very large like 10GB since you got 32GB RAM, \ncheck your shared buffers etc first;\nCREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY \"time\"; <- huge sort, \nwill take some time\n\nSET maintenance_work_mem TO something very large;\nCREATE INDEX tmp_time ON tmp( \"time\" );\n\nCREATE TABLE partition1 AS SELECT * FROM tmp WHERE \"time\" BETWEEN \nbeginning AND end;\n(repeat...)\n\nSince tmp is clustered on \"time\" you'll get a nice fast bitmap-scan, and \nyou won't need to seq-scan N times (or randomly index-scan) bigTable.\n", "msg_date": "Tue, 01 Sep 2009 13:09:05 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: moving data between tables causes the db to overwhelm the system" }, { "msg_contents": "\n> If you want to partition your huge data set by \"time\", and the data \n> isn't already ordered by \"time\" on disk, you could do this :\n>\n> SET work_mem TO something very large like 10GB since you got 32GB RAM, \n> check your shared buffers etc first;\n> CREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY \"time\"; <- huge \n> sort, will take some time\n>\n> SET maintenance_work_mem TO something very large;\n> CREATE INDEX tmp_time ON tmp( \"time\" );\n>\n> CREATE TABLE partition1 AS SELECT * FROM tmp WHERE \"time\" BETWEEN \n> beginning AND end;\n> (repeat...)\n>\n> Since tmp is clustered on \"time\" you'll get a nice fast bitmap-scan, \n> and you won't need to seq-scan N times (or randomly index-scan) bigTable.\n>\nI went through the same exercise a couple months ago with a table that \nhad ~1.7 billion rows. I used a similar approach to what's described \nabove but in my case I didn't create the tmp table and did the ORDER BY \nwhen I did each select on the bigTable to do the insert (I didn't have \nmany of them). My data was structured such that this was easier than \ndoing the huge sort. 
In any event, it worked great and my smaller \npartitions are much much faster.\n\nBob\n", "msg_date": "Tue, 01 Sep 2009 07:33:10 -0500", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: moving data between tables causes the db to overwhelm the system" } ]
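A condensed sketch of the bulk-move recipe discussed in this thread follows; the memory settings, table names, and epoch boundaries are placeholders to adapt, and the key points are loading each partition in time order and creating its indexes only afterwards:

    SET work_mem = '1GB';               -- per-sort memory, size to the RAM actually available
    SET maintenance_work_mem = '1GB';   -- used by CREATE INDEX
    -- load one month in time order (epoch values here are for August 2009)
    INSERT INTO bigtable_2009_08
        SELECT * FROM bigtable
        WHERE "time" >= 1249084800 AND "time" <= 1251763199
        ORDER BY "time";
    -- index the partition only after the data is in place
    CREATE INDEX bigtable_2009_08_time_idx ON bigtable_2009_08 ("time");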
[ { "msg_contents": "Hi:\n\nLooks like after postgres db server reboot, first query is very slow\n(10+mins). After the system cache built, query is pretty fast.\nNow the question is how to speed up the first query slow issue?\n\nAny pointers?\n\nThanks\nwei\n\nHi:Looks like after postgres db server reboot, first query is very slow (10+mins). After the system cache built, query is pretty fast.Now the question is how to speed up the first query slow issue?Any pointers?\nThankswei", "msg_date": "Tue, 1 Sep 2009 15:01:41 -0700", "msg_from": "Wei Yan <[email protected]>", "msg_from_op": true, "msg_subject": "Help: how to speed up query after db server reboot" }, { "msg_contents": "On Wed, Sep 2, 2009 at 00:01, Wei Yan<[email protected]> wrote:\n> Hi:\n>\n> Looks like after postgres db server reboot, first query is very slow\n> (10+mins). After the system cache built, query is pretty fast.\n> Now the question is how to speed up the first query slow issue?\n>\n> Any pointers?\n\nSchedule a run of a couple of representative queries right as the\ndatabase has started? That should pre-populate the cache before your\nusers get there, hopefully.\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Thu, 3 Sep 2009 09:51:49 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help: how to speed up query after db server reboot" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Wed, Sep 2, 2009 at 00:01, Wei Yan<[email protected]> wrote:\n>> Looks like after postgres db server reboot, first query is very slow\n>> (10+mins). After the system cache built, query is pretty fast.\n>> Now the question is how to speed up the first query slow issue?\n\n> Schedule a run of a couple of representative queries right as the\n> database has started? That should pre-populate the cache before your\n> users get there, hopefully.\n\nI wonder if VACUUMing his key tables would be a good answer. I bet that\na lot of the problem is swapping in indexes in a slow random-access\nfashion. In recent-model Postgres, VACUUM will do a sequential scan of\nthe indexes (at least for btree) which should be a much more efficient\nway of bringing them into kernel cache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Sep 2009 10:32:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help: how to speed up query after db server reboot " } ]
[ { "msg_contents": "Hi all;\n\nI cant figure out why we're scanning all of our partitions.\n\nWe setup our tables like this:\n\n\nBase Table:\n\nCREATE TABLE url_hits (\n id integer NOT NULL,\n content_type_id integer,\n file_extension_id integer,\n \"time\" integer,\n bytes integer NOT NULL,\n path_id integer,\n protocol public.protocol_enum\n);\n\nPartitions:\ncreate table url_hits_2011_12 (\n check (\n \"time\" >= extract ('epoch' from timestamp '2011-12-01 \n00:00:00')::int4\n and \"time\" <= extract ('epoch' from timestamp '2011-12-31 \n23:59:59')::int4\n )\n) INHERITS (url_hits);\n\n\nCREATE RULE url_hits_2011_12_insert as\nON INSERT TO url_hits\nwhere\n ( \"time\" >= extract ('epoch' from timestamp '2011-12-01 00:00:00')::int4\n and \"time\" <= extract ('epoch' from timestamp '2011-12-31 \n23:59:59')::int4 )\nDO INSTEAD\n INSERT INTO url_hits_2011_12 VALUES (NEW.*) ;\n\n...\n\ncreate table url_hits_2009_08 (\n check (\n \"time\" >= extract ('epoch' from timestamp '2009-08-01 \n00:00:00')::int4\n and \"time\" <= extract ('epoch' from timestamp '2009-08-31 \n23:59:59')::int4\n )\n) INHERITS (url_hits);\n\n\nCREATE RULE url_hits_2009_08_insert as\nON INSERT TO url_hits\nwhere\n ( \"time\" >= extract ('epoch' from timestamp '2009-08-01 00:00:00')::int4\n and \"time\" <= extract ('epoch' from timestamp '2009-08-31 \n23:59:59')::int4 )\nDO INSTEAD\n INSERT INTO url_hits_2009_08 VALUES (NEW.*) ;\n\n... \n\nthe explain plan shows most any query scans/hits all partitions even if we \nspecify the partition key:\n\nexplain select * from pwreport.url_hits where \"time\" > \ndate_part('epoch'::text, '2009-08-12'::timestamp without time zone)::integer; \n QUERY PLAN \n------------------------------------------------------------------------------------------------------ \n Result (cost=0.00..23766294.06 rows=816492723 width=432) \n -> Append (cost=0.00..23766294.06 rows=816492723 width=432) \n -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n 
-> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2009_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2009_09 url_hits (cost=0.00..1838010.76 \nrows=75607779 width=128) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..21927943.80 \nrows=740883348 width=131) \n Filter: (\"time\" > 1250035200) \n(62 rows) \n\n\n\nexplain select * from pwreport.url_hits where \"time\" > 1220227200::int4; \n QUERY PLAN \n------------------------------------------------------------------------------------------------------ \n Result (cost=0.00..23775893.12 rows=965053504 width=432) \n -> Append (cost=0.00..23775893.12 rows=965053504 width=432) \n -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on 
url_hits_2011_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200) \n -> Seq Scan on url_hits_2009_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_09 url_hits (cost=0.00..1847476.45 \nrows=75997156 width=128)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2008_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2008_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2008_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2008_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1220227200)\n -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..21927943.80 \nrows=889054125 width=131)\n Filter: (\"time\" > 1220227200)\n(84 rows)\n\n\n\nAnyone have any 
thoughts why we're scanning all partitions?\n\nWe do have constraint_exclusion on:\n\n# show constraint_exclusion;\n constraint_exclusion\n----------------------\n on\n(1 row)\n\n\nThanks in advance...\n", "msg_date": "Wed, 2 Sep 2009 08:52:30 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "The planner does not yet work as efficiently as it could\nwith child tables. Check the recent mail archives for a\nlong discussion of the same.\n\nRegards,\nKen\n\nOn Wed, Sep 02, 2009 at 08:52:30AM -0600, Kevin Kempter wrote:\n> Hi all;\n> \n> I cant figure out why we're scanning all of our partitions.\n> \n> We setup our tables like this:\n> \n> \n> Base Table:\n> \n> CREATE TABLE url_hits (\n> id integer NOT NULL,\n> content_type_id integer,\n> file_extension_id integer,\n> \"time\" integer,\n> bytes integer NOT NULL,\n> path_id integer,\n> protocol public.protocol_enum\n> );\n> \n> Partitions:\n> create table url_hits_2011_12 (\n> check (\n> \"time\" >= extract ('epoch' from timestamp '2011-12-01 \n> 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2011-12-31 \n> 23:59:59')::int4\n> )\n> ) INHERITS (url_hits);\n> \n> \n> CREATE RULE url_hits_2011_12_insert as\n> ON INSERT TO url_hits\n> where\n> ( \"time\" >= extract ('epoch' from timestamp '2011-12-01 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2011-12-31 \n> 23:59:59')::int4 )\n> DO INSTEAD\n> INSERT INTO url_hits_2011_12 VALUES (NEW.*) ;\n> \n> ...\n> \n> create table url_hits_2009_08 (\n> check (\n> \"time\" >= extract ('epoch' from timestamp '2009-08-01 \n> 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2009-08-31 \n> 23:59:59')::int4\n> )\n> ) INHERITS (url_hits);\n> \n> \n> CREATE RULE url_hits_2009_08_insert as\n> ON INSERT TO url_hits\n> where\n> ( \"time\" >= extract ('epoch' from timestamp '2009-08-01 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2009-08-31 \n> 23:59:59')::int4 )\n> DO INSTEAD\n> INSERT INTO url_hits_2009_08 VALUES (NEW.*) ;\n> \n> ... 
\n> \n> the explain plan shows most any query scans/hits all partitions even if we \n> specify the partition key:\n> \n> explain select * from pwreport.url_hits where \"time\" > \n> date_part('epoch'::text, '2009-08-12'::timestamp without time zone)::integer; \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------ \n> Result (cost=0.00..23766294.06 rows=816492723 width=432) \n> -> Append (cost=0.00..23766294.06 rows=816492723 width=432) \n> -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_11 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_02 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_04 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on 
url_hits_2009_12 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2009_09 url_hits (cost=0.00..1838010.76 \n> rows=75607779 width=128) \n> Filter: (\"time\" > 1250035200) \n> -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..21927943.80 \n> rows=740883348 width=131) \n> Filter: (\"time\" > 1250035200) \n> (62 rows) \n> \n> \n> \n> explain select * from pwreport.url_hits where \"time\" > 1220227200::int4; \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------ \n> Result (cost=0.00..23775893.12 rows=965053504 width=432) \n> -> Append (cost=0.00..23775893.12 rows=965053504 width=432) \n> -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_11 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_02 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_04 url_hits 
(cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200) \n> -> Seq Scan on url_hits_2009_12 url_hits (cost=0.00..12.12 rows=57 \n> width=432) \n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_09 url_hits (cost=0.00..1847476.45 \n> rows=75997156 width=128)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_07 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_06 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_05 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_04 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_03 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_02 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_01 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2008_12 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2008_11 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2008_10 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2008_09 url_hits (cost=0.00..12.12 rows=57 \n> width=432)\n> Filter: (\"time\" > 1220227200)\n> -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..21927943.80 \n> rows=889054125 width=131)\n> Filter: (\"time\" > 1220227200)\n> (84 rows)\n> \n> \n> \n> Anyone have any thoughts why we're scanning all partitions?\n> \n> We do have constraint_exclusion on:\n> \n> # show constraint_exclusion;\n> constraint_exclusion\n> ----------------------\n> on\n> (1 row)\n> \n> \n> Thanks in advance...\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n", "msg_date": "Wed, 2 Sep 2009 09:55:38 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though\n\tcheck key is specified" }, { "msg_contents": "On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter<[email protected]> wrote:\n> Hi all;\n>\n> I cant figure out why we're scanning all of our partitions.\n>\n> We setup our tables like this:\n>\n>\n> Base Table:\n>\n> CREATE TABLE url_hits (\n>    id integer NOT NULL,\n>    content_type_id integer,\n>    file_extension_id integer,\n>    \"time\" integer,\n>    bytes integer NOT NULL,\n>    path_id integer,\n>    protocol public.protocol_enum\n> );\n>\n> Partitions:\n> create 
table url_hits_2011_12 (\n>   check (\n>          \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> 00:00:00')::int4\n>          and \"time\" <= extract ('epoch' from timestamp '2011-12-31\n> 23:59:59')::int4\n>   )\n> ) INHERITS (url_hits);\n>\n>\n> CREATE RULE url_hits_2011_12_insert as\n> ON INSERT TO url_hits\n> where\n>   ( \"time\" >= extract ('epoch' from timestamp '2011-12-01 00:00:00')::int4\n>     and \"time\" <= extract ('epoch' from timestamp '2011-12-31\n> 23:59:59')::int4 )\n> DO INSTEAD\n>  INSERT INTO  url_hits_2011_12 VALUES (NEW.*) ;\n>\n> ...\n>\n> create table url_hits_2009_08 (\n>   check (\n>          \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> 00:00:00')::int4\n>          and \"time\" <= extract ('epoch' from timestamp '2009-08-31\n> 23:59:59')::int4\n>   )\n> ) INHERITS (url_hits);\n>\n>\n> CREATE RULE url_hits_2009_08_insert as\n> ON INSERT TO url_hits\n> where\n>   ( \"time\" >= extract ('epoch' from timestamp '2009-08-01 00:00:00')::int4\n>     and \"time\" <= extract ('epoch' from timestamp '2009-08-31\n> 23:59:59')::int4 )\n> DO INSTEAD\n>  INSERT INTO  url_hits_2009_08 VALUES (NEW.*) ;\n>\n> ...\n>\n> the explain plan shows most any query scans/hits all partitions even if we\n> specify the partition key:\n>\n> explain select * from pwreport.url_hits where \"time\" >\n> date_part('epoch'::text, '2009-08-12'::timestamp without time zone)::integer;\n\nHave you tried using extract here instead of date_part ?\n", "msg_date": "Wed, 2 Sep 2009 09:02:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though\n\tcheck key is specified" }, { "msg_contents": "On Wednesday 02 September 2009 09:02:27 Scott Marlowe wrote:\n> On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter<[email protected]> \nwrote:\n> > Hi all;\n> >\n> > I cant figure out why we're scanning all of our partitions.\n> >\n> > We setup our tables like this:\n> >\n> >\n> > Base Table:\n> >\n> > CREATE TABLE url_hits (\n> > id integer NOT NULL,\n> > content_type_id integer,\n> > file_extension_id integer,\n> > \"time\" integer,\n> > bytes integer NOT NULL,\n> > path_id integer,\n> > protocol public.protocol_enum\n> > );\n> >\n> > Partitions:\n> > create table url_hits_2011_12 (\n> > check (\n> > \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > 00:00:00')::int4\n> > and \"time\" <= extract ('epoch' from timestamp '2011-12-31\n> > 23:59:59')::int4\n> > )\n> > ) INHERITS (url_hits);\n> >\n> >\n> > CREATE RULE url_hits_2011_12_insert as\n> > ON INSERT TO url_hits\n> > where\n> > ( \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > '2011-12-31\n> > 23:59:59')::int4 )\n> > DO INSTEAD\n> > INSERT INTO url_hits_2011_12 VALUES (NEW.*) ;\n> >\n> > ...\n> >\n> > create table url_hits_2009_08 (\n> > check (\n> > \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > 00:00:00')::int4\n> > and \"time\" <= extract ('epoch' from timestamp '2009-08-31\n> > 23:59:59')::int4\n> > )\n> > ) INHERITS (url_hits);\n> >\n> >\n> > CREATE RULE url_hits_2009_08_insert as\n> > ON INSERT TO url_hits\n> > where\n> > ( \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > '2009-08-31\n> > 23:59:59')::int4 )\n> > DO INSTEAD\n> > INSERT INTO url_hits_2009_08 VALUES (NEW.*) ;\n> >\n> > ...\n> >\n> > the explain plan shows most any query scans/hits all 
partitions even if\n> > we specify the partition key:\n> >\n> > explain select * from pwreport.url_hits where \"time\" >\n> > date_part('epoch'::text, '2009-08-12'::timestamp without time\n> > zone)::integer;\n>\n> Have you tried using extract here instead of date_part ?\n\n\nYes, same results:\n\nexplain select * from pwreport.url_hits where \"time\" > extract('epoch' from \ntimestamp '2009-08-12 00:00:00')::int4; \n QUERY PLAN \n------------------------------------------------------------------------------------------------------ \n Result (cost=0.00..23785180.16 rows=817269615 width=432) \n -> Append (cost=0.00..23785180.16 rows=817269615 width=432) \n -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432) \n Filter: (\"time\" > 1250035200) \n -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: 
(\"time\" > 1250035200)\n -> Seq Scan on url_hits_2009_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2009_09 url_hits (cost=0.00..1856896.86 \nrows=76384671 width=128)\n Filter: (\"time\" > 1250035200)\n -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..21927943.80 \nrows=740883348 width=131)\n Filter: (\"time\" > 1250035200)\n(62 rows)\n\n", "msg_date": "Wed, 2 Sep 2009 09:05:18 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "On Wednesday 02 September 2009 08:55:38 Kenneth Marshall wrote:\n> The planner does not yet work as efficiently as it could\n> with child tables. Check the recent mail archives for a\n> long discussion of the same.\n>\n> Regards,\n> Ken\n>\n> On Wed, Sep 02, 2009 at 08:52:30AM -0600, Kevin Kempter wrote:\n> > Hi all;\n> >\n> > I cant figure out why we're scanning all of our partitions.\n> >\n> > We setup our tables like this:\n> >\n> >\n> > Base Table:\n> >\n> > CREATE TABLE url_hits (\n> > id integer NOT NULL,\n> > content_type_id integer,\n> > file_extension_id integer,\n> > \"time\" integer,\n> > bytes integer NOT NULL,\n> > path_id integer,\n> > protocol public.protocol_enum\n> > );\n> >\n> > Partitions:\n> > create table url_hits_2011_12 (\n> > check (\n> > \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > 00:00:00')::int4\n> > and \"time\" <= extract ('epoch' from timestamp '2011-12-31\n> > 23:59:59')::int4\n> > )\n> > ) INHERITS (url_hits);\n> >\n> >\n> > CREATE RULE url_hits_2011_12_insert as\n> > ON INSERT TO url_hits\n> > where\n> > ( \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > '2011-12-31\n> > 23:59:59')::int4 )\n> > DO INSTEAD\n> > INSERT INTO url_hits_2011_12 VALUES (NEW.*) ;\n> >\n> > ...\n> >\n> > create table url_hits_2009_08 (\n> > check (\n> > \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > 00:00:00')::int4\n> > and \"time\" <= extract ('epoch' from timestamp '2009-08-31\n> > 23:59:59')::int4\n> > )\n> > ) INHERITS (url_hits);\n> >\n> >\n> > CREATE RULE url_hits_2009_08_insert as\n> > ON INSERT TO url_hits\n> > where\n> > ( \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > '2009-08-31\n> > 23:59:59')::int4 )\n> > DO INSTEAD\n> > INSERT INTO url_hits_2009_08 VALUES (NEW.*) ;\n> >\n> > ...\n> >\n> > the explain plan shows most any query scans/hits all partitions even if\n> > we specify the partition key:\n> >\n> > explain select * from pwreport.url_hits where \"time\" >\n> > date_part('epoch'::text, '2009-08-12'::timestamp without time\n> > zone)::integer; QUERY PLAN\n> > -------------------------------------------------------------------------\n> >----------------------------- Result (cost=0.00..23766294.06\n> > rows=816492723 width=432)\n> > -> Append (cost=0.00..23766294.06 rows=816492723 width=432)\n> > -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 
1250035200)\n> > -> Seq Scan on url_hits_2011_11 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_02 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_04 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2009_12 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2009_09 url_hits \n> > (cost=0.00..1838010.76 rows=75607779 width=128)\n> > Filter: (\"time\" > 1250035200)\n> > -> Seq Scan on url_hits_2009_08 url_hits \n> > (cost=0.00..21927943.80 rows=740883348 width=131)\n> > Filter: (\"time\" > 1250035200)\n> > (62 
rows)\n> >\n> >\n> >\n> > explain select * from pwreport.url_hits where \"time\" > 1220227200::int4;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> >----------------------------- Result (cost=0.00..23775893.12\n> > rows=965053504 width=432)\n> > -> Append (cost=0.00..23775893.12 rows=965053504 width=432)\n> > -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_11 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_02 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_04 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_12 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > 
Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_09 url_hits \n> > (cost=0.00..1847476.45 rows=75997156 width=128)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_07 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_06 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_05 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_04 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_03 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_02 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_01 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2008_12 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2008_11 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2008_10 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2008_09 url_hits (cost=0.00..12.12\n> > rows=57 width=432)\n> > Filter: (\"time\" > 1220227200)\n> > -> Seq Scan on url_hits_2009_08 url_hits \n> > (cost=0.00..21927943.80 rows=889054125 width=131)\n> > Filter: (\"time\" > 1220227200)\n> > (84 rows)\n> >\n> >\n> >\n> > Anyone have any thoughts why we're scanning all partitions?\n> >\n> > We do have constraint_exclusion on:\n> >\n> > # show constraint_exclusion;\n> > constraint_exclusion\n> > ----------------------\n> > on\n> > (1 row)\n> >\n> >\n> > Thanks in advance...\n\ncan you point me to the thread, or what the subject line was?\n", "msg_date": "Wed, 2 Sep 2009 09:05:56 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "On Wed, Sep 2, 2009 at 8:05 AM, Kevin Kempter <[email protected]>wrote:\n\n>\n> > > the explain plan shows most any query scans/hits all partitions even if\n> > > we specify the partition key:\n> > >\n> > > explain select * from pwreport.url_hits where \"time\" >\n> > > date_part('epoch'::text, '2009-08-12'::timestamp without time\n> > > zone)::integer; QUERY PLAN\n>\n>\nDoes the plan change if you use a hard-coded timestamp in your query?\n\nOn Wed, Sep 2, 2009 at 8:05 AM, Kevin Kempter <[email protected]> wrote:\n\n> > the explain plan shows most any query scans/hits all partitions even if\n> > we specify the partition key:\n> >\n> > explain select * from pwreport.url_hits where \"time\" >\n> > date_part('epoch'::text, '2009-08-12'::timestamp without time\n> > zone)::integer; QUERY PLAN\nDoes the plan change if you use a hard-coded timestamp in your query?", "msg_date": "Wed, 2 Sep 2009 08:16:28 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries 
hitting all partitions even though\n\tcheck key is specified" }, { "msg_contents": "Check the caveats at\nhttp://www.postgresql.org/docs/current/static/ddl-partitioning.html\n\n\"Constraint exclusion only works when the query's WHERE clause contains\nconstants. A parameterized query will not be optimized, since the planner\ncannot know which partitions the parameter value might select at run time.\nFor the same reason, \"stable\" functions such as CURRENT_DATE must be\navoided.\"\n\nI think this applies to both your query and the CHECK statement in the table\ndefinition.\n\n-Greg Jaman\n\n\n\nOn Wed, Sep 2, 2009 at 8:05 AM, Kevin Kempter <[email protected]>wrote:\n\n> On Wednesday 02 September 2009 09:02:27 Scott Marlowe wrote:\n> > On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter<[email protected]\n> >\n> wrote:\n> > > Hi all;\n> > >\n> > > I cant figure out why we're scanning all of our partitions.\n> > >\n> > > We setup our tables like this:\n> > >\n> > >\n> > > Base Table:\n> > >\n> > > CREATE TABLE url_hits (\n> > > id integer NOT NULL,\n> > > content_type_id integer,\n> > > file_extension_id integer,\n> > > \"time\" integer,\n> > > bytes integer NOT NULL,\n> > > path_id integer,\n> > > protocol public.protocol_enum\n> > > );\n> > >\n> > > Partitions:\n> > > create table url_hits_2011_12 (\n> > > check (\n> > > \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > > 00:00:00')::int4\n> > > and \"time\" <= extract ('epoch' from timestamp '2011-12-31\n> > > 23:59:59')::int4\n> > > )\n> > > ) INHERITS (url_hits);\n> > >\n> > >\n> > > CREATE RULE url_hits_2011_12_insert as\n> > > ON INSERT TO url_hits\n> > > where\n> > > ( \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > > '2011-12-31\n> > > 23:59:59')::int4 )\n> > > DO INSTEAD\n> > > INSERT INTO url_hits_2011_12 VALUES (NEW.*) ;\n> > >\n> > > ...\n> > >\n> > > create table url_hits_2009_08 (\n> > > check (\n> > > \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > > 00:00:00')::int4\n> > > and \"time\" <= extract ('epoch' from timestamp '2009-08-31\n> > > 23:59:59')::int4\n> > > )\n> > > ) INHERITS (url_hits);\n> > >\n> > >\n> > > CREATE RULE url_hits_2009_08_insert as\n> > > ON INSERT TO url_hits\n> > > where\n> > > ( \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > > '2009-08-31\n> > > 23:59:59')::int4 )\n> > > DO INSTEAD\n> > > INSERT INTO url_hits_2009_08 VALUES (NEW.*) ;\n> > >\n> > > ...\n> > >\n> > > the explain plan shows most any query scans/hits all partitions even if\n> > > we specify the partition key:\n> > >\n> > > explain select * from pwreport.url_hits where \"time\" >\n> > > date_part('epoch'::text, '2009-08-12'::timestamp without time\n> > > zone)::integer;\n> >\n> > Have you tried using extract here instead of date_part ?\n>\n>\n> Yes, same results:\n>\n> explain select * from pwreport.url_hits where \"time\" > extract('epoch' from\n> timestamp '2009-08-12 00:00:00')::int4;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------\n> Result (cost=0.00..23785180.16 rows=817269615 width=432)\n> -> Append (cost=0.00..23785180.16 rows=817269615 width=432)\n> -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq 
Scan on url_hits_2011_11 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_10 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_09 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_08 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_07 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_06 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_05 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_04 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_03 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_02 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2011_01 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_12 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_11 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_10 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_09 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_08 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_07 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_06 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_05 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_04 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_03 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_02 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2010_01 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2009_12 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2009_11 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2009_10 url_hits (cost=0.00..12.12\n> rows=57\n> width=432)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2009_09 url_hits (cost=0.00..1856896.86\n> rows=76384671 width=128)\n> Filter: (\"time\" > 1250035200)\n> -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..21927943.80\n> rows=740883348 width=131)\n> Filter: (\"time\" > 1250035200)\n> (62 rows)\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your 
subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nCheck the caveats at http://www.postgresql.org/docs/current/static/ddl-partitioning.html\"Constraint exclusion only works when the query's WHERE clause contains constants. A parameterized query will not be optimized, since the planner cannot know which partitions the parameter value might select at run time. For the same reason, \"stable\" functions such as CURRENT_DATE must be avoided.\"\nI think this applies to both your query and the CHECK statement in the table definition.-Greg JamanOn Wed, Sep 2, 2009 at 8:05 AM, Kevin Kempter <[email protected]> wrote:\nOn Wednesday 02 September 2009 09:02:27 Scott Marlowe wrote:\n\n> On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter<[email protected]>\nwrote:\n> > Hi all;\n> >\n> > I cant figure out why we're scanning all of our partitions.\n> >\n> > We setup our tables like this:\n> >\n> >\n> > Base Table:\n> >\n> > CREATE TABLE url_hits (\n> >    id integer NOT NULL,\n> >    content_type_id integer,\n> >    file_extension_id integer,\n> >    \"time\" integer,\n> >    bytes integer NOT NULL,\n> >    path_id integer,\n> >    protocol public.protocol_enum\n> > );\n> >\n> > Partitions:\n> > create table url_hits_2011_12 (\n> >   check (\n> >          \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > 00:00:00')::int4\n> >          and \"time\" <= extract ('epoch' from timestamp '2011-12-31\n> > 23:59:59')::int4\n> >   )\n> > ) INHERITS (url_hits);\n> >\n> >\n> > CREATE RULE url_hits_2011_12_insert as\n> > ON INSERT TO url_hits\n> > where\n> >   ( \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > '2011-12-31\n> > 23:59:59')::int4 )\n> > DO INSTEAD\n> >  INSERT INTO  url_hits_2011_12 VALUES (NEW.*) ;\n> >\n> > ...\n> >\n> > create table url_hits_2009_08 (\n> >   check (\n> >          \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > 00:00:00')::int4\n> >          and \"time\" <= extract ('epoch' from timestamp '2009-08-31\n> > 23:59:59')::int4\n> >   )\n> > ) INHERITS (url_hits);\n> >\n> >\n> > CREATE RULE url_hits_2009_08_insert as\n> > ON INSERT TO url_hits\n> > where\n> >   ( \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> > 00:00:00')::int4 and \"time\" <= extract ('epoch' from timestamp\n> > '2009-08-31\n> > 23:59:59')::int4 )\n> > DO INSTEAD\n> >  INSERT INTO  url_hits_2009_08 VALUES (NEW.*) ;\n> >\n> > ...\n> >\n> > the explain plan shows most any query scans/hits all partitions even if\n> > we specify the partition key:\n> >\n> > explain select * from pwreport.url_hits where \"time\" >\n> > date_part('epoch'::text, '2009-08-12'::timestamp without time\n> > zone)::integer;\n>\n> Have you tried using extract here instead of date_part ?\n\n\nYes, same results:\n\nexplain select * from pwreport.url_hits where \"time\" > extract('epoch' from\ntimestamp '2009-08-12 00:00:00')::int4;\n                                              QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Result  (cost=0.00..23785180.16 rows=817269615 width=432)\n   ->  Append  (cost=0.00..23785180.16 rows=817269615 width=432)\n         ->  Seq Scan on url_hits  (cost=0.00..12.12 rows=57 width=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_12 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_11 
url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_10 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_09 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_08 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_07 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_06 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_05 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_04 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_03 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_02 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2011_01 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_12 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_11 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_10 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_09 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_08 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_07 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_06 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_05 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_04 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_03 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_02 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2010_01 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2009_12 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2009_11 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2009_10 url_hits  (cost=0.00..12.12 rows=57\nwidth=432)\n               Filter: 
(\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2009_09 url_hits  (cost=0.00..1856896.86\nrows=76384671 width=128)\n               Filter: (\"time\" > 1250035200)\n         ->  Seq Scan on url_hits_2009_08 url_hits  (cost=0.00..21927943.80\nrows=740883348 width=131)\n               Filter: (\"time\" > 1250035200)\n(62 rows)\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 2 Sep 2009 08:17:27 -0700", "msg_from": "Greg Jaman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though\n\tcheck key is specified" }, { "msg_contents": "Kevin Kempter <[email protected]> writes:\n> I cant figure out why we're scanning all of our partitions.\n\nThe example works as expected for me:\n\nregression=# CREATE TABLE url_hits (\n id integer NOT NULL,\n content_type_id integer,\n file_extension_id integer,\n \"time\" integer,\n bytes integer NOT NULL,\n path_id integer);\nCREATE TABLE\nregression=# create table url_hits_2011_12 (\n check ( \n \"time\" >= extract ('epoch' from timestamp '2011-12-01 \n00:00:00')::int4\n and \"time\" <= extract ('epoch' from timestamp '2011-12-31 \n23:59:59')::int4\n )\n) INHERITS (url_hits);\nCREATE TABLE\nregression=# create table url_hits_2009_08 (\n check ( \n \"time\" >= extract ('epoch' from timestamp '2009-08-01 \n00:00:00')::int4\n and \"time\" <= extract ('epoch' from timestamp '2009-08-31 \n23:59:59')::int4\n )\n) INHERITS (url_hits);\nCREATE TABLE\nregression=# explain select * from url_hits where \"time\" < \ndate_part('epoch'::text, '2009-08-12'::timestamp without time zone)::integer; \n QUERY PLAN \n-----------------------------------------------------------------------------------------\n Result (cost=0.00..82.50 rows=1401 width=24)\n -> Append (cost=0.00..82.50 rows=1401 width=24)\n -> Seq Scan on url_hits (cost=0.00..27.50 rows=467 width=24)\n Filter: (\"time\" < 1250049600)\n -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..27.50 rows=467 width=24)\n Filter: (\"time\" < 1250049600)\n -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..27.50 rows=467 width=24)\n Filter: (\"time\" < 1250049600)\n(8 rows)\n\nregression=# set constraint_exclusion TO 1;\nSET\nregression=# explain select * from url_hits where \"time\" < \ndate_part('epoch'::text, '2009-08-12'::timestamp without time zone)::integer; \n QUERY PLAN \n-----------------------------------------------------------------------------------------\n Result (cost=0.00..55.00 rows=934 width=24)\n -> Append (cost=0.00..55.00 rows=934 width=24)\n -> Seq Scan on url_hits (cost=0.00..27.50 rows=467 width=24)\n Filter: (\"time\" < 1250049600)\n -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..27.50 rows=467 width=24)\n Filter: (\"time\" < 1250049600)\n(6 rows)\n\n\nYou sure you remembered those fiddly little casts everywhere?\n(Frankly, declaring \"time\" as integer and not timestamp here strikes\nme as utter lunacy.) 
What PG version are you using?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Sep 2009 11:19:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "On Wed, Sep 2, 2009 at 4:05 PM, Kevin Kempter<[email protected]> wrote:\n> explain select * from pwreport.url_hits where \"time\" > extract('epoch' from\n> timestamp '2009-08-12 00:00:00')::int4;\n>\n\nHm. Actually I would have thought this would work. You're using\n\"timestamp\" which defaults to without timezone and\ndate_part(text,timestamp) is marked immutable. So the condition in the\nwhree clause is being inlined at plan time so it's just a simple\ncomparison against an integer. That does appear to be successfully\nhappening.\n\nI think what's happening is that the constraints are not being inlined\nand the planner is not inlining them before comparing them to the\nwhere clause. I wonder if this worked in the past or not.\n\nYou could make things work by defining your constraints to use the\ninteger results of those expressions explicitly. You could even do\nwrite a simple perl script (or insert favourite scripting language) to\ngenerate the constraint definitions from timestamps if you wanted.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 2 Sep 2009 16:22:19 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "On Wednesday 02 September 2009 09:19:20 Tom Lane wrote:\n> Kevin Kempter <[email protected]> writes:\n> > I cant figure out why we're scanning all of our partitions.\n>\n> The example works as expected for me:\n>\n> regression=# CREATE TABLE url_hits (\n> id integer NOT NULL,\n> content_type_id integer,\n> file_extension_id integer,\n> \"time\" integer,\n> bytes integer NOT NULL,\n> path_id integer);\n> CREATE TABLE\n> regression=# create table url_hits_2011_12 (\n> check (\n> \"time\" >= extract ('epoch' from timestamp '2011-12-01\n> 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2011-12-31\n> 23:59:59')::int4\n> )\n> ) INHERITS (url_hits);\n> CREATE TABLE\n> regression=# create table url_hits_2009_08 (\n> check (\n> \"time\" >= extract ('epoch' from timestamp '2009-08-01\n> 00:00:00')::int4\n> and \"time\" <= extract ('epoch' from timestamp '2009-08-31\n> 23:59:59')::int4\n> )\n> ) INHERITS (url_hits);\n> CREATE TABLE\n> regression=# explain select * from url_hits where \"time\" <\n> date_part('epoch'::text, '2009-08-12'::timestamp without time\n> zone)::integer; QUERY PLAN\n> ---------------------------------------------------------------------------\n>-------------- Result (cost=0.00..82.50 rows=1401 width=24)\n> -> Append (cost=0.00..82.50 rows=1401 width=24)\n> -> Seq Scan on url_hits (cost=0.00..27.50 rows=467 width=24)\n> Filter: (\"time\" < 1250049600)\n> -> Seq Scan on url_hits_2011_12 url_hits (cost=0.00..27.50\n> rows=467 width=24) Filter: (\"time\" < 1250049600)\n> -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..27.50\n> rows=467 width=24) Filter: (\"time\" < 1250049600)\n> (8 rows)\n>\n> regression=# set constraint_exclusion TO 1;\n> SET\n> regression=# explain select * from url_hits where \"time\" <\n> date_part('epoch'::text, '2009-08-12'::timestamp without time\n> zone)::integer; QUERY PLAN\n> 
---------------------------------------------------------------------------\n>-------------- Result (cost=0.00..55.00 rows=934 width=24)\n> -> Append (cost=0.00..55.00 rows=934 width=24)\n> -> Seq Scan on url_hits (cost=0.00..27.50 rows=467 width=24)\n> Filter: (\"time\" < 1250049600)\n> -> Seq Scan on url_hits_2009_08 url_hits (cost=0.00..27.50\n> rows=467 width=24) Filter: (\"time\" < 1250049600)\n> (6 rows)\n>\n>\n> You sure you remembered those fiddly little casts everywhere?\n> (Frankly, declaring \"time\" as integer and not timestamp here strikes\n> me as utter lunacy.) What PG version are you using?\n>\n> \t\t\tregards, tom lane\n\n\nI actually inherited the whole \"time\" scenario - agreed, its crazy.\n\nIn any case I ran the exact same query as you and it still scans most (but not \nall) partitions. Were on version \n\n \npwreport=# set constraint_exclusion TO 1;SET \npwreport=# \nexplain select * from pwreport.url_hits where \"time\" < \ndate_part('epoch'::text, '2009-08-12'::timestamp without time zone)::integer; \n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..9677473.91 rows=148258840 width=432)\n -> Append (cost=0.00..9677473.91 rows=148258840 width=432)\n -> Seq Scan on url_hits (cost=0.00..12.12 rows=57 width=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2009_07 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2009_06 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2009_05 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2009_04 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2009_03 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2009_02 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2009_01 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2008_12 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2008_11 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2008_10 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Seq Scan on url_hits_2008_09 url_hits (cost=0.00..12.12 rows=57 \nwidth=432)\n Filter: (\"time\" < 1250035200)\n -> Index Scan using url_hits_2009_08_time_index on url_hits_2009_08 \nurl_hits (cost=0.00..9677328.41 rows=148258156 width=131)\n Index Cond: (\"time\" < 1250035200)\n(28 rows)\n\n> id integer NOT NULL,\n> content_type_id integer,\n> file_extension_id integer,\n> \"time\" integer,\n> bytes integer NOT NULL,\n> path_id integer);\n\n\nAlso, we do have indexes on the child table, will this change things?\n\n\\d url_hits_2009_08 \n Table \"url_hits_2009_08\" \n Column | Type | \nModifiers \n-------------------+-----------------------+---------------------------------------------------------------- \n id | integer | not null default \nnextval('url_hits_id_seq'::regclass)\n direction | proxy_direction_enum | not null\n content_type_id | integer |\n file_extension_id | integer |\n time | integer |\n bytes | integer | not null\n path_id | integer |\nIndexes:\n 
\"url_hits_2009_08_pk\" PRIMARY KEY, btree (id)\n \"url_hits_2009_08_time_index\" btree (\"time\")\nCheck constraints:\n \"url_hits_2009_08_time_check\" CHECK (\"time\" >= date_part('epoch'::text, \n'2009-08-01 00:00:00'::timestamp without time zone)::integer AND \"time\" <= \ndate_part('epoch'::text, '2009-08-31 23:59:59'::timestamp without time \nzone)::integer)\nInherits: url_hits\nTablespace: \"pwreport_1000\"\n", "msg_date": "Wed, 2 Sep 2009 09:39:02 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "On Wed, 2009-09-02 at 09:39 -0600, Kevin Kempter wrote:\n\n> >\n> > You sure you remembered those fiddly little casts everywhere?\n> > (Frankly, declaring \"time\" as integer and not timestamp here strikes\n> > me as utter lunacy.) What PG version are you using?\n> >\n> > \t\t\tregards, tom lane\n> \n\nAs far as I know constraint exclusion doesn't work with date_part or\nextract().\n\nThe following caveats apply to constraint exclusion: \n\n * Constraint exclusion only works when the query's WHERE clause\n contains constants. A parameterized query will not be optimized,\n since the planner cannot know which partitions the parameter\n value might select at run time. For the same reason, \"stable\"\n functions such as CURRENT_DATE must be avoided. \n \nhttp://www.postgresql.org/docs/8.3/static/ddl-partitioning.html\n\nOr did I miss something?\n\nJoshua D. Drake\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\n\n\n", "msg_date": "Wed, 02 Sep 2009 08:59:01 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though\n\tcheck key is specified" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n> As far as I know constraint exclusion doesn't work with date_part or\n> extract().\n\nUh, you clipped the example in my message showing that it does,\nat least in the particular case Kevin showed us.\n\nThere are some variants of date_part that aren't immutable, but timestamp\nwithout tz isn't one of them.\n\nStill, I agree that not depending on it would be better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Sep 2009 13:23:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "Kevin Kempter <[email protected]> writes:\n> In any case I ran the exact same query as you and it still scans most (but not \n> all) partitions.\n\nAFAICT it's scanning the right partitions in this example. What's\ndifferent in the case where it scans all?\n\n> Were on version \n\nThis seems to have got truncated ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Sep 2009 14:22:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though check key is\n\tspecified" }, { "msg_contents": "\n\nOn 9/2/09 8:59 AM, \"Joshua D. Drake\" <[email protected]> wrote:\n\n> On Wed, 2009-09-02 at 09:39 -0600, Kevin Kempter wrote:\n> \n>>> \n>>> You sure you remembered those fiddly little casts everywhere?\n>>> (Frankly, declaring \"time\" as integer and not timestamp here strikes\n>>> me as utter lunacy.) 
What PG version are you using?\n>>> \n>>> regards, tom lane\n>> \n> \n> As far as I know constraint exclusion doesn't work with date_part or\n> extract().\n> \n> The following caveats apply to constraint exclusion:\n> \n> * Constraint exclusion only works when the query's WHERE clause\n> contains constants. A parameterized query will not be optimized,\n> since the planner cannot know which partitions the parameter\n> value might select at run time. For the same reason, \"stable\"\n> functions such as CURRENT_DATE must be avoided.\n> \n> http://www.postgresql.org/docs/8.3/static/ddl-partitioning.html\n> \n> Or did I miss something?\n\nI've only ever seen it work for constants. Partitioning by date works fine\nas far as I know no matter how you set the constraint rule up (functions are\nfine here, but slower). But the query itself has to submit a constant in\nthe WHERE clause. Prepared statements and parameterization on the query\nwon't work either.\nFor dates, literals like 'yesterday' work, but function equivalents don't.\nBasically if the planner interprets the where condition on the column as a\nconstant (even if resolving that constant calls a function, such as\n'yesterday') it will work. Otherwise, it won't.\n\n\n> \n> Joshua D. Drake\n> \n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\n> Consulting, Training, Support, Custom Development, Engineering\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 2 Sep 2009 11:28:51 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though\n\tcheck key is specified" }, { "msg_contents": "On 9/2/09 10:05 AM, Kevin Kempter wrote:\n> On Wednesday 02 September 2009 09:02:27 Scott Marlowe wrote:\n>> On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter<[email protected]> \n> wrote:\n>>> Hi all;\n>>>\n>>> I cant figure out why we're scanning all of our partitions.\n\nI don't think extract() is immutable, which would pretty much invalidate\nyour check constraints as far as CE is concerned.\n\nI suggest feeding the actual numeric values to the check constraints.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 12 Jul 2010 22:01:08 -0500", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though\n\tcheck key is specified" }, { "msg_contents": "On Mon, 2010-07-12 at 22:01 -0500, Josh Berkus wrote:\n> On 9/2/09 10:05 AM, Kevin Kempter wrote:\n> > On Wednesday 02 September 2009 09:02:27 Scott Marlowe wrote:\n> >> On Wed, Sep 2, 2009 at 8:52 AM, Kevin Kempter<[email protected]> \n> > wrote:\n> >>> Hi all;\n> >>>\n> >>> I cant figure out why we're scanning all of our partitions.\n> \n> I don't think extract() is immutable, which would pretty much invalidate\n> your check constraints as far as CE is concerned.\n\nCorrect.\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\n\n", "msg_date": "Mon, 12 Jul 2010 21:00:48 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition queries hitting all partitions even though\n\tcheck key is specified" } ]
[ { "msg_contents": "\nI've got a table set up with an XML field that I would like to search on with\n2.5 million records. The xml are serialized objects from my application\nwhich are too complex to break out into separate tables. I'm trying to run a\nquery similar to this:\n\n\tSELECT serialized_object as outVal\n\t from object where\n\t(\n\tarray_to_string(xpath('/a:root/a:Identification/b:ObjectId/text()',\nserialized_object, \n ARRAY\n [\n ARRAY['a', 'http://schemas.datacontract.org/2004/07/Objects'],\n ARRAY['b', 'http://schemas.datacontract.org/2004/07/Security']\n \n ]), ' ') = 'fdc3da1f-060f-4c34-9c30-d9334d9272ae'\n\n\t)\n\tlimit 1000;\n\nI've also set up an index on the xpath query like this...\n\nCREATE INDEX concurrently\nidx_object_nodeid\nON\nobject\nUSING\nbtree(\n\n cast(xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object, \n ARRAY\n [\n ARRAY['a', 'http://schemas.datacontract.org/2004/07/Objects'],\n ARRAY['b', 'http://schemas.datacontract.org/2004/07/Security']\n \n ])as text[])\n);\n\nThe query takes around 30 minutes to complete with or without the index in\nplace and does not cache the query. Additionally the EXPLAIN say that the\nindex is not being used. I've looked everywhere but can't seem to find solid\ninfo on how to achieve this. Any ideas would be greatly appreciated.\n-- \nView this message in context: http://www.nabble.com/Slow-select-times-on-select-with-xpath-tp25259351p25259351.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 2 Sep 2009 08:04:04 -0700 (PDT)", "msg_from": "astro77 <[email protected]>", "msg_from_op": true, "msg_subject": "Slow select times on select with xpath" }, { "msg_contents": "astro77 <[email protected]> wrote:\n \n> I've got a table set up with an XML field that I would like to search\non \n> with\n> 2.5 million records. The xml are serialized objects from my\napplication\n> which are too complex to break out into separate tables. I'm trying\nto run a\n> query similar to this:\n> \n> \tSELECT serialized_object as outVal\n> \t from object where\n> \t(\n>\n\tarray_to_string(xpath('/a:root/a:Identification/b:ObjectId/text()',\n> serialized_object, \n> ARRAY\n> [\n> ARRAY['a',\n'http://schemas.datacontract.org/2004/07/Objects'],\n> ARRAY['b',\n'http://schemas.datacontract.org/2004/07/Security']\n> \n> ]), ' ') = 'fdc3da1f-060f-4c34-9c30-d9334d9272ae'\n> \n> \t)\n> \tlimit 1000;\n \nI would try to minimize how many XML values it had to read, parse, and\nsearch. The best approach that comes to mind would be to use tsearch2\ntechniques (with a GIN or GiST index on the tsvector) to identify\nwhich rows contain 'fdc3da1f-060f-4c34-9c30-d9334d9272ae', and use AND\nto combine that with your xpath search.\n \n-Kevin\n", "msg_date": "Thu, 03 Sep 2009 10:27:50 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "On Wed, Sep 2, 2009 at 11:04 AM, astro77<[email protected]> wrote:\n>\n> I've got a table set up with an XML field that I would like to search on with\n> 2.5 million records. The xml are serialized objects from my application\n> which are too complex to break out into separate tables. 
I'm trying to run a\n> query similar to this:\n>\n>        SELECT  serialized_object as outVal\n>         from object  where\n>        (\n>        array_to_string(xpath('/a:root/a:Identification/b:ObjectId/text()',\n> serialized_object,\n>             ARRAY\n>             [\n>             ARRAY['a', 'http://schemas.datacontract.org/2004/07/Objects'],\n>             ARRAY['b', 'http://schemas.datacontract.org/2004/07/Security']\n>\n>             ]), ' ') = 'fdc3da1f-060f-4c34-9c30-d9334d9272ae'\n>\n>        )\n>        limit 1000;\n>\n> I've also set up an index on the xpath query like this...\n>\n> CREATE INDEX concurrently\n> idx_object_nodeid\n> ON\n> object\n> USING\n> btree(\n>\n>  cast(xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,\n>             ARRAY\n>             [\n>             ARRAY['a', 'http://schemas.datacontract.org/2004/07/Objects'],\n>             ARRAY['b', 'http://schemas.datacontract.org/2004/07/Security']\n>\n>             ])as text[])\n> );\n>\n> The query takes around 30 minutes to complete with or without the index in\n> place and does not cache the query. Additionally the EXPLAIN say that the\n> index is not being used. I've looked everywhere but can't seem to find solid\n> info on how to achieve this. Any ideas would be greatly appreciated.\n\nWhy do you have a cast in the index definition?\n\n...Robert\n", "msg_date": "Thu, 3 Sep 2009 14:25:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "\nI was receiving an error that an XML field does not support the various\nindexes available in postgresql. Is there an example of how to do this\nproperly?\n\n\nRobert Haas wrote:\n> \n> On Wed, Sep 2, 2009 at 11:04 AM, astro77<[email protected]> wrote:\n>>\n>> I've got a table set up with an XML field that I would like to search on\n>> with\n>> 2.5 million records. The xml are serialized objects from my application\n>> which are too complex to break out into separate tables. I'm trying to\n>> run a\n>> query similar to this:\n>>\n>>        SELECT  serialized_object as outVal\n>>         from object  where\n>>        (\n>>      \n>>  array_to_string(xpath('/a:root/a:Identification/b:ObjectId/text()',\n>> serialized_object,\n>>             ARRAY\n>>             [\n>>             ARRAY['a',\n>> 'http://schemas.datacontract.org/2004/07/Objects'],\n>>             ARRAY['b',\n>> 'http://schemas.datacontract.org/2004/07/Security']\n>>\n>>             ]), ' ') = 'fdc3da1f-060f-4c34-9c30-d9334d9272ae'\n>>\n>>        )\n>>        limit 1000;\n>>\n>> I've also set up an index on the xpath query like this...\n>>\n>> CREATE INDEX concurrently\n>> idx_object_nodeid\n>> ON\n>> object\n>> USING\n>> btree(\n>>\n>>  cast(xpath('/a:root/a:Identification/b:ObjectId/text()',\n>> serialized_object,\n>>             ARRAY\n>>             [\n>>             ARRAY['a',\n>> 'http://schemas.datacontract.org/2004/07/Objects'],\n>>             ARRAY['b',\n>> 'http://schemas.datacontract.org/2004/07/Security']\n>>\n>>             ])as text[])\n>> );\n>>\n>> The query takes around 30 minutes to complete with or without the index\n>> in\n>> place and does not cache the query. Additionally the EXPLAIN say that the\n>> index is not being used. I've looked everywhere but can't seem to find\n>> solid\n>> info on how to achieve this. 
Any ideas would be greatly appreciated.\n> \n> Why do you have a cast in the index definition?\n> \n> ...Robert\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Slow-select-times-on-select-with-xpath-tp25259351p25283175.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 3 Sep 2009 13:06:26 -0700 (PDT)", "msg_from": "astro77 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "On Thu, Sep 3, 2009 at 4:06 PM, astro77<[email protected]> wrote:\n> I was receiving an error that an XML field does not support the various\n> indexes available in postgresql.\n\nPlease post what happens when you try.\n\n> Is there an example of how to do this\n> properly?\n\nNot sure.\n\n...Robert\n", "msg_date": "Thu, 3 Sep 2009 17:19:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "\nCREATE INDEX CONCURRENTLY idx_serializedxml\n ON \"object\" (serialized_object ASC NULLS LAST);\n\nyields the error:\nERROR: data type xml has no default operator class for access method \"btree\"\n\nThe same error occurs when I try to use the other access methods as well.\n\n\nOn Thu, Sep 3, 2009 at 4:06 PM, astro77<[email protected]> wrote:\n> I was receiving an error that an XML field does not support the various\n> indexes available in postgresql.\n\nPlease post what happens when you try.\n\n\n-- \nView this message in context: http://www.nabble.com/Slow-select-times-on-select-with-xpath-tp25259351p25530433.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Mon, 21 Sep 2009 12:02:06 -0700 (PDT)", "msg_from": "astro77 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "\nThanks Kevin. I thought about using tsearch2 but I need to be able to select\nexact values on other numerical queries and cannot use \"contains\" queries.\nIt's got to be fast so I cannot have lots of records returned and have to do\nsecondary processing on the xml for the records which contain the exact\nvalue I'm looking for. This is one of the reasons I moved from using Lucene\nfor searching. I hope this makes sense.\n\n\nKevin Grittner wrote:\n> wrote:\n> \n> \n> I would try to minimize how many XML values it had to read, parse, and\n> search. 
The best approach that comes to mind would be to use tsearch2\n> techniques (with a GIN or GiST index on the tsvector) to identify\n> which rows contain 'fdc3da1f-060f-4c34-9c30-d9334d9272ae', and use AND\n> to combine that with your xpath search.\n> \n> -Kevin\n> \n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Slow-select-times-on-select-with-xpath-tp25259351p25530439.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Mon, 21 Sep 2009 12:13:29 -0700 (PDT)", "msg_from": "astro77 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "\nAs a follow-up, when I try to create the index like this...\n\nCREATE INDEX concurrently\nidx_object_nodeid2\n ON\n object\n USING\n btree(\n xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,\n ARRAY\n [\n ARRAY['a', 'http://schemas.datacontract.org/2004/07/Objects'],\n ARRAY['b', 'http://schemas.datacontract.org/2004/07/Security']\n ])\n ) ; \n\nThe index begins to build but fails after about 90 seconds with this error:\n\nERROR: could not identify a comparison function for type xml\nSQL state: 42883\n\n\n\nRobert Haas wrote:\n> \n> On Thu, Sep 3, 2009 at 4:06 PM, astro77<[email protected]> wrote:\n>> I was receiving an error that an XML field does not support the various\n>> indexes available in postgresql.\n> \n> Please post what happens when you try.\n> \n>> Is there an example of how to do this\n>> properly?\n> \n> Not sure.\n> \n> ...Robert\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Slow-select-times-on-select-with-xpath-tp25259351p25530455.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Mon, 21 Sep 2009 12:51:56 -0700 (PDT)", "msg_from": "astro77 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "astro77 wrote:\n> Thanks Kevin. I thought about using tsearch2 but I need to be able to select\n> exact values on other numerical queries and cannot use \"contains\" queries.\n\nYou might be able to make use of a custom parser for tsearch2 that creates\nsomething like a single \"word\" for xml fragments like <whatever>1</whatever>\nwhich would let you quickly find exact matches for those words/phrases.\n\n> It's got to be fast so I cannot have lots of records returned and have to do\n> secondary processing on the xml for the records which contain the exact\n> value I'm looking for. This is one of the reasons I moved from using Lucene\n> for searching. I hope this makes sense.\n> \n> \n> Kevin Grittner wrote:\n>> wrote:\n>> \n>> \n>> I would try to minimize how many XML values it had to read, parse, and\n>> search. 
The best approach that comes to mind would be to use tsearch2\n>> techniques (with a GIN or GiST index on the tsvector) to identify\n>> which rows contain 'fdc3da1f-060f-4c34-9c30-d9334d9272ae', and use AND\n>> to combine that with your xpath search.\n>> \n>> -Kevin\n>>\n>>\n>>\n> \n\n", "msg_date": "Mon, 21 Sep 2009 22:12:30 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select times on select with xpath" }, { "msg_contents": "astro77 <[email protected]> writes:\n> Kevin Grittner wrote:\n>> I would try to minimize how many XML values it had to read, parse, and\n>> search. The best approach that comes to mind would be to use tsearch2\n>> techniques (with a GIN or GiST index on the tsvector) to identify\n>> which rows contain 'fdc3da1f-060f-4c34-9c30-d9334d9272ae', and use AND\n>> to combine that with your xpath search.\n>\n> Thanks Kevin. I thought about using tsearch2 but I need to be able to select\n> exact values on other numerical queries and cannot use \"contains\" queries.\n> It's got to be fast so I cannot have lots of records returned and have to do\n> secondary processing on the xml for the records which contain the exact\n> value I'm looking for. This is one of the reasons I moved from using Lucene\n> for searching. I hope this makes sense.\n\nI think he meant something following this skeleton:\n\n SELECT ...\n FROM ( SELECT ... \n FROM ...\n WHERE /* insert preliminary filtering here */\n )\n\n WHERE /* insert xpath related filtering here */\n\nHopefully you have a preliminary filtering available that's restrictive\nenough for the xpath filtering to only have to check few rows. Kevin\nproposes that this preliminary filtering be based on Tsearch with an\nadequate index (GiST for data changing a lot, GIN for pretty static\nset).\n\nAs you can see the two-steps filtering can be done in a single SQL query.\n\nRegards,\n-- \ndim\n", "msg_date": "Tue, 22 Sep 2009 13:35:51 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow select times on select with xpath" } ]
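Pulling the thread's pieces together: the cast-based expression index from the first message is repeated below for context, and the real change is in the query, which has to use exactly the same expression and compare it as a text[] value. This is a sketch only, assuming the xml[]-to-text[] cast behaves as it did in the poster's original index definition; the tsearch2 pre-filter idea is left out for brevity. Table, column and index names are the ones from the thread.

CREATE INDEX idx_object_nodeid ON object USING btree (
    ((xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,
            ARRAY[ARRAY['a', 'http://schemas.datacontract.org/2004/07/Objects'],
                  ARRAY['b', 'http://schemas.datacontract.org/2004/07/Security']]
     ))::text[])
);

-- The WHERE clause must repeat the indexed expression verbatim; the
-- original array_to_string(...) = '...' form is a different expression,
-- which is one reason the planner never considered the index.
SELECT serialized_object AS outval
  FROM object
 WHERE (xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,
              ARRAY[ARRAY['a', 'http://schemas.datacontract.org/2004/07/Objects'],
                    ARRAY['b', 'http://schemas.datacontract.org/2004/07/Security']]
       ))::text[]
       = ARRAY['fdc3da1f-060f-4c34-9c30-d9334d9272ae']
 LIMIT 1000;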
[ { "msg_contents": "Hello,\nI'm using postgresql 8.1.5. Sorry if this is not the right area to ask this. I already have command string turned on at the postgresql.conf , and am currently trying to troubleshoot some connection problem at a server that is causing performance issues. Apart from \"<IDLE>\" and the specific SQL commands that the clients have issued, I'm seeing two things that I cannot seem to explain:\nOne is '<IDLE in transaction>' and the other is simply the word 'end'. \nI googled everywhere about this but to not avail. If you could shed some light on this subject that would be great!\nThank you in advance.\n_________________________________________________________________\nClick less, chat more: Messenger on MSN.ca\nhttp://go.microsoft.com/?linkid=9677404\n\n\n\n\n\nHello,I'm using postgresql 8.1.5. Sorry if this is not the right area to ask this. I already have command string turned on at the postgresql.conf , and am currently trying to troubleshoot some connection problem at a server that is causing performance issues. Apart from \"<IDLE>\" and the specific SQL commands that the clients have issued, I'm seeing two things that I cannot seem to explain:One is '<IDLE in transaction>' and the other is simply the word 'end'. I googled everywhere about this but to not avail. If you could shed some light on this subject that would be great!Thank you in advance.Faster Hotmail access now on the new MSN homepage.", "msg_date": "Wed, 2 Sep 2009 11:29:14 -0400", "msg_from": "Pat Chan <[email protected]>", "msg_from_op": true, "msg_subject": "pg_stat_activity.current_query explanation?" }, { "msg_contents": "On Wed, Sep 02, 2009 at 11:29:14AM -0400, Pat Chan wrote:\n> One is '<IDLE in transaction>' and the other is simply the word 'end'. \n> I googled everywhere about this but to not avail. If you could shed some\n> light on this subject that would be great!\n> Thank you in advance.\n\n'<IDLE in transaction>' means that the client has opened a transaction but\nisn't doing anything right now. If you issue a \"BEGIN;\" command and then just\nsit there, for instance, you'll see these.\n\n'END' is synonymous with 'COMMIT', so where those show up, it means the client\nis in the middle of committing a transaction.\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com", "msg_date": "Thu, 3 Sep 2009 07:05:46 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_activity.current_query explanation?" } ]
[ { "msg_contents": "[Sorry if you receive multiple copies of this message.] \n[Please feel free to forward the message to others who may be\ninterested.] \n\nHi, \n\nWe are a computer systems research group at the Computer Science\ndepartment at Rutgers University, and are conducting research on\nsimplifying the software configuration process. The idea is to \nleverage the configurations of existing users of a piece of software to\nease the configuration process for each new user of the software. \n\nThe reason for this message is that we would like to collect a large\nnumber of deployed configurations to help evaluate our ideas. Thus, we\nask systems administrators and end users to submit information about\ntheir configurations for any software that they have had to configure,\nsuch as Apache, MySQL, and Linux. \n\nWe hope that you have a few minutes to take our survey which is located\nat: http://vivo.cs.rutgers.edu/massconf/MassConf.html As an incentive,\nall surveys completed in their entirety will be entered into a drawing\nof a number of $50 gift certificates (from Amazon.com). \n\nImportant: Our work is purely scientific, so we have no interest in any\nprivate or commercially sensitive information that may come along with\nyour configuration data. We will make sure that no such information is\never made public. In fact, if you wish, you are more than welcome to\nanonymize or remove any sensitive information from the configuration\ndata you send us. \n\nIf you have any questions regarding this message or our work, feel free\nto email Wei Zheng (wzheng at cs dot rutgers dot edu). \n\n\nThanks for your time, \n\nWei Zheng \nPhD student, Vivo Research Group (http://vivo.cs.rutgers.edu) \nRutgers University \n\n\n", "msg_date": "Wed, 02 Sep 2009 11:39:25 -0400", "msg_from": "Wei Zheng <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for real configuration data" } ]
[ { "msg_contents": "With postgresql-8.3.6, I have many partitions inheriting a table. SELECT \nmin() on the parent performs a Seq Scan, but SELECT min() on a child uses \nthe index. Is this another case where the planner is not aware enough to \ncome up with the best plan? I tried creating an index on the parent table \nto no avail. Is there a way to formulate the query so that it uses the \nindex? Here is the general flavor:\n\ncreate table calls (caller text, ts timestamptz);\ncreate table calls_partition_2009_08 (check (ts >= '2009-08-01' and ts < \n'2009-09-01')) inherits (calls);\ncreate index calls_partition_2009_08_ts on calls_partition_2009_08 (ts);\ninsert into calls_partition_2009_08 (ts)\n select to_timestamp(unix_time)\n from generate_series(extract(epoch from \n'2009-08-01'::timestamptz)::int,\n extract(epoch from '2009-08-31 \n23:59'::timestamptz)::int, 60) as unix_time;\nanalyze calls_partition_2009_08;\nexplain select min(ts) from calls;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Aggregate (cost=780.50..780.51 rows=1 width=8)\n -> Append (cost=0.00..666.00 rows=45800 width=8)\n -> Seq Scan on calls (cost=0.00..21.60 rows=1160 width=8)\n -> Seq Scan on calls_partition_2009_08 calls (cost=0.00..644.40 \nrows=44640 width=8)\n(4 rows)\n\nexplain select min(ts) from calls_partition_2009_08;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.03..0.04 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..0.03 rows=1 width=8)\n -> Index Scan using calls_partition_2009_08_ts on \ncalls_partition_2009_08 (cost=0.00..1366.85 rows=44640 width=8)\n Filter: (ts IS NOT NULL)\n(5 rows)\n", "msg_date": "Wed, 02 Sep 2009 16:15:34 -0400", "msg_from": "\"Kenneth Cox\" <[email protected]>", "msg_from_op": true, "msg_subject": "partition query using Seq Scan even when index is present" }, { "msg_contents": "Yep.... I ran into the exact same problem.\nMy solution was to create a pl/pgsql function to query the child tables: (\nhttp://archives.postgresql.org/pgsql-performance/2008-11/msg00284.php)\nIf you find a better solution please share.\n\n-Greg Jaman\n\nOn Wed, Sep 2, 2009 at 1:15 PM, Kenneth Cox <[email protected]> wrote:\n\n> With postgresql-8.3.6, I have many partitions inheriting a table. SELECT\n> min() on the parent performs a Seq Scan, but SELECT min() on a child uses\n> the index. Is this another case where the planner is not aware enough to\n> come up with the best plan? I tried creating an index on the parent table\n> to no avail. Is there a way to formulate the query so that it uses the\n> index? 
Here is the general flavor:\n>\n> create table calls (caller text, ts timestamptz);\n> create table calls_partition_2009_08 (check (ts >= '2009-08-01' and ts <\n> '2009-09-01')) inherits (calls);\n> create index calls_partition_2009_08_ts on calls_partition_2009_08 (ts);\n> insert into calls_partition_2009_08 (ts)\n> select to_timestamp(unix_time)\n> from generate_series(extract(epoch from '2009-08-01'::timestamptz)::int,\n> extract(epoch from '2009-08-31\n> 23:59'::timestamptz)::int, 60) as unix_time;\n> analyze calls_partition_2009_08;\n> explain select min(ts) from calls;\n>\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Aggregate (cost=780.50..780.51 rows=1 width=8)\n> -> Append (cost=0.00..666.00 rows=45800 width=8)\n> -> Seq Scan on calls (cost=0.00..21.60 rows=1160 width=8)\n> -> Seq Scan on calls_partition_2009_08 calls (cost=0.00..644.40\n> rows=44640 width=8)\n> (4 rows)\n>\n> explain select min(ts) from calls_partition_2009_08;\n>\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=0.03..0.04 rows=1 width=0)\n> InitPlan\n> -> Limit (cost=0.00..0.03 rows=1 width=8)\n> -> Index Scan using calls_partition_2009_08_ts on\n> calls_partition_2009_08 (cost=0.00..1366.85 rows=44640 width=8)\n> Filter: (ts IS NOT NULL)\n> (5 rows)\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYep.... I ran into the exact same problem.My solution was to create a pl/pgsql function to query the child tables: ( http://archives.postgresql.org/pgsql-performance/2008-11/msg00284.php)\nIf you find a better solution please share.-Greg JamanOn Wed, Sep 2, 2009 at 1:15 PM, Kenneth Cox <[email protected]> wrote:\nWith postgresql-8.3.6, I have many partitions inheriting a table.  SELECT min() on the parent performs a Seq Scan, but SELECT min() on a child uses the index.  Is this another case where the planner is not aware enough to come up with the best plan?  I tried creating an index on the parent table to no avail.  Is there a way to formulate the query so that it uses the index?  
Here is the general flavor:\n\ncreate table calls (caller text, ts timestamptz);\ncreate table calls_partition_2009_08 (check (ts >= '2009-08-01' and ts < '2009-09-01')) inherits (calls);\ncreate index calls_partition_2009_08_ts on calls_partition_2009_08 (ts);\ninsert into calls_partition_2009_08 (ts)\n  select to_timestamp(unix_time)\n    from generate_series(extract(epoch from '2009-08-01'::timestamptz)::int,\n                         extract(epoch from '2009-08-31 23:59'::timestamptz)::int, 60) as unix_time;\nanalyze calls_partition_2009_08;\nexplain select min(ts) from calls;\n\n                                          QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Aggregate  (cost=780.50..780.51 rows=1 width=8)\n   ->  Append  (cost=0.00..666.00 rows=45800 width=8)\n         ->  Seq Scan on calls  (cost=0.00..21.60 rows=1160 width=8)\n         ->  Seq Scan on calls_partition_2009_08 calls  (cost=0.00..644.40 rows=44640 width=8)\n(4 rows)\n\nexplain select min(ts) from calls_partition_2009_08;\n\n                                                          QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Result  (cost=0.03..0.04 rows=1 width=0)\n   InitPlan\n     ->  Limit  (cost=0.00..0.03 rows=1 width=8)\n           ->  Index Scan using calls_partition_2009_08_ts on calls_partition_2009_08  (cost=0.00..1366.85 rows=44640 width=8)\n                 Filter: (ts IS NOT NULL)\n(5 rows)\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 2 Sep 2009 13:31:29 -0700", "msg_from": "Greg Jaman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition query using Seq Scan even when index is\n\tpresent" }, { "msg_contents": "On Wed, Sep 2, 2009 at 4:15 PM, Kenneth Cox<[email protected]> wrote:\n> With postgresql-8.3.6, I have many partitions inheriting a table.  SELECT\n> min() on the parent performs a Seq Scan, but SELECT min() on a child uses\n> the index.  Is this another case where the planner is not aware enough to\n> come up with the best plan?  I tried creating an index on the parent table\n> to no avail.  Is there a way to formulate the query so that it uses the\n> index?  Here is the general flavor:\n>\n> create table calls (caller text, ts timestamptz);\n> create table calls_partition_2009_08 (check (ts >= '2009-08-01' and ts <\n> '2009-09-01')) inherits (calls);\n> create index calls_partition_2009_08_ts on calls_partition_2009_08 (ts);\n> insert into calls_partition_2009_08 (ts)\n>  select to_timestamp(unix_time)\n>    from generate_series(extract(epoch from '2009-08-01'::timestamptz)::int,\n>                         extract(epoch from '2009-08-31\n> 23:59'::timestamptz)::int, 60) as unix_time;\n> analyze calls_partition_2009_08;\n> explain select min(ts) from calls;\n\nATM, constraint exclusion mainly only supports queries of the form:\nSELECT ... WHERE 'x', with x being an expression in the check\nconstraint. 
Table partitioning unfortunately is not a free lunch, you\nhave to be aware of it at all times when writing queries vs your\npartitioned tables.\n\nmerlin\n", "msg_date": "Thu, 3 Sep 2009 10:49:36 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition query using Seq Scan even when index is\n\tpresent" }, { "msg_contents": "Thank you, Greg! I tweaked your function to use recursion to search all \ninherited tables; my inheritance structure is two levels deep.\n\nThis function is for integers only; I will copy/waste to create one for \ntimestamps. Extra credit for anyone who can rewrite it to be polymorphic.\n\n-- Same as max(_colname) from _relname but much faster for inherited\n-- tables with an index on _colname. In postgresql-8.3.6 a naive query\n-- on a parent table will not use the indexes on the child tables.\ncreate or replace function partition_max_int(_relname text, _colname text) \nreturns int AS\n$$\ndeclare\n childtable RECORD;\n childres RECORD;\n maxval int;\n tmpval int;\n sql text;\nbegin\n -- find max in this table (only)\n sql := 'select max('||_colname||') from only '||quote_ident(_relname);\n execute sql into maxval;\n\n -- recurse to find max in descendants\n FOR childtable in\n select pc.relname as relname\n from pg_class pc\n join pg_inherits pi on pc.oid=pi.inhrelid\n where inhparent=(select oid from pg_class where relname=_relname)\n LOOP\n tmpval := partition_max_int(childtable.relname, _colname);\n IF tmpval is not NULL and (tmpval > maxval or maxval is null) THEN\n maxval := tmpval;\n END IF;\n END LOOP;\n\n return maxval;\nend;\n$$\nlanguage 'plpgsql' STABLE;\n", "msg_date": "Thu, 03 Sep 2009 12:13:36 -0400", "msg_from": "\"Kenneth Cox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition query using Seq Scan even when index is\n present" } ]
[ { "msg_contents": "Would love to get some advice on how to change my conf settings / setup\nto get better I/O performance.\n\n \n\nServer Specs:\n\n \n\n2x Intel Xeon Quad Core (@2 Ghz - Clovertown,L5335)\n\n4GB RAM\n\n4x Seagate 73GB SAS HDD 10k RPM - in RAID ( stripped and mirrored )\n\n \n\nFreeBSD 6.4\n\nApache 2.2\n\nPostgreSQL 8.3.6\n\nPHP 5.2.9\n\n \n\n~1500 databases w/ ~60 tables each\n\n \n\nTotal I/O (these number are pretty constant throughout the day):\n\nReads: ~ 100 / sec for about 2.6 Mb/sec\n\nWrites: ~ 400 /sec for about 46.1Mb/sec\n\n \n\nConf settings:\n\n \n\nlisten_addresses = '*'\n\nmax_connections = 600\n\nssl = on\n\npassword_encryption = on\n\nshared_buffers = 1GB\n\nwork_mem = 5MB\n\nmaintenance_work_mem = 256MB\n\nmax_fsm_pages = 2800000\n\nmax_fsm_relations = 160000\n\nsynchronous_commit = off\n\ncheckpoint_segments = 6\n\ncheckpoint_warning = 30s\n\neffective_cache_size = 1GB\n\n \n\n \n\npg_stat_bgwriter:\n\n \n\ncheckpoints_timed: 16660\n\ncheckpoints_req: 1309\n\nbuffers_checkpoint: 656346\n\nbuffers_clean: 120922\n\nmaxwritten_clean: 1\n\nbuffers_backend: 167623\n\nbuffers_alloc: 472802349\n\n \n\nThis server also handles web traffic and PHP script processing.\n\n \n\nMost of the SQL happening is selects - very little inserts, updates and\ndeletes comparatively.\n\n \n\nI have noticed that most/all of the I/O activity is coming from the\nstats collector and autovacuum processes. Would turning off the stats\ncollector and autovacuum be helpeful / recommended? Could I change my\ncheckpoint_* or bgwriter_* conf values to help?\n\n \n\nLet me know if you need more information / stats.\n\n \n\nAny help would be much appreciated.\n\n \n\nThanks,\n\n \n\nScott Otis\n\nCIO / Lead Developer\n\nIntand\n\nwww.intand.com\n\n \n\n\nWould love to get some advice on how to change my conf settings / setup to get better I/O performance. Server Specs: 2x Intel Xeon Quad Core (@2 Ghz - Clovertown,L5335)4GB RAM4x Seagate 73GB SAS HDD 10k RPM – in RAID ( stripped and mirrored ) FreeBSD 6.4Apache 2.2PostgreSQL 8.3.6PHP 5.2.9 ~1500 databases w/ ~60 tables each Total I/O (these number are pretty constant throughout the day):Reads: ~ 100 / sec for about 2.6 Mb/secWrites: ~ 400 /sec for about 46.1Mb/sec Conf settings: listen_addresses = '*'max_connections = 600ssl = onpassword_encryption = onshared_buffers = 1GBwork_mem = 5MBmaintenance_work_mem = 256MBmax_fsm_pages = 2800000max_fsm_relations = 160000synchronous_commit = offcheckpoint_segments = 6checkpoint_warning = 30seffective_cache_size = 1GB  pg_stat_bgwriter: checkpoints_timed: 16660checkpoints_req: 1309buffers_checkpoint: 656346buffers_clean: 120922maxwritten_clean: 1buffers_backend: 167623buffers_alloc: 472802349 This server also handles web traffic and PHP script processing. Most of the SQL happening is selects – very little inserts, updates and deletes comparatively. I have noticed that most/all of the I/O activity is coming from the stats collector and autovacuum processes.  Would turning off the stats collector and autovacuum be helpeful / recommended?  Could I change my checkpoint_* or bgwriter_* conf values to help? Let me know if you need more information / stats. Any help would be much appreciated. 
Thanks, Scott OtisCIO / Lead DeveloperIntandwww.intand.com", "msg_date": "Wed, 2 Sep 2009 13:44:42 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": "Scott Otis wrote:\n> Would love to get some advice on how to change my conf settings / setup \n> to get better I/O performance.\n> \n> Total I/O (these number are pretty constant throughout the day):\n> Reads: ~ 100 / sec for about 2.6 Mb/sec\n> Writes: ~ 400 /sec for about 46.1Mb/sec\n> \n> \n> Most of the SQL happening is selects � very little inserts, updates and \n> deletes comparatively.\n> \n\nMaybe I'm wrong, but those two don't seem to jive. You say its mostly selects, but you show higher writes per second.\n\nDoes freebsd have a vmstat or iostat? How did you get the numbers above? How's the cpu's look? (are they pegged?)\n\nThe io stats above seem low (reading 2 meg a second is a tiny fraction of what your system should be capable of). Have you tried a dd test?\n\n-Andy\n", "msg_date": "Thu, 03 Sep 2009 10:02:56 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high\n I/O on 8.3" }, { "msg_contents": "Scott Otis wrote:\n\n > 2x Intel Xeon Quad Core (@2 Ghz - Clovertown,L5335)\n > 4GB RAM\n > 4x Seagate 73GB SAS HDD 10k RPM – in RAID ( stripped and mirrored )\n\n> Would love to get some advice on how to change my conf settings / setup \n> to get better I/O performance.\n\n> ~1500 databases w/ ~60 tables each\n\nThis tells us nothing - size and complexity of databases is more \nimportant than their number.\n\n> Total I/O (these number are pretty constant throughout the day):\n> \n> Reads: ~ 100 / sec for about 2.6 Mb/sec\n> \n> Writes: ~ 400 /sec for about 46.1Mb/sec\n\nAgain, not enough information. How did you measure these? With iostat? \nAre those random reads or sequential? (i.e. what was the IO transaction \nsize?) Caching can explain why you have 4x more writes than reads, but \nit's still unusual, especially with the high write transfer rate you claim.\n\nIf random, you're doing ~~ 500 IOPS on a RAID10 array of 4 10 kRPM \ndrives, which is much more than you should - you're lucky you have the \nperformance you do.\n\nBy the way, why do you think your setup is slow? Is your application \nslow and you think your database is the reason?\n\n> shared_buffers = 1GB\n> \n> work_mem = 5MB\n> \n> maintenance_work_mem = 256MB\n\nOk.\n\n> synchronous_commit = off\n\nOk. Could be important if your IO is slow as yours is.\n\n> checkpoint_segments = 6\n\nYou could try raising this to 20, but I doubt it will help you that \nmuch. OTOH it won't hurt.\n\n> checkpoint_warning = 30s\n> \n> effective_cache_size = 1GB\n\nOk.\n\n> Most of the SQL happening is selects – very little inserts, updates and \n> deletes comparatively.\n\nAre you sure? 
Your write rate is a bit big for there to be very little \ninsert/update/delete activity.\n\n", "msg_date": "Thu, 03 Sep 2009 17:11:02 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": "Scott Otis wrote:\n> I agree that they don't make sense - part of the reason I am looking for \n> help :)\n> \n> I am using iostat to get those numbers ( which I specify to average over \n> 5 min then collect to display in Cacti ).\n> \n> 2 processes are taking up a good deal of CPU - the postgres stats \n> collector and autovacuum ones. Both of those are using a lot of 1 core \n> each.\n> \n> I am not familiar with a dd test - what is that?\n> \n> Thanks,\n> \n> Scott\n> \n> On Sep 3, 2009, at 8:03 AM, \"Andy Colson\" <[email protected]> wrote:\n> \n>> Scott Otis wrote:\n>>> Would love to get some advice on how to change my conf settings / \n>>> setup to get better I/O performance.\n>>> Total I/O (these number are pretty constant throughout the day):\n>>> Reads: ~ 100 / sec for about 2.6 Mb/sec\n>>> Writes: ~ 400 /sec for about 46.1Mb/sec\n>>> Most of the SQL happening is selects – very little inserts, updates \n>>> and deletes comparatively.\n>>\n>> Maybe I'm wrong, but those two don't seem to jive. You say its mostly \n>> selects, but you show higher writes per second.\n>>\n>> Does freebsd have a vmstat or iostat? How did you get the numbers \n>> above? How's the cpu's look? (are they pegged?)\n>>\n>> The io stats above seem low (reading 2 meg a second is a tiny \n>> fraction of what your system should be capable of). Have you tried a \n>> dd test?\n>>\n>> -Andy\n\nPlease keep the list included so others may help.\n\n\nthe dd test:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\n\nI think Ivan is right, the 2 meg a second is probably because most of the reads are from cache. But he and I looked at the writes differently. If we ignore the 400/sec, and just read 46 meg a second (assuming you meant megabyte and not megabit) then, that's pretty slow (for sequential writing) -- which the dd test will measure your sequential read and write speed.\n\nIvan asked a good question:\nBy the way, why do you think your setup is slow? Is your application slow and you think your database is the reason?\n\n\n-Andy\n", "msg_date": "Thu, 03 Sep 2009 12:12:37 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high\n I/O on 8.3" }, { "msg_contents": "Sorry about not responding to the whole list earlier - this is my first time posting to a mailing list.\r\n\r\nWould providing more information about the size and complexities of the databases help?\r\n\r\nI measure I/O stats with iostat - here is the command I use:\r\n\r\niostat -d -x mfid0 -t 290 2\r\n\r\nI tried looking at the man page for iostat but couldn't find anywhere how to determine what the stats are for sequential vs random - any help there?\r\n\r\nWhen using 'top -m io' the postgres stats collector process is constantly at 99% - 100%.\r\n\r\nWhen using 'top' the WCPU for the postgres stats collector and the autovacuum process are constantly at 20% - 21%.\r\n\r\nIs that normal? It seems to me that the stats collector is doing all the I/O (which would mean the stats collector is doing 46.1 megabytes /sec).\r\n\r\nAlso, the I/O stats don't change hardly at all (except at night during backups which makes sense). 
They don't go up or down with user activity on the server - which makes me wonder a little bit. I have a feeling that if I just turned off Apache that the I/O stats wouldn't change. Which leads me to believe that the I/O is not query related - its stats collecting and autovacuuming related. Is that expected?\r\n\r\nIt seems to me that the stats collector shouldn't be using that much I/O and CPU (and the autovacuum shouldn't be using that much CPU) - therefore something in my configuration must be messed up or could be changed somehow. But maybe I'm wrong - please let me know.\r\n\r\nI don't think my setup is necessarily slow. I just want to make it as efficient as possible and wanted to get some feedback to see if am setting things up right. I am also looking out into the future and seeing how much load I can put on this server before getting another one. If I can reduce the I/O and CPU that the stats collector and autovacuum are using without losing any functionality then I can put more load on the server.\r\n\r\nAgain thanks for all the help.\r\n\r\nScott Otis\r\nCIO / Lead Developer\r\nIntand\r\nwww.intand.com\r\n", "msg_date": "Thu, 3 Sep 2009 13:16:30 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": "Scott Otis wrote:\n> Sorry about not responding to the whole list earlier - this is my first time posting to a mailing list.\n> \n> Would providing more information about the size and complexities of the databases help?\n> \n> I measure I/O stats with iostat - here is the command I use:\n> \n> iostat -d -x mfid0 -t 290 2\n> \n> I tried looking at the man page for iostat but couldn't find anywhere how to determine what the stats are for sequential vs random - any help there?\n> \n> When using 'top -m io' the postgres stats collector process is constantly at 99% - 100%.\n> \n> When using 'top' the WCPU for the postgres stats collector and the autovacuum process are constantly at 20% - 21%.\n> \n> Is that normal? It seems to me that the stats collector is doing all the I/O (which would mean the stats collector is doing 46.1 megabytes /sec).\n> \n> Also, the I/O stats don't change hardly at all (except at night during backups which makes sense). They don't go up or down with user activity on the server - which makes me wonder a little bit. I have a feeling that if I just turned off Apache that the I/O stats wouldn't change. Which leads me to believe that the I/O is not query related - its stats collecting and autovacuuming related. Is that expected?\n> \n> It seems to me that the stats collector shouldn't be using that much I/O and CPU (and the autovacuum shouldn't be using that much CPU) - therefore something in my configuration must be messed up or could be changed somehow. But maybe I'm wrong - please let me know.\n> \n> I don't think my setup is necessarily slow. I just want to make it as efficient as possible and wanted to get some feedback to see if am setting things up right. I am also looking out into the future and seeing how much load I can put on this server before getting another one. 
If I can reduce the I/O and CPU that the stats collector and autovacuum are using without losing any functionality then I can put more load on the server.\n> \n> Again thanks for all the help.\n> \n> Scott Otis\n> CIO / Lead Developer\n> Intand\n> www.intand.com\n> \n\n> When using 'top -m io' the postgres stats collector process is constantly at 99% - 100%.\n> When using 'top' the WCPU for the postgres stats collector and the autovacuum process are constantly at 20% - 21%.\n\nYeah, that sounds excessive. But my database gets 20 transactions a DAY, so, I have no experience with a busy box.\n\nYou say its mostly selects, but do you have any triggers or anything that might update a table? Do you do inserts or updates to track traffic?\n\nWhat does:\n\nselect * from pg_stat_activity\n\nlook like? (I think vacuum will show up in there, right?) I'm curious if we can find the table autovacuum is working on, maybe that'll help pin it down.\n\n-Andy\n", "msg_date": "Thu, 03 Sep 2009 16:09:08 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high\n I/O on 8.3" }, { "msg_contents": "On Thu, Sep 3, 2009 at 4:16 PM, Scott Otis<[email protected]> wrote:\n> Sorry about not responding to the whole list earlier - this is my first time posting to a mailing list.\n>\n> Would providing more information about the size and complexities of the databases help?\n>\n> I measure I/O stats with iostat - here is the command I use:\n>\n> iostat -d -x mfid0 -t 290 2\n>\n> I tried looking at the man page for iostat but couldn't find anywhere how to determine what the stats are for sequential vs random - any help there?\n>\n> When using 'top -m io' the postgres stats collector process is constantly at 99% - 100%.\n>\n> When using 'top' the WCPU for the postgres stats collector and the autovacuum process are constantly at 20% - 21%.\n>\n> Is that normal?  It seems to me that the stats collector is doing all the I/O (which would mean the stats collector is doing 46.1 megabytes /sec).\n>\n> Also, the I/O stats don't change hardly at all (except at night during backups which makes sense).  They don't go up or down with user activity on the server - which makes me wonder a little bit.  I have a feeling that if I just turned off Apache that the I/O stats wouldn't change.  Which leads me to believe that the I/O is not query related - its stats collecting and autovacuuming related.  Is that expected?\n>\n> It seems to me that the stats collector shouldn't be using that much I/O and CPU (and the autovacuum shouldn't be using that much CPU)  - therefore something in my configuration must be messed up or could be changed somehow.  But maybe I'm wrong - please let me know.\n>\n> I don't think my setup is necessarily slow.  I just want to make it as efficient as possible and wanted to get some feedback to see if am setting things up right.  I am also looking out into the future and seeing how much load I can put on this server before getting another one.  
If I can reduce the I/O and CPU that the stats collector and autovacuum are using without losing any functionality then I can put more load on the server.\n>\n> Again thanks for all the help.\n\nCan you post to the list all the uncommented lines from your\npostgresql.conf file and attach the results of \"select * from\npg_stat_all_tables\" as an attachment?\n\n...Robert\n", "msg_date": "Thu, 3 Sep 2009 17:19:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high I/O\n\ton 8.3" }, { "msg_contents": "Robert Haas wrote:\n> On Thu, Sep 3, 2009 at 4:16 PM, Scott Otis<[email protected]> wrote:\n>> Sorry about not responding to the whole list earlier - this is my first time posting to a mailing list.\n>>\n>> Would providing more information about the size and complexities of the databases help?\n>>\n>> I measure I/O stats with iostat - here is the command I use:\n>>\n>> iostat -d -x mfid0 -t 290 2\n>>\n>> I tried looking at the man page for iostat but couldn't find anywhere how to determine what the stats are for sequential vs random - any help there?\n>>\n>> When using 'top -m io' the postgres stats collector process is constantly at 99% - 100%.\n>>\n>> When using 'top' the WCPU for the postgres stats collector and the autovacuum process are constantly at 20% - 21%.\n>>\n>> Is that normal? It seems to me that the stats collector is doing all the I/O (which would mean the stats collector is doing 46.1 megabytes /sec).\n>>\n>> Also, the I/O stats don't change hardly at all (except at night during backups which makes sense). They don't go up or down with user activity on the server - which makes me wonder a little bit. I have a feeling that if I just turned off Apache that the I/O stats wouldn't change. Which leads me to believe that the I/O is not query related - its stats collecting and autovacuuming related. Is that expected?\n>>\n>> It seems to me that the stats collector shouldn't be using that much I/O and CPU (and the autovacuum shouldn't be using that much CPU) - therefore something in my configuration must be messed up or could be changed somehow. But maybe I'm wrong - please let me know.\n>>\n>> I don't think my setup is necessarily slow. I just want to make it as efficient as possible and wanted to get some feedback to see if am setting things up right. I am also looking out into the future and seeing how much load I can put on this server before getting another one. If I can reduce the I/O and CPU that the stats collector and autovacuum are using without losing any functionality then I can put more load on the server.\n>>\n>> Again thanks for all the help.\n> \n> Can you post to the list all the uncommented lines from your\n> postgresql.conf file and attach the results of \"select * from\n> pg_stat_all_tables\" as an attachment?\n> \n> ...Robert\n> \n\nThe first message he posted had this, and other info... 
Which is funny, because I almost asked the exact same question :-)\n\n\nFreeBSD 6.4\nApache 2.2\nPostgreSQL 8.3.6\nPHP 5.2.9\n\n \n~1500 databases w/ ~60 tables each\n \n\nConf settings:\n\nlisten_addresses = '*'\nmax_connections = 600\nssl = on\npassword_encryption = on\nshared_buffers = 1GB\nwork_mem = 5MB\nmaintenance_work_mem = 256MB\nmax_fsm_pages = 2800000\nmax_fsm_relations = 160000\nsynchronous_commit = off\ncheckpoint_segments = 6\ncheckpoint_warning = 30s\neffective_cache_size = 1GB\n \n \npg_stat_bgwriter:\n \ncheckpoints_timed: 16660\ncheckpoints_req: 1309\nbuffers_checkpoint: 656346\nbuffers_clean: 120922\nmaxwritten_clean: 1\nbuffers_backend: 167623\nbuffers_alloc: 472802349\n\n\n", "msg_date": "Thu, 03 Sep 2009 16:27:28 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high\n I/O on 8.3" }, { "msg_contents": "On Thu, Sep 3, 2009 at 5:27 PM, Andy Colson<[email protected]> wrote:\n> Robert Haas wrote:\n>>\n>> On Thu, Sep 3, 2009 at 4:16 PM, Scott Otis<[email protected]> wrote:\n>>>\n>>> Sorry about not responding to the whole list earlier - this is my first\n>>> time posting to a mailing list.\n>>>\n>>> Would providing more information about the size and complexities of the\n>>> databases help?\n>>>\n>>> I measure I/O stats with iostat - here is the command I use:\n>>>\n>>> iostat -d -x mfid0 -t 290 2\n>>>\n>>> I tried looking at the man page for iostat but couldn't find anywhere how\n>>> to determine what the stats are for sequential vs random - any help there?\n>>>\n>>> When using 'top -m io' the postgres stats collector process is constantly\n>>> at 99% - 100%.\n>>>\n>>> When using 'top' the WCPU for the postgres stats collector and the\n>>> autovacuum process are constantly at 20% - 21%.\n>>>\n>>> Is that normal?  It seems to me that the stats collector is doing all the\n>>> I/O (which would mean the stats collector is doing 46.1 megabytes /sec).\n>>>\n>>> Also, the I/O stats don't change hardly at all (except at night during\n>>> backups which makes sense).  They don't go up or down with user activity on\n>>> the server - which makes me wonder a little bit.  I have a feeling that if I\n>>> just turned off Apache that the I/O stats wouldn't change.  Which leads me\n>>> to believe that the I/O is not query related - its stats collecting and\n>>> autovacuuming related.  Is that expected?\n>>>\n>>> It seems to me that the stats collector shouldn't be using that much I/O\n>>> and CPU (and the autovacuum shouldn't be using that much CPU)  - therefore\n>>> something in my configuration must be messed up or could be changed somehow.\n>>>  But maybe I'm wrong - please let me know.\n>>>\n>>> I don't think my setup is necessarily slow.  I just want to make it as\n>>> efficient as possible and wanted to get some feedback to see if am setting\n>>> things up right.  I am also looking out into the future and seeing how much\n>>> load I can put on this server before getting another one.  If I can reduce\n>>> the I/O and CPU that the stats collector and autovacuum are using without\n>>> losing any functionality then I can put more load on the server.\n>>>\n>>> Again thanks for all the help.\n>>\n>> Can you post to the list all the uncommented lines from your\n>> postgresql.conf file and attach the results of \"select * from\n>> pg_stat_all_tables\" as an attachment?\n>>\n>> ...Robert\n>>\n>\n> The first message he posted had this, and other info... 
Which is funny,\n> because I almost asked the exact same question :-)\n>\n>\n> FreeBSD 6.4\n> Apache 2.2\n> PostgreSQL 8.3.6\n> PHP 5.2.9\n>\n>\n> ~1500 databases w/ ~60 tables each\n>\n>\n> Conf settings:\n>\n> listen_addresses = '*'\n> max_connections = 600\n> ssl = on\n> password_encryption = on\n> shared_buffers = 1GB\n> work_mem = 5MB\n> maintenance_work_mem = 256MB\n> max_fsm_pages = 2800000\n> max_fsm_relations = 160000\n> synchronous_commit = off\n> checkpoint_segments = 6\n> checkpoint_warning = 30s\n> effective_cache_size = 1GB\n>\n>\n> pg_stat_bgwriter:\n>\n> checkpoints_timed: 16660\n> checkpoints_req: 1309\n> buffers_checkpoint: 656346\n> buffers_clean: 120922\n> maxwritten_clean: 1\n> buffers_backend: 167623\n> buffers_alloc: 472802349\n\nYou're right - I missed that. But I still want to see pg_stat_all_tables.\n\nI wonder if it would be worth attaching strace to the stats collector\nand trying to get some idea what it's doing (if FreeBSD has\nstrace...).\n\n....Robert\n", "msg_date": "Thu, 3 Sep 2009 17:40:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high I/O\n\ton 8.3" }, { "msg_contents": "2009/9/3 Scott Otis <[email protected]>:\n> Sorry about not responding to the whole list earlier - this is my first time posting to a mailing list.\n>\n> Would providing more information about the size and complexities of the databases help?\n>\n> I measure I/O stats with iostat - here is the command I use:\n>\n> iostat -d -x mfid0 -t 290 2\n\nSimply do \"iostat mfid0 1\" and post 10 lines of its output.\n\n> When using 'top -m io' the postgres stats collector process is constantly at 99% - 100%.\n\nIn itself it doesn't mean much. The number of IOs is important.\n\n> I don't think my setup is necessarily slow.  I just want to make it as efficient as possible and wanted to get some feedback to see if am setting things up right.  I am also looking out into the future and seeing how much load I can put on this server before getting another one.  If I can reduce the I/O and CPU that the stats collector and autovacuum are using without losing any functionality then I can put more load on the server.\n\nIn general it's tricky to optimize for unknown targets - if your\nperformance is OK right now, you should leave it alone.\n\nOn the other hand, your diagnosis of stats collector doing 46 MB/s\npoints to something very abnormal. 
You should probably post your\nentire postgresql.conf.\n\n-- \nf+rEnSIBITAhITAhLR1nM9F4cIs5KJrhbcsVtUIt7K1MhWJy1A==\n", "msg_date": "Thu, 3 Sep 2009 23:56:38 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high I/O\n\ton 8.3" }, { "msg_contents": "> Simply do \"iostat mfid0 1\" and post 10 lines of its output.\r\n\r\n tty mfid0 cpu\r\n tin tout KB/t tps MB/s us ni sy in id\r\n 0 152 108.54 335 35.51 43 0 30 1 27\r\n 0 525 85.73 759 63.55 14 0 12 0 74\r\n 0 86 67.72 520 34.39 13 0 12 0 75\r\n 0 86 86.89 746 63.26 12 0 12 0 76\r\n 0 86 70.09 594 40.65 13 0 11 0 76\r\n 0 86 78.50 756 57.99 13 0 10 0 77\r\n 0 351 81.46 774 61.61 12 0 11 0 77\r\n 0 86 63.87 621 38.72 9 0 8 0 83\r\n 0 86 80.87 821 64.86 8 0 8 0 83\r\n 0 86 58.78 637 36.55 11 0 11 0 77\r\n\r\nScott\r\n\r\n\r\n\r\n", "msg_date": "Thu, 3 Sep 2009 15:51:03 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": "> Can you post to the list all the uncommented lines from your\npostgresql.conf file and attach the results of \"select * from\npg_stat_all_tables\" as an attachment?\n\nI attached a CSV of \"select * from pg_stat_all_tables\" from one of our\nmore heavily used databases. Note: I turned off stats collection and\nautvacuuming a couple days ago to see what it would do and then\nrestarted postgres - I turned those back on this morning to that is why\nthere aren't more autovacuumed and autoanalyzed tables.\n\nSorry if this is a little verbose - I didn't want to leave anything out.\n\nUncommented lines from Postgresql.conf:\n\nlisten_addresses = '*'\nmax_connections = 600\nssl = on\npassword_encryption = on\nshared_buffers = 1GB\nwork_mem = 5MB\nmaintenance_work_mem = 256MB\nmax_fsm_pages = 2800000\nmax_fsm_relations = 160000\nsynchronous_commit = off\ncheckpoint_segments = 6\ncheckpoint_warning = 30s\neffective_cache_size = 1GB\nlog_destination = 'stderr'\nlogging_collector = on\nlog_directory = '/var/log/pgsql'\nlog_filename = '%m%d%y_%H%M%S-pgsql.log'\nlog_rotation_age = 1d\nlog_rotation_size = 10MB\nlog_min_messages = warning\nlog_error_verbosity = default\nlog_min_error_statement = warning\nsilent_mode = on\nlog_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '\nlog_temp_files = 0\ntrack_activities = on\ntrack_counts = on\nupdate_process_title = off\nlog_parser_stats = off\nlog_planner_stats = off\nlog_executor_stats = off\nlog_statement_stats = off\nautovacuum = on\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\ndefault_text_search_config = 'pg_catalog.english'", "msg_date": "Thu, 3 Sep 2009 16:11:13 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": "On Thu, Sep 3, 2009 at 7:11 PM, Scott Otis<[email protected]> wrote:\n>> Can you post to the list all the uncommented lines from your\n> postgresql.conf file and attach the results of \"select * from\n> pg_stat_all_tables\" as an attachment?\n>\n> I attached a CSV of \"select * from pg_stat_all_tables\" from one of our\n> more heavily used databases.  
Note: I turned off stats collection and\n> autvacuuming a couple days ago to see what it would do and then\n> restarted postgres - I turned those back on this morning to that is why\n> there aren't more autovacuumed and autoanalyzed tables.\n\nDo you by any chance have a bazillion databases in this cluster? Can\nyou do these?\n\nselect sum(1) from pg_database;\nselect pg_relation_size('pg_database');\nselect sum(pg_column_size(d.*)) from pg_database;\n\n...Robert\n", "msg_date": "Thu, 3 Sep 2009 23:05:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high I/O\n\ton 8.3" }, { "msg_contents": "Claus Guttesen [email protected]:\n\n> > Would love to get some advice on how to change my conf settings / setup to\n> > get better I/O performance.\n> >\n> > Server Specs:\n> >\n> > 2x Intel Xeon Quad Core (@2 Ghz - Clovertown,L5335)\n> > 4GB RAM\n> > 4x Seagate 73GB SAS HDD 10k RPM - in RAID ( stripped and mirrored )\n> >\n> > FreeBSD 6.4\n> > Apache 2.2\n> > PostgreSQL 8.3.6\n> > PHP 5.2.9\n> >\n> > ~1500 databases w/ ~60 tables each\n> >\n> > max_connections = 600\n> > shared_buffers = 1GB\n\n> On a dual-core HP DL380 with 16 GB ram I have set shared_buffers at\n> 512 MB for 900 max_connections. Far the largest table have approx. 120\n> mill. records. You could try to lower shared_buffers.\n\n> > max_fsm_pages = 2800000\n> > max_fsm_relations = 160000\n\n> What does the last couple of lines from a 'vacuum analyze verbose'\n> say? I have max_fsm_pages = 4000000 and max_fsm_relations = 1500.\n\n> You can also try to lower random_page_cost to a lower value like 1.2\n> but I doubt this will help in your case.\n \nlast couple lines from 'vacuumdb -a -v -z':\n\nINFO: free space map contains 114754 pages in 42148 relations\nDETAIL: A total of 734736 page slots are in use (including overhead).\n734736 page slots are required to track all free space.\nCurrent limits are: 2800000 page slots, 160000 relations, using 26810 kB.\n\n\nScott\n\n\nRe: [PERFORM] Seeking performance advice and explanation for high I/O on 8.3\n\n\n\n\nClaus Guttesen [email protected]:\n> > Would love to get some advice on how to change my conf settings / setup to> > get better I/O performance.> >> > Server Specs:> >> > 2x Intel Xeon Quad Core (@2 Ghz - Clovertown,L5335)> > 4GB RAM> > 4x Seagate 73GB SAS HDD 10k RPM – in RAID ( stripped and mirrored )> >> > FreeBSD 6.4> > Apache 2.2> > PostgreSQL 8.3.6> > PHP 5.2.9> >> > ~1500 databases w/ ~60 tables each> >> > max_connections = 600> > shared_buffers = 1GB> On a dual-core HP DL380 with 16 GB ram I have set shared_buffers at> 512 MB for 900 max_connections. Far the largest table have approx. 120> mill. records. You could try to lower shared_buffers.> > max_fsm_pages = 2800000> > max_fsm_relations = 160000> What does the last couple of lines from a 'vacuum analyze verbose'> say? 
I have max_fsm_pages = 4000000 and max_fsm_relations = 1500.> You can also try to lower random_page_cost to a lower value like 1.2> but I doubt this will help in your case.\n \nlast couple lines from 'vacuumdb -a -v -z':\nINFO:  free space map contains 114754 pages in 42148 relationsDETAIL:  A total of 734736 page slots are in use (including overhead).734736 page slots are required to track all free space.Current limits are:  2800000 page slots, 160000 relations, using 26810 kB.\nScott", "msg_date": "Thu, 3 Sep 2009 23:34:04 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": "Robert Haas <[email protected]>:\n \n> Do you by any chance have a bazillion databases in this cluster? Can\n> you do these?\n\n> select sum(1) from pg_database;\n \n1555 \n\n> select pg_relation_size('pg_database');\n \n221184\n\n> select sum(pg_column_size(d.*)) from pg_database;\n \nThat gave me:\n \nERROR: missing FROM-clause entry for table \"d\"\nLINE 1: select sum(pg_column_size(d.*)) from pg_database;\n\nSo I did this: \n \nselect sum(pg_column_size(d.*)) from pg_database as d;\n \nand got:\n \n192910\n \nAlso did this:\n \nselect sum(pg_database_size(datname)) from pg_database;\n \nand got:\n \n13329800428 (12.4GB)\n \nScott\n \n\nRe: [PERFORM] Seeking performance advice and explanation for high I/O on 8.3\n\n\n\n\nRobert Haas <[email protected]>:\n \n> Do you by any chance have a bazillion databases in this cluster?  Can> you do these?> select sum(1) from pg_database;\n\n 1555\n> select pg_relation_size('pg_database');\n \n221184\n> select sum(pg_column_size(d.*)) from pg_database;\n \nThat gave me:\n \nERROR:  missing FROM-clause entry for table \"d\"LINE 1: select sum(pg_column_size(d.*)) from pg_database;\nSo I did this: \n \nselect sum(pg_column_size(d.*)) from pg_database as d;\n \nand got:\n \n192910\n \nAlso did this:\n \nselect sum(pg_database_size(datname)) from pg_database;\n \nand got:\n \n13329800428 (12.4GB)\n \nScott", "msg_date": "Thu, 3 Sep 2009 23:54:53 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": ">> > max_fsm_pages = 2800000\n>> > max_fsm_relations = 160000\n>\n>> What does the last couple of lines from a 'vacuum analyze verbose'\n>> say? I have max_fsm_pages = 4000000 and max_fsm_relations = 1500.\n>\n>> You can also try to lower random_page_cost to a lower value like 1.2\n>> but I doubt this will help in your case.\n>\n> last couple lines from 'vacuumdb -a -v -z':\n>\n> INFO:  free space map contains 114754 pages in 42148 relations\n> DETAIL:  A total of 734736 page slots are in use (including overhead).\n\n----------------vvvvv-----------\n> 734736 page slots are required to track all free space.\n----------------^^^^^-----------\n\n> Current limits are:  2800000 page slots, 160000 relations, using 26810 kB.\n\nYou can lower your max_fsm_pages setting to a number above 'xyz page\nslots required ...' 
to 1000000 and fsm-relations to like 50000.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Fri, 4 Sep 2009 08:59:13 +0200", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high I/O\n\ton 8.3" }, { "msg_contents": "On Fri, Sep 4, 2009 at 08:54, Scott Otis<[email protected]> wrote:\n> Robert Haas <[email protected]>:\n>\n>> Do you by any chance have a bazillion databases in this cluster?  Can\n>> you do these?\n>\n>> select sum(1) from pg_database;\n>\n> 1555\n\nNote that there are two features in 8.4 specifically designed to deal\nwith the situation where you have lots of databases and/or lots of\ntables (depending on how many tables you have in each database, this\nwould definitely qualify). They both deal with the \"pgstats temp file\ntoo large generating i/o issue\".\n\nFirst, it will only write the file when it's actually necessary - 8.3\nand earlier will always write it.\n\nSecond, you will have the ability to move the location of the file to\na different filesystem - specifically intended so that you can move it\noff to a ramdrive.\n\nCould be worth investigating an upgrade for this issue alone. The fact\nthat you don't have to struggle with tuning the FSM in 8.4 is another\nthing that makes life a *lot* easier in this kind of installations.\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Fri, 4 Sep 2009 10:18:57 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for high I/O\n\ton 8.3" }, { "msg_contents": "So is there anything I can do in 8.3 to help this? I have tried setting ' track_activities', 'track_counts' and 'autovacuum' to 'off' (which has reduced CPU and I/O a bit) - but the stats collector process is still using up a good deal of CPU and I/O - is there any way to turn stats collecting completely off?\n\nScott Otis\nCIO / Lead Developer\nIntand\nwww.intand.com\n\n\n-----Original Message-----\nFrom: Magnus Hagander [mailto:[email protected]] \nSent: Friday, September 04, 2009 1:19 AM\nTo: Scott Otis\nCc: Robert Haas; Ivan Voras; [email protected]\nSubject: Re: [PERFORM] Seeking performance advice and explanation for high I/O on 8.3\n\nOn Fri, Sep 4, 2009 at 08:54, Scott Otis<[email protected]> wrote:\n> Robert Haas <[email protected]>:\n>\n>> Do you by any chance have a bazillion databases in this cluster?  Can \n>> you do these?\n>\n>> select sum(1) from pg_database;\n>\n> 1555\n\nNote that there are two features in 8.4 specifically designed to deal with the situation where you have lots of databases and/or lots of tables (depending on how many tables you have in each database, this would definitely qualify). They both deal with the \"pgstats temp file too large generating i/o issue\".\n\nFirst, it will only write the file when it's actually necessary - 8.3 and earlier will always write it.\n\nSecond, you will have the ability to move the location of the file to a different filesystem - specifically intended so that you can move it off to a ramdrive.\n\nCould be worth investigating an upgrade for this issue alone. 
The fact that you don't have to struggle with tuning the FSM in 8.4 is another thing that makes life a *lot* easier in this kind of installations.\n\n\n--\n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Fri, 4 Sep 2009 14:55:50 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeking performance advice and explanation for high I/O on 8.3" }, { "msg_contents": "\"Scott Otis\" <[email protected]> wrote:\n \n> So is there anything I can do in 8.3 to help this? I have tried\n> setting 'track_activities', 'track_counts' and 'autovacuum' to 'off'\n> (which has reduced CPU and I/O a bit)\n \nYou're going to regret that very soon, unless you are *very* sure you\nhave adequate manual vacuums scheduled.\n \nhttp://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html\n \n-Kevin\n", "msg_date": "Fri, 04 Sep 2009 17:09:37 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeking performance advice and explanation for\n\t high I/O on 8.3" } ]
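[Editor's note: a minimal sketch pulling together the settings discussed in this thread. The stats_temp_directory parameter exists from 8.4 onward, and the tmpfs mount point shown is an assumption for illustration; the FSM figures simply restate the advice given above for the 8.3 installation.]

# postgresql.conf, 8.4+: write the statistics temp file to a ramdisk
# (/var/run/pg_stats_tmp is a hypothetical tmpfs mount, not from the posts)
stats_temp_directory = '/var/run/pg_stats_tmp'

# keep the collector and autovacuum on; turning them off trades a little
# I/O now for bloat and transaction-wraparound trouble later
track_counts = on
autovacuum = on

# 8.3 only: size the free space map from the last lines of
#   vacuumdb -a -v -z   (here: "734736 page slots are required")
max_fsm_pages = 1000000
max_fsm_relations = 50000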
[ { "msg_contents": "I am new to PostgreSQL and I am evaluating it for use as a data\nwarehouse. I am really struggling to get a simple query to perform\nwell. I have put the appropriate indexes on the table (at least they\nare appropriate from my use with several other RDBMS's). However, the\nquery doesn't perform well, and I'm not sure how to get it to return in\nreasonable amount of time. Right now the query takes between 2 - 3\nminutes to return. There are about 39 million rows in the table. Here\nis all of the information that I have. Please let me know if you I have\ndone anything wrong or what needs to change.\n\n \n\nThanks,\n\nMark\n\n \n\nTable Definition:\n\nCREATE TABLE temp_inventory_fact\n\n(\n\n item_id integer NOT NULL,\n\n date_id timestamp with time zone NOT NULL,\n\n \"CBL_Key\" integer NOT NULL,\n\n product_group_id integer NOT NULL,\n\n supplier_id numeric(19) NOT NULL,\n\n \"Cost\" numeric(19,9) NOT NULL,\n\n qty_on_hand numeric(19,9) NOT NULL,\n\n qty_allocated numeric(19,9) NOT NULL,\n\n qty_backordered numeric(19,9) NOT NULL,\n\n qty_on_po numeric(19,9) NOT NULL,\n\n qty_in_transit numeric(19,9) NOT NULL,\n\n qty_reserved numeric(19,9) NOT NULL,\n\n nonstock_id boolean NOT NULL\n\n)\n\nWITH (\n\n OIDS=FALSE\n\n);\n\n \n\nQuery:\n\nselect product_group_id, SUM(\"Cost\")\n\nFROM temp_inventory_Fact\n\nwhere product_group_id < 100\n\ngroup by product_group_id\n\norder by product_group_id\n\nlimit 50;\n\n \n\nIndexes on table:\n\nCREATE INDEX idx_temp_inventory_fact_product_cost ON temp_inventory_fact\n(product_group_id, \"Cost\");\n\nCREATE INDEX idx_temp_inventory_fact_product ON temp_inventory_fact\n(product_group_id);\n\n\n\n\n\n\n\n\n\n\n\nI am new to PostgreSQL and I am evaluating it for use as a\ndata  warehouse.  I am really struggling to get a simple query to\nperform well.  I have put the appropriate indexes on the table (at least\nthey are appropriate from my use with several other RDBMS’s). \nHowever, the query doesn’t perform well, and I’m not sure how to\nget it to return in reasonable amount of time.  Right now the query takes\nbetween 2 – 3 minutes to return.  There are about 39 million rows in\nthe table. Here is all of the information that I have.  
Please let me know\nif you I have done anything wrong or what needs to change.\n \nThanks,\nMark\n \nTable Definition:\nCREATE TABLE temp_inventory_fact\n(\n  item_id integer NOT NULL,\n  date_id timestamp with time zone NOT NULL,\n  \"CBL_Key\" integer NOT NULL,\n  product_group_id integer NOT NULL,\n  supplier_id numeric(19) NOT NULL,\n  \"Cost\" numeric(19,9) NOT NULL,\n  qty_on_hand numeric(19,9) NOT NULL,\n  qty_allocated numeric(19,9) NOT NULL,\n  qty_backordered numeric(19,9) NOT NULL,\n  qty_on_po numeric(19,9) NOT NULL,\n  qty_in_transit numeric(19,9) NOT NULL,\n  qty_reserved numeric(19,9) NOT NULL,\n  nonstock_id boolean NOT NULL\n)\nWITH (\n  OIDS=FALSE\n);\n \nQuery:\nselect product_group_id, SUM(\"Cost\")\nFROM temp_inventory_Fact\nwhere product_group_id < 100\ngroup by product_group_id\norder by product_group_id\nlimit 50;\n \nIndexes on table:\nCREATE INDEX idx_temp_inventory_fact_product_cost ON\ntemp_inventory_fact (product_group_id, \"Cost\");\nCREATE INDEX idx_temp_inventory_fact_product ON\ntemp_inventory_fact (product_group_id);", "msg_date": "Thu, 3 Sep 2009 09:33:10 -0400", "msg_from": "Mark Starkman <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL not using index for statement with group by" }, { "msg_contents": "Mark Starkman <[email protected]> wrote:\n \n> I'm not sure how to get it to return in\n> reasonable amount of time.\n \nSome more information could help.\n \nWhat version of PostgreSQL is this?\n \nPlease give an overview of the hardware and OS.\n \nPlease show your postgresql.conf file, excluding comments.\n \nPlease run your query with EXPLAIN ANALYZE in front, so we can see the\nexecution plan, with cost estimates compared to actual information. \nIf the the plan indicates a sequential scan, and you think an indexed\nscan may be faster, you might be able to coerce it into the indexed\nplan for diagnostic purposes by running this on the connection before\nan EXPLAIN ANALYZE run:\n \nset enable_seqscan = off;\n \nYou don't want to leave it off, or try to use that in production, but\nit might be useful in figuring out what's going on.\n \nThat might be enough to diagnose the issue.\n \n-Kevin\n", "msg_date": "Thu, 03 Sep 2009 17:03:45 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL not using index for statement with\n\t group by" }, { "msg_contents": "On Thu, Sep 3, 2009 at 7:33 AM, Mark Starkman<[email protected]> wrote:\n> I am new to PostgreSQL and I am evaluating it for use as a data  warehouse.\n> I am really struggling to get a simple query to perform well.  I have put\n> the appropriate indexes on the table (at least they are appropriate from my\n> use with several other RDBMS’s).\n\nOk, first things first. Pgsql isn't like most other dbms. It's\nindexes do not contain visibility info, which means that if the db\nwere to use the indexes to look up entries in a table, it still has to\ngo back to the table to look those values up to see if they are\nvisible to the current transation.\n\nSo, if you're retrieving a decent percentage of the table, it's\ncheaper to just hit the table. Note that this makes PostgreSQL poorly\nsuited for very wide tables.\n\nGenerally the trick to making large accesses run fast in pgsql is to\nincrease work_mem. 
But some queries just aren't efficient in pgsql\nthat can be efficient in other dbs.\n\nPossibly clustering on product_group_id would help.\n", "msg_date": "Thu, 3 Sep 2009 20:10:48 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL not using index for statement with group by" } ]
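[Editor's note: a short sketch of the two suggestions made in this thread, session-level work_mem and clustering, using the table and index names from the original post; the 256MB figure is only an illustration, not a value recommended by the posters.]

-- give the aggregate room to work in memory, for this session only
SET work_mem = '256MB';

-- physically order the table on the grouping column so the
-- product_group_id < 100 range touches far fewer heap pages
CLUSTER temp_inventory_fact USING idx_temp_inventory_fact_product;
ANALYZE temp_inventory_fact;

-- then compare the plan and timing again
EXPLAIN ANALYZE
SELECT product_group_id, SUM("Cost")
FROM temp_inventory_fact
WHERE product_group_id < 100
GROUP BY product_group_id
ORDER BY product_group_id
LIMIT 50;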
[ { "msg_contents": "\n\nSo as I understand, what you need is an online database program able to\nperform ETL tasks, that works in the cloud. \n\nThere are a few companies out there able to perform what you are asking.\nWhat I could propose is a company called Talend. With Talend On Demand. \n\nThis solution is based on the open source Talend Open Studio. You are\noffered a collaborative platform to work on, meaning that all your teams in\ndifferent countries will be working on the same database on a secured web\nservice. \n\nGo check it out on the website: \nhttp://www.talend.com/talend-on-demand/talend-on-demand.php . Hope this\nhelps.\n\n\n\n\n\n\n\n\nRstat wrote:\n> \n> \n> Hi all, \n> \n> We are a young, new company on the market. We are starting to open up new\n> markets in other countries (Europe). \n> \n> It somewhat is a challenge for us: we can't share our data and mysql\n> database between all our different services. So here is my question: do\n> you think it would be possible to find an ETL program that could work in\n> the cloud? \n> \n> It would not have to be too complex, but sturdy and working as a Software\n> as a Service. \n> \n> Thanks a lot for your help.\n> \n\n-- \nView this message in context: http://www.nabble.com/SAAS-and-MySQL-tp25258395p25276553.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 3 Sep 2009 06:52:53 -0700 (PDT)", "msg_from": "Tguru <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAAS and MySQL" }, { "msg_contents": "On Thu, Sep 3, 2009 at 9:52 AM, Tguru<[email protected]> wrote:\n>\n>\n> So as I understand, what you need is an online database program able to\n> perform ETL tasks, that works in the cloud.\n>\n> There are a few companies out there able to perform what you are asking.\n> What I could propose is a company called Talend. With Talend On Demand.\n>\n> This solution is based on the open source Talend Open Studio. You are\n> offered a collaborative platform to work on, meaning that all your teams in\n> different countries will be working on the same database on a secured web\n> service.\n>\n> Go check it out on the website:\n> http://www.talend.com/talend-on-demand/talend-on-demand.php . Hope this\n> helps.\n>\n>\n>\n>\n>\n>\n>\n>\n> Rstat wrote:\n>>\n>>\n>> Hi all,\n>>\n>> We are a young, new company on the market. We are starting to open up new\n>> markets in other countries (Europe).\n>>\n>> It somewhat is a challenge for us: we can't share our data and mysql\n>> database between all our different services. So here is my question: do\n>> you think it would be possible to find an ETL program that could work in\n>> the cloud?\n>>\n>> It would not have to be too complex, but sturdy and working as a Software\n>> as a Service.\n>>\n>> Thanks a lot for your help.\n>>\n>\n\nhuh?\n\nmerlin\n", "msg_date": "Thu, 3 Sep 2009 19:33:56 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAAS and MySQL" } ]
[ { "msg_contents": "Hi list,\n\nI've been running this simple delete since yesterday afternoon :\n> db=# explain delete from message where datetime < '2009-03-03';\n> Seq Scan on message (cost=0.00..34131.95 rows=133158 width=6)\n> Filter: (datetime < '2009-03-03 00:00:00'::timestamp without time zone)\n\nThere is no index on that column, so a seqscan is fine. But it really \nshouldn't take > 15 hours to delete :\n\n> db=# select count(*) from message where datetime < '2009-03-03';\n> 184368\n> Time: 751.721 ms\n>\n> db=# select count(*) from message;\n> 1079463\n> Time: 593.899 ms\n>\n> db=# select pg_size_pretty(pg_relation_size('message')); \n> 161 MB\n> Time: 96.062 ms\n>\n> db=# \\o /dev/null \n> db=# select * from message where datetime < '2009-03-03';\n> Time: 4975.123 ms\n\n\nMost of the time, there is no other connection to that database. This is on an \noldish laptop. atop reports 100% cpu and about 24KB/s of writes for postgres. \nMachine is mostly idle (although I did run a multi-hours compile during the \nnight). Nothing looks wrong in postgres logs.\n\nPostgreSQL 8.3.7 on i686-pc-linux-gnu, compiled by GCC i686-pc-linux-gnu-gcc \n(Gentoo 4.3.2-r3 p1.6, pie-10.1.5) 4.3.2\n\npostgresql.conf :\n> max_connections = 100\n> shared_buffers = 24MB\n> max_fsm_pages = 153600\n> log_destination = 'stderr'\n> logging_collector = on\n> log_directory = '/var/log/postgres/'\n> log_filename = '%Y-%m-%d_%H%M%S.log'\n> log_rotation_size = 100MB\n> log_min_duration_statement = 30000\n> log_line_prefix = '%t %d %p '\n> datestyle = 'iso, mdy'\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n> default_text_search_config = 'pg_catalog.english'\n\n\nNot sure what to look at to debug this further (I could work around the \nproblem with pg_dump + grep, but that's beside the point). Any idea ?\n\n\nThanks.\n\n-- \nVincent de Phily\nMobile Devices\n+33 (0) 666 301 306\n+33 (0) 142 119 325\n\nWarning\nThis message (and any associated files) is intended only for the use of its\nintended recipient and may contain information that is confidential, subject\nto copyright or constitutes a trade secret. If you are not the intended\nrecipient you are hereby notified that any dissemination, copying or\ndistribution of this message, or files associated with this message, is\nstrictly prohibited. If you have received this message in error, please\nnotify us immediately by replying to the message and deleting it from your\ncomputer. Any views or opinions presented are solely those of the author\[email protected] and do not necessarily represent those of \nthe\ncompany. Although the company has taken reasonable precautions to ensure no\nviruses are present in this email, the company cannot accept responsibility\nfor any loss or damage arising from the use of this email or attachments.\n", "msg_date": "Fri, 4 Sep 2009 12:39:21 +0200", "msg_from": "Vincent de Phily <[email protected]>", "msg_from_op": true, "msg_subject": "slow query : very simple delete, 100% cpu, nearly no disk activity" }, { "msg_contents": "Vincent de Phily <[email protected]> writes:\n> I've been running this simple delete since yesterday afternoon :\n>> db=# explain delete from message where datetime < '2009-03-03';\n>> Seq Scan on message (cost=0.00..34131.95 rows=133158 width=6)\n>> Filter: (datetime < '2009-03-03 00:00:00'::timestamp without time zone)\n\n> There is no index on that column, so a seqscan is fine. 
But it really \n> shouldn't take > 15 hours to delete :\n\n99% of the time, the reason a delete takes way longer than it seems like\nit should is trigger firing time. In particular, foreign key triggers\nwhere you don't have an index on the referencing column. Are there\nany foreign keys linking to this table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Sep 2009 21:25:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query : very simple delete, 100% cpu,\n\tnearly no disk activity" }, { "msg_contents": "On Monday 07 September 2009 03:25:23 Tom Lane wrote:\n> Vincent de Phily <[email protected]> writes:\n> > I've been running this simple delete since yesterday afternoon :\n> >> db=# explain delete from message where datetime < '2009-03-03';\n> >> Seq Scan on message (cost=0.00..34131.95 rows=133158 width=6)\n> >> Filter: (datetime < '2009-03-03 00:00:00'::timestamp without time zone)\n> >\n> > There is no index on that column, so a seqscan is fine. But it really\n> > shouldn't take > 15 hours to delete :\n>\n> 99% of the time, the reason a delete takes way longer than it seems like\n> it should is trigger firing time. In particular, foreign key triggers\n> where you don't have an index on the referencing column. Are there\n> any foreign keys linking to this table?\n\nYes, but they look fine to me (?). Only one FK references the table; it's an \ninternal reference :\n\n Table \"public.message\"\n Column | Type | Modifiers\n-----------+-----------------------------+------------------------------------------------------\n id | integer | not null default \nnextval('message_id_seq'::regclass)\n unitid | integer | not null\n userid | integer |\n refid | integer |\n(...)\nIndexes:\n \"message_pkey\" PRIMARY KEY, btree (id)\n \"message_unitid_fromto_status_idx\" btree (unitid, fromto, status)\n \"message_userid_idx\" btree (userid)\nForeign-key constraints:\n \"message_refid_fkey\" FOREIGN KEY (refid) REFERENCES message(id) ON UPDATE \nCASCADE ON DELETE CASCADE\n \"message_unitid_fkey\" FOREIGN KEY (unitid) REFERENCES units(id) ON UPDATE \nCASCADE ON DELETE CASCADE\n \"message_userid_fkey\" FOREIGN KEY (userid) REFERENCES users(id) ON UPDATE \nCASCADE ON DELETE CASCADE\n\n Table \"public.units\"\n Column | Type | Modifiers\n-------------+-----------------------------+----------------------------------------------------\n id | integer | not null default \nnextval('units_id_seq'::regclass)\n(...)\nIndexes:\n \"units_pkey\" PRIMARY KEY, btree (id)\n \"units_modid_ukey\" UNIQUE, btree (modid)\n \"units_profileid_idx\" btree (profileid)\nForeign-key constraints:\n \"units_profileid_fkey\" FOREIGN KEY (profileid) REFERENCES profiles(id) ON \nUPDATE CASCADE ON DELETE RESTRICT\n\n Table \"public.users\"\n Column | Type | Modifiers\n----------+-----------------------+----------------------------------------------------\n id | integer | not null default \nnextval('users_id_seq'::regclass)\n(...)\nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (id)\n \"users_login_ukey\" UNIQUE, btree (login)\n\n\nTable users has a handdull of rows, table units has around 40000. 43% of \nmessage.refid is NULL.\n\nThe delete finished during the weekend (DELETE 184368). Nothing in the logs \nexcept the duration time (103113291.307 ms). 
I took a db dump before the \ndelete finished, in order to be able to reproduce the issue (a 30min test \nshows me it is still slow).\n\n-- \nVincent de Phily\nMobile Devices\n+33 (0) 666 301 306\n+33 (0) 142 119 325\n\nWarning\nThis message (and any associated files) is intended only for the use of its\nintended recipient and may contain information that is confidential, subject\nto copyright or constitutes a trade secret. If you are not the intended\nrecipient you are hereby notified that any dissemination, copying or\ndistribution of this message, or files associated with this message, is\nstrictly prohibited. If you have received this message in error, please\nnotify us immediately by replying to the message and deleting it from your\ncomputer. Any views or opinions presented are solely those of the author\[email protected] and do not necessarily represent those of \nthe\ncompany. Although the company has taken reasonable precautions to ensure no\nviruses are present in this email, the company cannot accept responsibility\nfor any loss or damage arising from the use of this email or attachments.\n", "msg_date": "Mon, 7 Sep 2009 11:05:02 +0200", "msg_from": "Vincent de Phily <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query : very simple delete, 100% cpu,\n nearly no disk activity" }, { "msg_contents": "On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily\n<[email protected]> wrote:\n> On Monday 07 September 2009 03:25:23 Tom Lane wrote:\n>> Vincent de Phily <[email protected]> writes:\n>> > I've been running this simple delete since yesterday afternoon :\n>> >> db=# explain delete from message where datetime < '2009-03-03';\n>> >> Seq Scan on message  (cost=0.00..34131.95 rows=133158 width=6)\n>> >> Filter: (datetime < '2009-03-03 00:00:00'::timestamp without time zone)\n>> >\n>> > There is no index on that column, so a seqscan is fine. But it really\n>> > shouldn't take > 15 hours to delete :\n>>\n>> 99% of the time, the reason a delete takes way longer than it seems like\n>> it should is trigger firing time.  In particular, foreign key triggers\n>> where you don't have an index on the referencing column.  Are there\n>> any foreign keys linking to this table?\n>\n> Yes, but they look fine to me (?). 
Only one FK references the table; it's an\n> internal reference :\n>\n>                                     Table \"public.message\"\n>  Column   |            Type             |                      Modifiers\n> -----------+-----------------------------+------------------------------------------------------\n>  id        | integer                     | not null default\n> nextval('message_id_seq'::regclass)\n>  unitid    | integer                     | not null\n>  userid    | integer                     |\n>  refid     | integer                     |\n> (...)\n> Indexes:\n>    \"message_pkey\" PRIMARY KEY, btree (id)\n>    \"message_unitid_fromto_status_idx\" btree (unitid, fromto, status)\n>    \"message_userid_idx\" btree (userid)\n> Foreign-key constraints:\n>    \"message_refid_fkey\" FOREIGN KEY (refid) REFERENCES message(id) ON UPDATE\n> CASCADE ON DELETE CASCADE\n>    \"message_unitid_fkey\" FOREIGN KEY (unitid) REFERENCES units(id) ON UPDATE\n> CASCADE ON DELETE CASCADE\n>    \"message_userid_fkey\" FOREIGN KEY (userid) REFERENCES users(id) ON UPDATE\n> CASCADE ON DELETE CASCADE\n>\n>                                      Table \"public.units\"\n>   Column    |            Type             |                     Modifiers\n> -------------+-----------------------------+----------------------------------------------------\n>  id          | integer                     | not null default\n> nextval('units_id_seq'::regclass)\n> (...)\n> Indexes:\n>    \"units_pkey\" PRIMARY KEY, btree (id)\n>    \"units_modid_ukey\" UNIQUE, btree (modid)\n>    \"units_profileid_idx\" btree (profileid)\n> Foreign-key constraints:\n>    \"units_profileid_fkey\" FOREIGN KEY (profileid) REFERENCES profiles(id) ON\n> UPDATE CASCADE ON DELETE RESTRICT\n>\n>                                 Table \"public.users\"\n>  Column  |         Type          |                     Modifiers\n> ----------+-----------------------+----------------------------------------------------\n>  id       | integer               | not null default\n> nextval('users_id_seq'::regclass)\n> (...)\n> Indexes:\n>    \"users_pkey\" PRIMARY KEY, btree (id)\n>    \"users_login_ukey\" UNIQUE, btree (login)\n>\n>\n> Table users has a handdull of rows, table units has around 40000. 43% of\n> message.refid is NULL.\n>\n> The delete finished during the weekend (DELETE 184368). Nothing in the logs\n> except the duration time (103113291.307 ms). I took a db dump before the\n> delete finished, in order to be able to reproduce the issue (a 30min test\n> shows me it is still slow).\n\nI would try EXPLAIN ANALYZE DELETE ... with a query that is modified\nso as to delete only a handful of rows. That will report the amount\nof time spent in triggers vs. 
the main query, which will help you\nassess whether your conclusion that the foreign keys are OK is\ncorrect.\n\n...Robert\n", "msg_date": "Fri, 11 Sep 2009 17:30:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query : very simple delete, 100% cpu, nearly no\n\tdisk activity" }, { "msg_contents": "On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily\n<[email protected]> wrote:\n> On Monday 07 September 2009 03:25:23 Tom Lane wrote:\n>> Vincent de Phily <[email protected]> writes:\n>> > I've been running this simple delete since yesterday afternoon :\n>> >> db=# explain delete from message where datetime < '2009-03-03';\n>> >> Seq Scan on message  (cost=0.00..34131.95 rows=133158 width=6)\n>> >> Filter: (datetime < '2009-03-03 00:00:00'::timestamp without time zone)\n>> >\n>> > There is no index on that column, so a seqscan is fine. But it really\n>> > shouldn't take > 15 hours to delete :\n>>\n>> 99% of the time, the reason a delete takes way longer than it seems like\n>> it should is trigger firing time.  In particular, foreign key triggers\n>> where you don't have an index on the referencing column.  Are there\n>> any foreign keys linking to this table?\n>\n> Yes, but they look fine to me (?). Only one FK references the table; it's an\n> internal reference :\n>\n>                                     Table \"public.message\"\n>  Column   |            Type             |                      Modifiers\n> -----------+-----------------------------+------------------------------------------------------\n>  id        | integer                     | not null default\n> nextval('message_id_seq'::regclass)\n>  unitid    | integer                     | not null\n>  userid    | integer                     |\n>  refid     | integer                     |\n\n> Indexes:\n>    \"message_pkey\" PRIMARY KEY, btree (id)\n>    \"message_unitid_fromto_status_idx\" btree (unitid, fromto, status)\n>    \"message_userid_idx\" btree (userid)\n> Foreign-key constraints:\n>    \"message_refid_fkey\" FOREIGN KEY (refid) REFERENCES message(id) ON UPDATE\n> CASCADE ON DELETE CASCADE\n>    \"message_unitid_fkey\" FOREIGN KEY (unitid) REFERENCES units(id) ON UPDATE\n> CASCADE ON DELETE CASCADE\n>    \"message_userid_fkey\" FOREIGN KEY (userid) REFERENCES users(id) ON UPDATE\n> CASCADE ON DELETE CASCADE\n\nwhere is the index on refid?\n\nmerlin\n", "msg_date": "Fri, 11 Sep 2009 17:55:09 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query : very simple delete, 100% cpu, nearly no\n\tdisk activity" }, { "msg_contents": "On Friday 11 September 2009 23:55:09 Merlin Moncure wrote:\n> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily\n> <[email protected]> wrote: \n> >                                     Table \"public.message\"\n> >  Column   |            Type             |                      Modifiers\n> > -----------+-----------------------------+-------------------------------\n> >----------------------- id        | integer                     | not null\n> > default\n> > nextval('message_id_seq'::regclass)\n> >  unitid    | integer                     | not null\n> >  userid    | integer                     |\n> >  refid     | integer                     |\n> >\n> > Indexes:\n> >    \"message_pkey\" PRIMARY KEY, btree (id)\n> >    \"message_unitid_fromto_status_idx\" btree (unitid, fromto, status)\n> >    \"message_userid_idx\" btree (userid)\n> > Foreign-key constraints:\n> >    \"message_refid_fkey\" FOREIGN 
KEY (refid) REFERENCES message(id) ON\n> > UPDATE CASCADE ON DELETE CASCADE\n> >    \"message_unitid_fkey\" FOREIGN KEY (unitid) REFERENCES units(id) ON\n> > UPDATE CASCADE ON DELETE CASCADE\n> >    \"message_userid_fkey\" FOREIGN KEY (userid) REFERENCES users(id) ON\n> > UPDATE CASCADE ON DELETE CASCADE\n>\n> where is the index on refid?\n\nIt's\n\"message_pkey\" PRIMARY KEY, btree (id)\nbecause\n(refid) REFERENCES message(id)\n\n\n-- \nVincent de Phily\nMobile Devices\n+33 (0) 666 301 306\n+33 (0) 142 119 325\n\nWarning\nThis message (and any associated files) is intended only for the use of its\nintended recipient and may contain information that is confidential, subject\nto copyright or constitutes a trade secret. If you are not the intended\nrecipient you are hereby notified that any dissemination, copying or\ndistribution of this message, or files associated with this message, is\nstrictly prohibited. If you have received this message in error, please\nnotify us immediately by replying to the message and deleting it from your\ncomputer. Any views or opinions presented are solely those of the author\[email protected] and do not necessarily represent those of \nthe\ncompany. Although the company has taken reasonable precautions to ensure no\nviruses are present in this email, the company cannot accept responsibility\nfor any loss or damage arising from the use of this email or attachments.\n", "msg_date": "Mon, 21 Sep 2009 16:50:23 +0200", "msg_from": "Vincent de Phily <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query : very simple delete, 100% cpu,\n\tnearly no disk activity" }, { "msg_contents": "On Friday 11 September 2009 23:30:37 Robert Haas wrote:\n> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily\n> <[email protected]> wrote:\n> > On Monday 07 September 2009 03:25:23 Tom Lane wrote:\n> >>\n> >> 99% of the time, the reason a delete takes way longer than it seems like\n> >> it should is trigger firing time.  In particular, foreign key triggers\n> >> where you don't have an index on the referencing column.  Are there\n> >> any foreign keys linking to this table?\n> >\n> > Yes, but they look fine to me (?). Only one FK references the table; it's\n> > an internal reference :\n> >\n(...)\n> I would try EXPLAIN ANALYZE DELETE ... with a query that is modified\n> so as to delete only a handful of rows. That will report the amount\n> of time spent in triggers vs. the main query, which will help you\n> assess whether your conclusion that the foreign keys are OK is\n> correct.\n\nGood idea. I'll try that in a little while and report the result.\n\n-- \nVincent de Phily\nMobile Devices\n+33 (0) 666 301 306\n+33 (0) 142 119 325\n\nWarning\nThis message (and any associated files) is intended only for the use of its\nintended recipient and may contain information that is confidential, subject\nto copyright or constitutes a trade secret. If you are not the intended\nrecipient you are hereby notified that any dissemination, copying or\ndistribution of this message, or files associated with this message, is\nstrictly prohibited. If you have received this message in error, please\nnotify us immediately by replying to the message and deleting it from your\ncomputer. Any views or opinions presented are solely those of the author\[email protected] and do not necessarily represent those of \nthe\ncompany. 
Although the company has taken reasonable precautions to ensure no\nviruses are present in this email, the company cannot accept responsibility\nfor any loss or damage arising from the use of this email or attachments.\n", "msg_date": "Mon, 21 Sep 2009 16:53:49 +0200", "msg_from": "Vincent de Phily <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query : very simple delete, 100% cpu,\n\tnearly no disk activity" }, { "msg_contents": "On Mon, Sep 21, 2009 at 10:50 AM, Vincent de Phily\n<[email protected]> wrote:\n> On Friday 11 September 2009 23:55:09 Merlin Moncure wrote:\n>> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily\n>> <[email protected]> wrote:\n>> >                                     Table \"public.message\"\n>> >  Column   |            Type             |                      Modifiers\n>> > -----------+-----------------------------+-------------------------------\n>> >----------------------- id        | integer                     | not null\n>> > default\n>> > nextval('message_id_seq'::regclass)\n>> >  unitid    | integer                     | not null\n>> >  userid    | integer                     |\n>> >  refid     | integer                     |\n>> >\n>> > Indexes:\n>> >    \"message_pkey\" PRIMARY KEY, btree (id)\n>> >    \"message_unitid_fromto_status_idx\" btree (unitid, fromto, status)\n>> >    \"message_userid_idx\" btree (userid)\n>> > Foreign-key constraints:\n>> >    \"message_refid_fkey\" FOREIGN KEY (refid) REFERENCES message(id) ON\n>> > UPDATE CASCADE ON DELETE CASCADE\n>> >    \"message_unitid_fkey\" FOREIGN KEY (unitid) REFERENCES units(id) ON\n>> > UPDATE CASCADE ON DELETE CASCADE\n>> >    \"message_userid_fkey\" FOREIGN KEY (userid) REFERENCES users(id) ON\n>> > UPDATE CASCADE ON DELETE CASCADE\n>>\n>> where is the index on refid?\n>\n> It's\n> \"message_pkey\" PRIMARY KEY, btree (id)\n> because\n> (refid) REFERENCES message(id)\n\nYou are thinking about this backwards. Every time you delete a\nmessage, the table has to be scanned for any messages that reference\nthe message being deleted because of the refid constraint (in order to\nsee if any deletions must be cascaded). 
PostgreSQL creates a backing\nindex for primary keys automatically but not foreign keys...so you\nlikely need to create an index on refid.\n\nmerlin\n", "msg_date": "Mon, 21 Sep 2009 11:00:36 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query : very simple delete, 100% cpu, nearly no\n\tdisk activity" }, { "msg_contents": "On Monday 21 September 2009 17:00:36 Merlin Moncure wrote:\n> On Mon, Sep 21, 2009 at 10:50 AM, Vincent de Phily\n>\n> <[email protected]> wrote:\n> > On Friday 11 September 2009 23:55:09 Merlin Moncure wrote:\n> >> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily\n> >>\n> >> <[email protected]> wrote:\n> >> >                                     Table \"public.message\"\n> >> >  Column   |            Type             |                    \n> >> >  Modifiers\n> >> > -----------+-----------------------------+----------------------------\n> >> >--- ----------------------- id        | integer                     |\n> >> > not null default\n> >> > nextval('message_id_seq'::regclass)\n> >> >  unitid    | integer                     | not null\n> >> >  userid    | integer                     |\n> >> >  refid     | integer                     |\n> >> >\n> >> > Indexes:\n> >> >    \"message_pkey\" PRIMARY KEY, btree (id)\n> >> >    \"message_unitid_fromto_status_idx\" btree (unitid, fromto, status)\n> >> >    \"message_userid_idx\" btree (userid)\n> >> > Foreign-key constraints:\n> >> >    \"message_refid_fkey\" FOREIGN KEY (refid) REFERENCES message(id) ON\n> >> > UPDATE CASCADE ON DELETE CASCADE\n> >> >    \"message_unitid_fkey\" FOREIGN KEY (unitid) REFERENCES units(id) ON\n> >> > UPDATE CASCADE ON DELETE CASCADE\n> >> >    \"message_userid_fkey\" FOREIGN KEY (userid) REFERENCES users(id) ON\n> >> > UPDATE CASCADE ON DELETE CASCADE\n> >>\n> >> where is the index on refid?\n> >\n> > It's\n> > \"message_pkey\" PRIMARY KEY, btree (id)\n> > because\n> > (refid) REFERENCES message(id)\n>\n> You are thinking about this backwards. Every time you delete a\n> message, the table has to be scanned for any messages that reference\n> the message being deleted because of the refid constraint (in order to\n> see if any deletions must be cascaded). PostgreSQL creates a backing\n> index for primary keys automatically but not foreign keys...so you\n> likely need to create an index on refid.\n\nD'Oh ! Sounds obvious now that you mention it, and it's a very good \nexplanation of the delete's slowness.\n\nI'll test this tonight or tomorrow.\n\n\n-- \nVincent de Phily\nMobile Devices\n+33 (0) 666 301 306\n+33 (0) 142 119 325\n\nWarning\nThis message (and any associated files) is intended only for the use of its\nintended recipient and may contain information that is confidential, subject\nto copyright or constitutes a trade secret. If you are not the intended\nrecipient you are hereby notified that any dissemination, copying or\ndistribution of this message, or files associated with this message, is\nstrictly prohibited. If you have received this message in error, please\nnotify us immediately by replying to the message and deleting it from your\ncomputer. Any views or opinions presented are solely those of the author\[email protected] and do not necessarily represent those of \nthe\ncompany. 
Although the company has taken reasonable precautions to ensure no\nviruses are present in this email, the company cannot accept responsibility\nfor any loss or damage arising from the use of this email or attachments.\n", "msg_date": "Mon, 21 Sep 2009 18:06:50 +0200", "msg_from": "Vincent de Phily <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query : very simple delete, 100% cpu,\n\tnearly no disk activity" } ]
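[Editor's note: the fix this thread converges on, sketched for reference. Table and column names come from the posts above; the index name and the id < 1000 slice are invented for illustration. EXPLAIN ANALYZE reports the time spent in the foreign-key triggers, and the ROLLBACK keeps the test delete from sticking.]

-- index the referencing column so the ON DELETE CASCADE check behind
-- message_refid_fkey becomes an index probe instead of a seq scan
CREATE INDEX message_refid_idx ON message (refid);

-- verify on a handful of rows and look at the trigger timing lines
BEGIN;
EXPLAIN ANALYZE
DELETE FROM message
WHERE datetime < '2009-03-03' AND id < 1000;
ROLLBACK;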
[ { "msg_contents": "Does the planner know how to use indices to optimize these queries?\n\nFor reference, I was having SEVERE performance problems with the\nfollowing comparison in an SQL statement where \"mask\" was an integer:\n\n\"select ... from .... where ...... and (permission & mask = permission)\"\n\nThis resulted in the planner deciding to run a nested loop and\nextraordinarily poor performance.\n\nI can probably recode the application to use a field of type \"bit(32)\"\nand either cast to an integer or have the code do the conversion\ninternally (its just a shift eh?)\n\nThe question is whether the above statement will be reasonably planned\nif \"mask\" is a bit type.\n\n\n-- Karl Denninger", "msg_date": "Fri, 04 Sep 2009 14:15:19 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Planner question - \"bit\" data types" }, { "msg_contents": "Karl,\n\n> For reference, I was having SEVERE performance problems with the\n> following comparison in an SQL statement where \"mask\" was an integer:\n> \n> \"select ... from .... where ...... and (permission & mask = permission)\"\n\nAFAIK, the only way to use an index on these queries is through\nexpression indexes. That's why a lot of folks use INTARRAY instead; it\ncomes with a GIN index type.\n\nIt would probably be possible to create a new index type using GiST or\nGIN which indexed bitstrings automatically, but I don't know that anyone\nhas done it yet.\n\nChanging your integer to a bitstring will not, to my knowledge, improve\nthis.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 04 Sep 2009 15:29:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "On Fri, Sep 4, 2009 at 6:29 PM, Josh Berkus<[email protected]> wrote:\n> Karl,\n>\n>> For reference, I was having SEVERE performance problems with the\n>> following comparison in an SQL statement where \"mask\" was an integer:\n>>\n>> \"select ... from .... where ...... and (permission & mask = permission)\"\n>\n> AFAIK, the only way to use an index on these queries is through\n> expression indexes.  That's why a lot of folks use INTARRAY instead; it\n> comes with a GIN index type.\n>\n> It would probably be possible to create a new index type using GiST or\n> GIN which indexed bitstrings automatically, but I don't know that anyone\n> has done it yet.\n>\n> Changing your integer to a bitstring will not, to my knowledge, improve\n> this.\n\nagreed. also, gist/gin is no free lunch, maintaining these type of\nindexes is fairly expensive.\n\nIf you are only interested in one or a very small number of cases of\n'permission', you can use an expression index to target constant\nvalues:\n\n\"select ... from .... where ...... and (permission & mask = permission)\"\n\ncreate index foo_permission_xyz_idx on foo((64 & mask = 64));\nselect * from foo where 64 & mask = 64; --indexed!\n\nthis optimizes a _particular_ case of permission into a boolean based\nindex. 
this can be a big win if the # of matching cases is very small\nor you want to use this in a multi-column index.\n\nmerlin\n", "msg_date": "Sat, 5 Sep 2009 16:09:07 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> If you are only interested in one or a very small number of cases of\n> 'permission', you can use an expression index to target constant\n> values:\n\n> \"select ... from .... where ...... and (permission & mask = permission)\"\n\n> create index foo_permission_xyz_idx on foo((64 & mask = 64));\n> select * from foo where 64 & mask = 64; --indexed!\n\nA possibly more useful variant is to treat the permission condition\nas a partial index's WHERE condition. The advantage of that is that\nthe index's actual content can be some other column, so that you can\ncombine the permission check with a second indexable test. The index\nis still available for queries that don't use the other column, but\nit's more useful for those that do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Sep 2009 16:59:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types " }, { "msg_contents": "Tom Lane wrote:\n> Merlin Moncure <[email protected]> writes:\n> \n>> If you are only interested in one or a very small number of cases of\n>> 'permission', you can use an expression index to target constant\n>> values:\n>> \n>\n> \n>> \"select ... from .... where ...... and (permission & mask = permission)\"\n>> \n>\n> \n>> create index foo_permission_xyz_idx on foo((64 & mask = 64));\n>> select * from foo where 64 & mask = 64; --indexed!\n>> \n>\n> A possibly more useful variant is to treat the permission condition\n> as a partial index's WHERE condition. The advantage of that is that\n> the index's actual content can be some other column, so that you can\n> combine the permission check with a second indexable test. The index\n> is still available for queries that don't use the other column, but\n> it's more useful for those that do.\n>\n> \t\t\tregards, tom lane\n>\n> \nThat doesn't help in this case as the returned set will typically be\nquite large, with the condition typically being valid on anywhere from\n10-80% of the returned tuples.\n\nWhat I am trying to avoid is creating a boolean column for EACH\npotential bit (and an index on each), as that makes the schema\nnon-portable for others and quite messy as well - while there are a\nhandful of \"known masks\" the system also has a number of \"user defined\"\nbit positions that vary from installation to installation.\n\n\n-- Karl", "msg_date": "Sat, 05 Sep 2009 16:09:36 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Karl Denninger <[email protected]> writes:\n> That doesn't help in this case as the returned set will typically be\n> quite large, with the condition typically being valid on anywhere from\n> 10-80% of the returned tuples.\n\nIn that case you'd be wasting your time to get it to use an index\nfor the condition anyway. 
Maybe you need to take a step back and\nlook at the query as a whole rather than focus on this particular\ncondition.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Sep 2009 17:33:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types " }, { "msg_contents": "Tom Lane wrote:\n> Karl Denninger <[email protected]> writes:\n> \n>> That doesn't help in this case as the returned set will typically be\n>> quite large, with the condition typically being valid on anywhere from\n>> 10-80% of the returned tuples.\n>> \n>\n> In that case you'd be wasting your time to get it to use an index\n> for the condition anyway. Maybe you need to take a step back and\n> look at the query as a whole rather than focus on this particular\n> condition.\n>\n> \t\t\tregards, tom lane\n>\n> \nThe query, sans this condition, is extremely fast and contains a LOT of\nother conditions (none of which cause trouble.)\n\nIt is only attempting to filter the returned tuples on the permission\nbit(s) involved that cause trouble.\n\n-- Karl", "msg_date": "Sat, 05 Sep 2009 16:39:45 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Karl Denninger <[email protected]> writes:\n> Tom Lane wrote:\n>> In that case you'd be wasting your time to get it to use an index\n>> for the condition anyway. Maybe you need to take a step back and\n>> look at the query as a whole rather than focus on this particular\n>> condition.\n\n> The query, sans this condition, is extremely fast and contains a LOT of\n> other conditions (none of which cause trouble.)\n> It is only attempting to filter the returned tuples on the permission\n> bit(s) involved that cause trouble.\n\nMy comment stands: asking about how to use an index for this is the\nwrong question.\n\nYou never showed us any EXPLAIN results, but I suspect what is happening\nis that the planner thinks the \"permission & mask = permission\"\ncondition is fairly selective (offhand I think it'd default to\nDEFAULT_EQ_SEL or 0.005) whereas the true selectivity per your prior\ncomment is only 0.1 to 0.8. This is causing it to change to a plan that\nwould be good for a small number of rows, when it should stick to a plan\nthat is good for a large number of rows.\n\nSo the right question is \"how do I fix the bad selectivity estimate?\".\nUnfortunately there's no non-kluge answer. What I think I'd try is\nwrapping the condition into a function, say\n\ncreate function permission_match(perms int, mask int) returns bool\nas $$begin return perms & mask = mask; end$$ language plpgsql\nstrict immutable;\n\nThe planner won't know what to make of \"where permission_match(perms, 64)\"\neither, but the default selectivity estimate for a boolean function\nis 0.333, much closer to what you need.\n\nOr plan B, which I'd recommend, is to forget the mask business and go\nover to a boolean column per permission flag. Then the planner would\nactually have decent statistics about the flag selectivities, and the\nqueries would be a lot more readable too. Your objection that you'd\nneed an index per flag column is misguided --- at these selectivities\nan index is really pointless. And I entirely fail to understand the\ncomplaint about it being unportable; you think \"&\" is more portable than\nboolean? 
Only one of those things is in the SQL standard.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Sep 2009 19:24:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types " }, { "msg_contents": "Tom Lane wrote:\n> Karl Denninger <[email protected]> writes:\n> \n>> Tom Lane wrote:\n>> \n>>> In that case you'd be wasting your time to get it to use an index\n>>> for the condition anyway. Maybe you need to take a step back and\n>>> look at the query as a whole rather than focus on this particular\n>>> condition.\n>>> \n>> The query, sans this condition, is extremely fast and contains a LOT of\n>> other conditions (none of which cause trouble.)\n>> It is only attempting to filter the returned tuples on the permission\n>> bit(s) involved that cause trouble.\n>> \n>\n> My comment stands: asking about how to use an index for this is the\n> wrong question.\n>\n> You never showed us any EXPLAIN results,\nYes I did. Go back and look at the archives. I provided full EXPLAIN\nand EXPLAIN ANALYZE results for the original query. Sheesh.\n> Or plan B, which I'd recommend, is to forget the mask business and go\n> over to a boolean column per permission flag. Then the planner would\n> actually have decent statistics about the flag selectivities, and the\n> queries would be a lot more readable too. Your objection that you'd\n> need an index per flag column is misguided --- at these selectivities\n> an index is really pointless. And I entirely fail to understand the\n> complaint about it being unportable; you think \"&\" is more portable than\n> boolean? Only one of those things is in the SQL standard.\n>\n> \t\t\tregards, tom lane\n> \nThe point isn't portability to other SQL engines - it is to other\npeople's installations. The bitmask is (since it requires only changing\nthe mask constants in the container file that makes the SQL calls by\nreference) where explicit columns is not by a long shot.\n\nIn any event it looks like that's the only reasonable way to do this, so\nthanks (I think)\n\n-- Karl", "msg_date": "Sat, 05 Sep 2009 18:39:14 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Karl Denninger escribi�:\n> Tom Lane wrote:\n\n> > You never showed us any EXPLAIN results,\n> Yes I did. Go back and look at the archives. I provided full EXPLAIN\n> and EXPLAIN ANALYZE results for the original query. Sheesh.\n\nYou did? Where? This is your first message in this thread:\nhttp://archives.postgresql.org/pgsql-performance/2009-09/msg00059.php\nNo EXPLAINs anywhere to be seen.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Sat, 5 Sep 2009 20:15:00 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "There was a previous thread and I referenced it. I don't have the other\none in my email system any more to follow up to it.\n\nI give up; the attack-dog crowd has successfully driven me off. Ciao.\n\nAlvaro Herrera wrote:\n> Karl Denninger escribi�:\n> \n>> Tom Lane wrote:\n>> \n>\n> \n>>> You never showed us any EXPLAIN results,\n>>> \n>> Yes I did. Go back and look at the archives. I provided full EXPLAIN\n>> and EXPLAIN ANALYZE results for the original query. Sheesh.\n>> \n>\n> You did? Where? 
This is your first message in this thread:\n> http://archives.postgresql.org/pgsql-performance/2009-09/msg00059.php\n> No EXPLAINs anywhere to be seen.\n>\n>", "msg_date": "Sat, 05 Sep 2009 19:19:04 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger<[email protected]> wrote:\n> There was a previous thread and I referenced it. I don't have the other one\n> in my email system any more to follow up to it.\n>\n> I give up; the attack-dog crowd has successfully driven me off.  Ciao.\n\nAnother more standard sql approach is to push the flags out to a\nsubordinate table. This is less efficient of course but now you get\nto use standard join tactics to match conditions...\n\n\nmerlin\n", "msg_date": "Sun, 6 Sep 2009 01:36:24 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: Karl Denninger\n> Enviado el: Sábado, 05 de Septiembre de 2009 21:19\n> Para: Alvaro Herrera\n> CC: Tom Lane; Merlin Moncure; Josh Berkus; \n> [email protected]\n> Asunto: Re: [PERFORM] Planner question - \"bit\" data types\n> \n> There was a previous thread and I referenced it. I don't have \n> the other one in my email system any more to follow up to it.\n> \n> I give up; the attack-dog crowd has successfully driven me off. Ciao.\n> \n> Alvaro Herrera wrote: \n> \n> \tKarl Denninger escribió:\n> \t \n> \n> \t\tTom Lane wrote:\n> \t\t \n> \n> \t\n> \t \n> \n> \t\t\tYou never showed us any EXPLAIN results,\n> \t\t\t \n> \n> \t\tYes I did. Go back and look at the archives. \n> I provided full EXPLAIN\n> \t\tand EXPLAIN ANALYZE results for the original \n> query. Sheesh.\n> \t\t \n> \n> \t\n> \tYou did? Where? This is your first message in this thread:\n> \t\n> http://archives.postgresql.org/pgsql-performance/2009-09/msg00059.php\n> \tNo EXPLAINs anywhere to be seen.\n> \t\n\nI guess this is the post Karl refers to:\n\nhttp://archives.postgresql.org/pgsql-sql/2009-08/msg00088.php\n\nStill you can't hope that others will recall a post 2 weeks ago, with an\nother subject and in an other list!\n\n\n", "msg_date": "Mon, 7 Sep 2009 16:33:53 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger<[email protected]> wrote:\n> There was a previous thread and I referenced it. I don't have the other one\n> in my email system any more to follow up to it.\n>\n> I give up; the attack-dog crowd has successfully driven me off.  Ciao.\n\nPerhaps I'm biased by knowing some of the people involved, but I don't\nthink anyone on this thread has been anything but polite. It would\ncertainly be great if PostgreSQL could properly estimate the\nselectivity of expressions like this without resorting to nasty hacks,\nbut it can't, and unfortunately, there's really no possibility of that\nchanging any time soon. Even if someone implements a fix today, the\nsoonest it will appear in a production release is June 2010. So, any\nsuggestion for improvement is going to be in the form of suggesting\nthat you modify the schema in some way. 
I know that's not really what\nyou're looking for, but unfortunately it's the best we can do.\n\nAs far as I can tell, it is not correct to say that you referenced the\nprevious thread. I do not see any such reference.\n\n...Robert\n", "msg_date": "Mon, 7 Sep 2009 20:49:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Robert Haas wrote:\n> On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger<[email protected]> wrote:\n> \n>> There was a previous thread and I referenced it. I don't have the other one\n>> in my email system any more to follow up to it.\n>>\n>> I give up; the attack-dog crowd has successfully driven me off. Ciao.\n>> \n>\n> Perhaps I'm biased by knowing some of the people involved, but I don't\n> think anyone on this thread has been anything but polite. It would\n> certainly be great if PostgreSQL could properly estimate the\n> selectivity of expressions like this without resorting to nasty hacks,\n> but it can't, and unfortunately, there's really no possibility of that\n> changing any time soon. Even if someone implements a fix today, the\n> soonest it will appear in a production release is June 2010. So, any\n> suggestion for improvement is going to be in the form of suggesting\n> that you modify the schema in some way. I know that's not really what\n> you're looking for, but unfortunately it's the best we can do.\n>\n> As far as I can tell, it is not correct to say that you referenced the\n> previous thread. I do not see any such reference.\n>\n> ...Robert\n>\n> \nI was asking about modifying the schema.\n\nThe current schema is an integer being used as a bitmask. If the\nplanner knows how to handle a type of \"bit(X)\" (and will at least FILTER\nrather than NESTED LOOP it on a select, as happens for an Integer used\nin this fashion), that change is easier than splitting it into\nindividual boolean fields.\n\n-- Karl", "msg_date": "Mon, 07 Sep 2009 19:51:43 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "On Mon, Sep 7, 2009 at 8:51 PM, Karl Denninger<[email protected]> wrote:\n> Robert Haas wrote:\n>\n> On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger<[email protected]> wrote:\n>\n>\n> There was a previous thread and I referenced it. I don't have the other one\n> in my email system any more to follow up to it.\n>\n> I give up; the attack-dog crowd has successfully driven me off.  Ciao.\n>\n>\n> Perhaps I'm biased by knowing some of the people involved, but I don't\n> think anyone on this thread has been anything but polite. It would\n> certainly be great if PostgreSQL could properly estimate the\n> selectivity of expressions like this without resorting to nasty hacks,\n> but it can't, and unfortunately, there's really no possibility of that\n> changing any time soon. Even if someone implements a fix today, the\n> soonest it will appear in a production release is June 2010. So, any\n> suggestion for improvement is going to be in the form of suggesting\n> that you modify the schema in some way. I know that's not really what\n> you're looking for, but unfortunately it's the best we can do.\n>\n> As far as I can tell, it is not correct to say that you referenced the\n> previous thread. I do not see any such reference.\n>\n> ...Robert\n>\n>\n>\n> I was asking about modifying the schema.\n>\n> The current schema is an integer being used as a bitmask.  
If the planner\n> knows how to handle a type of \"bit(X)\" (and will at least FILTER rather than\n> NESTED LOOP it on a select, as happens for an Integer used in this fashion),\n> that change is easier than splitting it into individual boolean fields.\n\nWell, the first several replies seem to address that question - I\nthink we all agree that won't help. I'm not sure what you mean by \"at\nleast FILTER rather than NESTED LOOP it on a select\". However,\ntypically, the time when you get a nested loop is when the planner\nbelieves that the loop will be executed very few times (in other\nwords, the outer side will return very few rows). It probably isn't\nthe case that the planner COULDN'T choose to execute the query in some\nother way; rather, the planner believes that the nested loop is faster\nbecause of a (mistaken) belief about how many rows the\nbitmap-criterion will actually match. All the suggestions you've\ngotten upthread are tricks to enable the planner to make a better\nestimate, which will hopefully cause it to choose a better plan.\n\nAs a general statement, selectivity estimation problems are very\npainful to work around and often involve substantial application\nredesign. In all honesty, I think you've run across one of the easier\nvariants. As painful as it is to hear the word easy applied to a\nproblem that's killing you, there actually IS a good solution to this\nproblem: use individual boolean fields. I know that's not what you\nwant to do, but it's better than \"sorry, you're hosed, no matter how\nyou do this it ain't gonna work\". And I do think there are a few in\nthe archives that fall into that category.\n\nGood luck, and sorry for the bad news.\n\n...Robert\n", "msg_date": "Mon, 7 Sep 2009 21:12:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Robert Haas wrote:\n> On Mon, Sep 7, 2009 at 8:51 PM, Karl Denninger<[email protected]> wrote:\n> \n>> Robert Haas wrote:\n>>\n>> On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger<[email protected]> wrote:\n>>\n>>\n>> There was a previous thread and I referenced it. I don't have the other one\n>> in my email system any more to follow up to it.\n>>\n>> I give up; the attack-dog crowd has successfully driven me off. Ciao.\n>>\n>>\n>> Perhaps I'm biased by knowing some of the people involved, but I don't\n>> think anyone on this thread has been anything but polite. It would\n>> certainly be great if PostgreSQL could properly estimate the\n>> selectivity of expressions like this without resorting to nasty hacks,\n>> but it can't, and unfortunately, there's really no possibility of that\n>> changing any time soon. Even if someone implements a fix today, the\n>> soonest it will appear in a production release is June 2010. So, any\n>> suggestion for improvement is going to be in the form of suggesting\n>> that you modify the schema in some way. I know that's not really what\n>> you're looking for, but unfortunately it's the best we can do.\n>>\n>> As far as I can tell, it is not correct to say that you referenced the\n>> previous thread. I do not see any such reference.\n>>\n>> ...Robert\n>>\n>>\n>>\n>> I was asking about modifying the schema.\n>>\n>> The current schema is an integer being used as a bitmask. 
If the planner\n>> knows how to handle a type of \"bit(X)\" (and will at least FILTER rather than\n>> NESTED LOOP it on a select, as happens for an Integer used in this fashion),\n>> that change is easier than splitting it into individual boolean fields.\n>> \n>\n> Well, the first several replies seem to address that question - I\n> think we all agree that won't help. I'm not sure what you mean by \"at\n> least FILTER rather than NESTED LOOP it on a select\". However,\n> typically, the time when you get a nested loop is when the planner\n> believes that the loop will be executed very few times (in other\n> words, the outer side will return very few rows). It probably isn't\n> the case that the planner COULDN'T choose to execute the query in some\n> other way; rather, the planner believes that the nested loop is faster\n> because of a (mistaken) belief about how many rows the\n> bitmap-criterion will actually match. All the suggestions you've\n> gotten upthread are tricks to enable the planner to make a better\n> estimate, which will hopefully cause it to choose a better plan.\n>\n> As a general statement, selectivity estimation problems are very\n> painful to work around and often involve substantial application\n> redesign. In all honesty, I think you've run across one of the easier\n> variants. As painful as it is to hear the word easy applied to a\n> problem that's killing you, there actually IS a good solution to this\n> problem: use individual boolean fields. I know that's not what you\n> want to do, but it's better than \"sorry, you're hosed, no matter how\n> you do this it ain't gonna work\". And I do think there are a few in\n> the archives that fall into that category.\n>\n> Good luck, and sorry for the bad news.\n>\n> ...Robert\n> \nThe individual boolean fields don't kill me and in terms of some of the\napplication issues they're actually rather easy to code for.\n\nThe problem with re-coding for them is extensibility (by those who\ninstall and administer the package); a mask leaves open lots of extra\nbits for \"site-specific\" use, where hard-coding booleans does not, and\nsince the executable is a binary it instantly becomes a huge problem for\neveryone but me.\n\nIt does appear, however, that a bitfield doesn't evaluate any\ndifferently than does an integer used with a mask, so there you have\nit..... 
it is what it is, and if I want this sort of selectivity in the\nsearch I have no choice.\n\n-- Karl", "msg_date": "Mon, 07 Sep 2009 21:05:59 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Karl Denninger escribi�:\n\n> The individual boolean fields don't kill me and in terms of some of the\n> application issues they're actually rather easy to code for.\n> \n> The problem with re-coding for them is extensibility (by those who\n> install and administer the package); a mask leaves open lots of extra\n> bits for \"site-specific\" use, where hard-coding booleans does not, and\n> since the executable is a binary it instantly becomes a huge problem for\n> everyone but me.\n\nDid you try hiding the bitmask operations inside a function as Tom\nsuggested?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 7 Sep 2009 22:22:47 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "On Mon, Sep 7, 2009 at 10:05 PM, Karl Denninger<[email protected]> wrote:\n> The individual boolean fields don't kill me and in terms of some of the\n> application issues they're actually rather easy to code for.\n>\n> The problem with re-coding for them is extensibility (by those who install\n> and administer the package); a mask leaves open lots of extra bits for\n> \"site-specific\" use, where hard-coding booleans does not, and since the\n> executable is a binary it instantly becomes a huge problem for everyone but\n> me.\n>\n> It does appear, however, that a bitfield doesn't evaluate any differently\n> than does an integer used with a mask, so there you have it..... it is what\n> it is, and if I want this sort of selectivity in the search I have no\n> choice.\n\nYou can always create 32 boolean fields and only use some of them,\nleaving the others for site-specific use...\n\n...Robert\n", "msg_date": "Mon, 7 Sep 2009 22:54:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Sep 7, 2009 at 10:05 PM, Karl Denninger<[email protected]> wrote:\n>> The problem with re-coding for them is extensibility (by those who install\n>> and administer the package); a mask leaves open lots of extra bits for\n>> \"site-specific\" use, where hard-coding booleans does not,\n\n> You can always create 32 boolean fields and only use some of them,\n> leaving the others for site-specific use...\n\nIndeed. Why is \"user_defined_flag_24\" so much worse that \"mask &\n16777216\" ? 
Especially when the day comes that you need to add one more\nsystem-defined flag bit?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Sep 2009 23:36:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types " }, { "msg_contents": "Alvaro Herrera wrote:\n> Karl Denninger escribi?:\n> \n> > The individual boolean fields don't kill me and in terms of some of the\n> > application issues they're actually rather easy to code for.\n> > \n> > The problem with re-coding for them is extensibility (by those who\n> > install and administer the package); a mask leaves open lots of extra\n> > bits for \"site-specific\" use, where hard-coding booleans does not, and\n> > since the executable is a binary it instantly becomes a huge problem for\n> > everyone but me.\n> \n> Did you try hiding the bitmask operations inside a function as Tom\n> suggested?\n\nYes. In addition, functions that are part of expression indexes do get\ntheir own optimizer statistics, so it does allow you to get optimizer\nstats for your test without having to use booleans.\n\nI see this documented in the 8.0 release notes:\n\n * \"ANALYZE\" now collects statistics for expression indexes (Tom)\n Expression indexes (also called functional indexes) allow users\n to index not just columns but the results of expressions and\n function calls. With this release, the optimizer can gather and\n use statistics about the contents of expression indexes. This will\n greatly improve the quality of planning for queries in which an\n expression index is relevant.\n\nIs this in our main documentation somewhere?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 17 Sep 2009 22:29:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Bruce Momjian wrote:\n> Alvaro Herrera wrote:\n> \n>> Karl Denninger escribi?:\n>>\n>> \n>>> The individual boolean fields don't kill me and in terms of some of the\n>>> application issues they're actually rather easy to code for.\n>>>\n>>> The problem with re-coding for them is extensibility (by those who\n>>> install and administer the package); a mask leaves open lots of extra\n>>> bits for \"site-specific\" use, where hard-coding booleans does not, and\n>>> since the executable is a binary it instantly becomes a huge problem for\n>>> everyone but me.\n>>> \n>> Did you try hiding the bitmask operations inside a function as Tom\n>> suggested?\n>> \n>\n> Yes. In addition, functions that are part of expression indexes do get\n> their own optimizer statistics, so it does allow you to get optimizer\n> stats for your test without having to use booleans.\n>\n> I see this documented in the 8.0 release notes:\n>\n> * \"ANALYZE\" now collects statistics for expression indexes (Tom)\n> Expression indexes (also called functional indexes) allow users\n> to index not just columns but the results of expressions and\n> function calls. With this release, the optimizer can gather and\n> use statistics about the contents of expression indexes. This will\n> greatly improve the quality of planning for queries in which an\n> expression index is relevant.\n>\n> Is this in our main documentation somewhere?\n>\n> \nInteresting... 
declaring this:\n\ncreate function ispermitted(text, integer) returns boolean as $$\nselect permission & $2 = permission from forum where forum.name=$1;\n$$ Language SQL STABLE;\n\nthen calling it with \"ispermitted(post.forum, '4')\" as one of the terms\ncauses the query optimizer to treat it as a FILTER instead of a nested\nloop, and it works as expected.\n\nHowever, I don't think I can index that - right - since there are two\nvariables involved which are not part of the table being indexed.....\n\n-- Karl", "msg_date": "Thu, 17 Sep 2009 22:10:07 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Karl Denninger wrote:\n> > Yes. In addition, functions that are part of expression indexes do get\n> > their own optimizer statistics, so it does allow you to get optimizer\n> > stats for your test without having to use booleans.\n> >\n> > I see this documented in the 8.0 release notes:\n> >\n> > * \"ANALYZE\" now collects statistics for expression indexes (Tom)\n> > Expression indexes (also called functional indexes) allow users\n> > to index not just columns but the results of expressions and\n> > function calls. With this release, the optimizer can gather and\n> > use statistics about the contents of expression indexes. This will\n> > greatly improve the quality of planning for queries in which an\n> > expression index is relevant.\n> >\n> > Is this in our main documentation somewhere?\n> >\n> > \n> Interesting... declaring this:\n> \n> create function ispermitted(text, integer) returns boolean as $$\n> select permission & $2 = permission from forum where forum.name=$1;\n> $$ Language SQL STABLE;\n> \n> then calling it with \"ispermitted(post.forum, '4')\" as one of the terms\n> causes the query optimizer to treat it as a FILTER instead of a nested\n> loop, and it works as expected.\n> \n> However, I don't think I can index that - right - since there are two\n> variables involved which are not part of the table being indexed.....\n\nThat should index fine. It is an _expression_ index so it can be pretty\ncomplicated.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 17 Sep 2009 23:13:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Bruce Momjian wrote:\n> > Interesting... declaring this:\n> > \n> > create function ispermitted(text, integer) returns boolean as $$\n> > select permission & $2 = permission from forum where forum.name=$1;\n> > $$ Language SQL STABLE;\n> > \n> > then calling it with \"ispermitted(post.forum, '4')\" as one of the terms\n> > causes the query optimizer to treat it as a FILTER instead of a nested\n> > loop, and it works as expected.\n> > \n> > However, I don't think I can index that - right - since there are two\n> > variables involved which are not part of the table being indexed.....\n> \n> That should index fine. It is an _expression_ index so it can be pretty\n> complicated.\n\nOh, you have to use the exact same syntax in there WHERE clause for the\nexpression index to be used, then use EXPLAIN to see if the index is\nused.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Thu, 17 Sep 2009 23:19:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Bruce Momjian wrote:\n> Karl Denninger wrote:\n> \n>>> Yes. In addition, functions that are part of expression indexes do get\n>>> their own optimizer statistics, so it does allow you to get optimizer\n>>> stats for your test without having to use booleans.\n>>>\n>>> I see this documented in the 8.0 release notes:\n>>>\n>>> * \"ANALYZE\" now collects statistics for expression indexes (Tom)\n>>> Expression indexes (also called functional indexes) allow users\n>>> to index not just columns but the results of expressions and\n>>> function calls. With this release, the optimizer can gather and\n>>> use statistics about the contents of expression indexes. This will\n>>> greatly improve the quality of planning for queries in which an\n>>> expression index is relevant.\n>>>\n>>> Is this in our main documentation somewhere?\n>>>\n>>> \n>>> \n>> Interesting... declaring this:\n>>\n>> create function ispermitted(text, integer) returns boolean as $$\n>> select permission & $2 = permission from forum where forum.name=$1;\n>> $$ Language SQL STABLE;\n>>\n>> then calling it with \"ispermitted(post.forum, '4')\" as one of the terms\n>> causes the query optimizer to treat it as a FILTER instead of a nested\n>> loop, and it works as expected.\n>>\n>> However, I don't think I can index that - right - since there are two\n>> variables involved which are not part of the table being indexed.....\n>> \n>\n> That should index fine. It is an _expression_ index so it can be pretty\n> complicated\nIt does not appear I can create an index on that (not that it appears to\nbe necessary for decent performance)\n\ncreate index forum_ispermitted on forum using btree(ispermitted(name,\npermission));\nERROR: functions in index expression must be marked IMMUTABLE\nticker=#\n\nThe function is of course of class STATIC.\n\n-- Karl", "msg_date": "Thu, 17 Sep 2009 22:49:28 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "Bruce Momjian wrote:\n> Alvaro Herrera wrote:\n> > Karl Denninger escribi?:\n> > \n> > > The individual boolean fields don't kill me and in terms of some of the\n> > > application issues they're actually rather easy to code for.\n> > > \n> > > The problem with re-coding for them is extensibility (by those who\n> > > install and administer the package); a mask leaves open lots of extra\n> > > bits for \"site-specific\" use, where hard-coding booleans does not, and\n> > > since the executable is a binary it instantly becomes a huge problem for\n> > > everyone but me.\n> > \n> > Did you try hiding the bitmask operations inside a function as Tom\n> > suggested?\n> \n> Yes. In addition, functions that are part of expression indexes do get\n> their own optimizer statistics, so it does allow you to get optimizer\n> stats for your test without having to use booleans.\n> \n> I see this documented in the 8.0 release notes:\n> \n> * \"ANALYZE\" now collects statistics for expression indexes (Tom)\n> Expression indexes (also called functional indexes) allow users\n> to index not just columns but the results of expressions and\n> function calls. With this release, the optimizer can gather and\n> use statistics about the contents of expression indexes. 
This will\n> greatly improve the quality of planning for queries in which an\n> expression index is relevant.\n> \n> Is this in our main documentation somewhere?\n\nAdded with attached, applied patch.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. +", "msg_date": "Mon, 22 Feb 2010 21:47:29 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" }, { "msg_contents": "On Sep 7, 2009, at 7:05 PM, Karl Denninger wrote:\n\nThe individual boolean fields don't kill me and in terms of some of the application issues they're actually rather easy to code for.\n\nThe problem with re-coding for them is extensibility (by those who install and administer the package); a mask leaves open lots of extra bits for \"site-specific\" use, where hard-coding booleans does not, and since the executable is a binary it instantly becomes a huge problem for everyone but me.\n\nIt does appear, however, that a bitfield doesn't evaluate any differently than does an integer used with a mask, so there you have it..... it is what it is, and if I want this sort of selectivity in the search I have no choice.\n\nPerhaps, use a view to encapsulate the extensible bit fields? Then custom installations just modify the view? I haven't thought through that too far, but it might work.\n\n-- Karl\n", "msg_date": "Tue, 23 Feb 2010 15:51:43 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - \"bit\" data types" } ]
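A minimal SQL sketch of the expression-index idea discussed in the thread above. The table and column names (post, permission) and the bit value 4 are illustrative assumptions rather than Karl's actual schema; the difference from the ispermitted() function shown earlier is that this wrapper reads only its arguments, so it can be declared IMMUTABLE, can therefore be used in an index, and ANALYZE will then gather statistics on the indexed expression.

-- hypothetical helper: true when every bit in wanted is set in mask
create function mask_matches(mask integer, wanted integer) returns boolean as $$
select ($1 & $2) = $2;
$$ language sql immutable;

-- expression index; a subsequent ANALYZE collects per-expression statistics
create index post_perm4_idx on post (mask_matches(permission, 4));
analyze post;

-- the query has to repeat the same expression for the index and its statistics to be used
select * from post where mask_matches(permission, 4);

A partial index (... where mask_matches(permission, 4)) is another option when only the matching rows are ever fetched; either way the planner gets a real selectivity estimate instead of guessing about the bitmask test.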
[ { "msg_contents": "Hi All,\n\nI compile PostgreSQL-8.4.0 with icc and --enable profiling option. I \nran command psql and create table and make a select then I quit psql \nand go to .../data/gprof folder there are some folders named with \nnumbers (I think they are query ids); all of them are empty. How can I \nsolve this issue?\n\nReydan\n", "msg_date": "Mon, 7 Sep 2009 18:11:14 +0300", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "Using Gprof with Postgresql" }, { "msg_contents": "postgresql was faster than the files ;)\n\n(sorry, I just couldn't resist).\n", "msg_date": "Mon, 7 Sep 2009 16:32:28 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Gprof with Postgresql" }, { "msg_contents": "On Mon, Sep 7, 2009 at 8:11 AM, Reydan Cankur <[email protected]>wrote:\n\n> Hi All,\n>\n> I compile PostgreSQL-8.4.0 with icc and --enable profiling option. I ran\n> command psql and create table and make a select then I quit psql and go to\n> .../data/gprof folder there are some folders named with numbers (I think\n> they are query ids);\n\n\n\nThey are the process ids (PIDs) of the backend processes.\n\n\n\n> all of them are empty. How can I solve this issue?\n>\n\n\nDoes your compiler work for profiling in general? Can you compile other\nsimpler programs for profiling with icc and have it work for them? If so,\nhow?\n\nI thought gprof was specific to GNU compilers.\n\nJeff\n", "msg_date": "Mon, 7 Sep 2009 10:18:31 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Gprof with Postgresql" }, { "msg_contents": "Reydan Cankur <[email protected]> writes:\n> I compile PostgreSQL-8.4.0 with icc and --enable profiling option. I \n> ran command psql and create table and make a select then I quit psql \n> and go to .../data/gprof folder there are some folders named with \n> numbers (I think they are query ids); all of them are empty. How can I \n> solve this issue?\n\nWell, you could use gcc ... 
icc claims to support the -pg switch but\n> the above sounds like it just ignores it.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Tue, 8 Sep 2009 15:31:53 +0300", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using Gprof with Postgresql " }, { "msg_contents": "\n\n>\n> I just compiled it with gcc and produces the gmon.out file for every \n> process; by the way I am running below script in order to produce \n> readable .out files\n>\n> gprof .../pgsql/bin/postgres gmon.out > createtable2.out\n>\n> is postgres the right executable?\n>\n> regards\n> reydan\n>\n> On Sep 7, 2009, at 8:24 PM, Tom Lane wrote:\n>>\n>> Well, you could use gcc ... icc claims to support the -pg switch but\n>> the above sounds like it just ignores it.\n>>\n>> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Tue, 8 Sep 2009 15:35:35 +0300", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using Gprof with Postgresql " }, { "msg_contents": "> I just compiled it with gcc and produces the gmon.out file for every \n> process; by the way I am running below script in order to produce \n> readable .out files\n>\n> gprof .../pgsql/bin/postgres gmon.out > createtable2.out\n>\n> is postgres the right executable?\n>\n> regards\n> reydan\n\n\tOff topic, but hace you tried oprofile ? It's excellent...\n", "msg_date": "Tue, 08 Sep 2009 14:44:51 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Gprof with Postgresql" }, { "msg_contents": "Pierre Fr�d�ric Caillaud wrote:\n>> I just compiled it with gcc and produces the gmon.out file for every \n>> process; by the way I am running below script in order to produce \n>> readable .out files\n>>\n>> gprof .../pgsql/bin/postgres gmon.out > createtable2.out\n>>\n>> is postgres the right executable?\n>>\n>> regards\n>> reydan\n> \n> Off topic, but hace you tried oprofile ? It's excellent...\n\nI find valgrind to be an excellent profiling tool. It has the advantage that it runs on an unmodified executable (using a virtual machine). You can compile postgres the regular way, start the system up, and then create a short shell script called \"postgres\" that you put in place of the original executable that invokes valgrind on the original executable. Then when postgres starts up your backend, you have just one valgrind process running, rather than the whole Postgres system.\n\nValgrind does 100% tracing of the program rather than statistical sampling, and since it runs in a pure virtual machine, it can detect almost all memory corruption and leaks.\n\nThe big disadvantage of valgrind is that it slows the process WAY down, like by a factor of 5-10 on CPU. For a pure CPU process, it doesn't screw up your stats, but if a process is mixed CPU and I/O, the CPU will appear to dominate.\n\nCraig\n\n", "msg_date": "Tue, 08 Sep 2009 07:30:12 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using Gprof with Postgresql" } ]
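The profiling workflow from this thread, condensed into a rough command sequence. This is a sketch only: the install prefix is an assumption, the output file name simply follows the command quoted above, and gcc is used because icc appeared to ignore -pg.

# build from source with profiling support (gcc)
./configure --prefix=/usr/local/pgsql --enable-profiling CC=gcc
make && make install

# start the server, run the workload through psql, then disconnect/stop the
# server; each exiting backend writes gmon.out under $PGDATA/gprof/<pid>/,
# where <pid> is the backend process id (not a query id)
gprof /usr/local/pgsql/bin/postgres $PGDATA/gprof/<pid>/gmon.out > createtable2.out

The binary handed to gprof has to be the one that produced the profile data, which for a backend profile is the postgres executable itself.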
[ { "msg_contents": "Hello,\n\ncould you help me with joined query from partitioned table, please? I \nhave a table \"data\" with partitions by period_id\n\nCREATE TABLE data\n(\n period_id smallint NOT NULL DEFAULT 0,\n store_id smallint NOT NULL DEFAULT 0,\n product_id integer NOT NULL DEFAULT 0,\n s_pcs real NOT NULL DEFAULT 0,\n s_val real NOT NULL DEFAULT 0\n)\n\nCONSTRAINT data_561_period_id_check CHECK (period_id = 561)\nCONSTRAINT data_562_period_id_check CHECK (period_id = 562)\n...\n\nWhen I run a simple query with a condition period_id = something I get \nbest query plan:\n\nexplain select sum(s_pcs),sum(s_val)\nfrom data d inner join periods p on d.period_id=p.period_id\n where p.period_id=694;\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Aggregate (cost=214028.71..214028.72 rows=1 width=8)\n -> Nested Loop (cost=0.00..181511.71 rows=6503400 width=8)\n -> Index Scan using pk_periods on periods p (cost=0.00..8.27 \nrows=1 width=2)\n Index Cond: (period_id = 694)\n -> Append (cost=0.00..116469.44 rows=6503400 width=10)\n -> Seq Scan on data_694 d (cost=0.00..116446.44 \nrows=6503395 width=10)\n Filter: (d.period_id = 694)\n(8 rows)\n\n\nbut when I try make a condition by join table, the query plan is not \noptimal:\n\n\nselect period_id from periods where y=2009 and w=14;\n period_id\n-----------\n 704\n(1 row)\n\n\nexplain select sum(s_pcs),sum(s_val)\nfrom data d inner join periods p on d.period_id=p.period_id\nwhere p.y=2009 and p.w=14; \n QUERY PLAN\n----------------------------------------------------------------------------------------\n Aggregate (cost=15313300.27..15313300.28 rows=1 width=8)\n -> Hash Join (cost=8.92..15293392.89 rows=3981476 width=8)\n Hash Cond: (d.period_id = p.period_id)\n -> Append (cost=0.00..12267462.15 rows=796295215 width=10)\n -> Seq Scan on data d (cost=0.00..20.40 rows=1040 width=10)\n -> Seq Scan on data_561 d (cost=0.00..66903.25 \nrows=4342825 width=10)\n -> Seq Scan on data_562 d (cost=0.00..73481.02 \nrows=4769802 width=10)\n -> Seq Scan on data_563 d (cost=0.00..73710.95 \nrows=4784695 width=10)\n -> Seq Scan on data_564 d (cost=0.00..71869.75 \nrows=4665175 width=10)\n -> Seq Scan on data_565 d (cost=0.00..72850.37 \nrows=4728837 width=10)\n ...\n\n\nI get same result with constraint_exclusion = partition and \nconstraint_exclusion = on.\nDo you have any idea where can be a problem?\n\nFor simple query the partitions works perfect on this table (about \n2*10^9 records) but the joined query is an problem.\n\n\nThank you very much, Vrata\n\n\n\n\n\n\nHello,\n\ncould you help me with joined query from partitioned table, please? 
I\nhave a table \"data\" with partitions  by period_id\n\nCREATE TABLE data\n(\n  period_id smallint NOT NULL DEFAULT 0,\n  store_id smallint NOT NULL DEFAULT 0,\n  product_id integer NOT NULL DEFAULT 0,\n  s_pcs real NOT NULL DEFAULT 0,\n  s_val real NOT NULL DEFAULT 0\n)\n\nCONSTRAINT data_561_period_id_check CHECK (period_id = 561)\nCONSTRAINT data_562_period_id_check CHECK (period_id = 562)\n...\n\nWhen I run a simple query with a condition period_id = something I get\nbest query plan:\n\nexplain select sum(s_pcs),sum(s_val)\nfrom data d inner join periods p on d.period_id=p.period_id\n where p.period_id=694;\n                                       QUERY PLAN\n----------------------------------------------------------------------------------------\n Aggregate  (cost=214028.71..214028.72 rows=1 width=8)\n   ->  Nested Loop  (cost=0.00..181511.71 rows=6503400 width=8)\n         ->  Index Scan using pk_periods on periods p \n(cost=0.00..8.27 rows=1 width=2)\n               Index Cond: (period_id = 694)\n         ->  Append  (cost=0.00..116469.44 rows=6503400 width=10)\n                     ->  Seq Scan on data_694 d \n(cost=0.00..116446.44 rows=6503395 width=10)\n                     Filter: (d.period_id = 694)\n(8 rows)\n\n\nbut when I try make a condition by join table, the query plan is not\noptimal:\n\n\nselect period_id from periods where y=2009 and w=14;\n period_id\n-----------\n       704\n(1 row)\n\n\nexplain select sum(s_pcs),sum(s_val) \nfrom data d inner join periods p on d.period_id=p.period_id\nwhere p.y=2009 and p.w=14;                                 \n                                       QUERY PLAN\n----------------------------------------------------------------------------------------\n Aggregate  (cost=15313300.27..15313300.28 rows=1 width=8)\n   ->  Hash Join  (cost=8.92..15293392.89 rows=3981476 width=8)\n         Hash Cond: (d.period_id = p.period_id)\n         ->  Append  (cost=0.00..12267462.15 rows=796295215 width=10)\n               ->  Seq Scan on data d  (cost=0.00..20.40 rows=1040\nwidth=10)\n               ->  Seq Scan on data_561 d  (cost=0.00..66903.25\nrows=4342825 width=10)\n               ->  Seq Scan on data_562 d  (cost=0.00..73481.02\nrows=4769802 width=10)\n               ->  Seq Scan on data_563 d  (cost=0.00..73710.95\nrows=4784695 width=10)\n               ->  Seq Scan on data_564 d  (cost=0.00..71869.75\nrows=4665175 width=10)\n               ->  Seq Scan on data_565 d  (cost=0.00..72850.37\nrows=4728837 width=10)\n   ...\n\n\nI get same result with constraint_exclusion = partition and\nconstraint_exclusion = on.\nDo you have any idea where can be a problem?\n\nFor simple query the partitions works perfect on this table (about\n2*10^9 records) but the joined query is an problem.\n\n\nThank you very much, Vrata", "msg_date": "Mon, 07 Sep 2009 17:39:12 +0200", "msg_from": "Vratislav Benes <[email protected]>", "msg_from_op": true, "msg_subject": "PSQL 8.4 - partittions - join tables - not optimal plan" }, { "msg_contents": "In response to Vratislav Benes :\n> but when I try make a condition by join table, the query plan is not optimal:\n> \n> \n> select period_id from periods where y=2009 and w=14;\n> period_id\n> -----------\n> 704\n> (1 row)\n> \n> \n> explain select sum(s_pcs),sum(s_val)\n> from data d inner join periods p on d.period_id=p.period_id\n> where p.y=2009 and p.w=14; \n\nHow about\n\nselect sum(s_pcs),sum(s_val)\nfrom data d inner join periods p on d.period_id=p.period_id\nwhere p.y=2009 and p.w=14\nand p.period_id in (select 
period_id from periods where y=2009 and w=14);\n\nUntested.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n", "msg_date": "Fri, 11 Sep 2009 07:33:08 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PSQL 8.4 - partittions - join tables - not optimal plan" } ]
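The underlying limitation in this thread is that constraint exclusion only prunes partitions when the period_id restriction is a constant known at plan time; a value that is only available through the join to periods does not qualify, so every data_NNN child is scanned. Besides the IN-subquery suggestion above, a common workaround is to resolve the period first and then query the partitioned table with the literal value, sketched here with the numbers from the example:

-- step 1: find the period for the week in question
select period_id from periods where y=2009 and w=14;   -- returns 704 above

-- step 2: aggregate with the literal, so constraint exclusion keeps only data_704
select sum(s_pcs), sum(s_val)
from data
where period_id = 704;

This trades an extra round trip for a single-partition plan like the first EXPLAIN in the thread.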
[ { "msg_contents": "\"[email protected]\" wrote:\n> On Sat, Sep 5, 2009 at 8:19 PM, Karl Denninger<[email protected]> wrote:\n> > There was a previous thread and I referenced it. I don't have the \n> > other one\n> > in my email system any more to follow up to it.\n> >\n> > I give up; the attack-dog crowd has successfully driven me off. Ciao.\n> \n> Perhaps I'm biased by knowing some of the people involved, but I don't\n> think anyone on this thread has been anything but polite.\nI use several online forums and this -- hands down -- is the best: not \nonly for politeness even when the information I provided was misleading \nor the question I asked was, in retrospect, Duh? but also for 1) speed \nof response, 2) breadth of ideas and 3) accuracy of information -- often \non complex issues with no simple solution from folk who probably have \nmore to do than sit around waiting for the next post. My thanks to the \nknowledgeable people on this forum.\n\nBrian\n\n", "msg_date": "Mon, 07 Sep 2009 18:22:37 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner question - \"bit\" data types" } ]
[ { "msg_contents": "Hello,\n\nI have a following query (autogenerated by Django)\n\nSELECT activity_activityevent.id, activity_activityevent.user_id, activity_activityevent.added_on\nFROM activity_activityevent \nWHERE activity_activityevent.user_id IN (\n SELECT U0.user_id \n FROM profile U0 \n INNER JOIN profile_friends U1 \n ON U0.user_id = U1.to_profile_id\n WHERE U1.from_profile_id = 5\n) \nORDER BY activity_activityevent.added_on DESC LIMIT 10\n\n\nWhen I run EXPLAIN ANALYZE with my default settings (seq scan is on,\nrandom_page_cost = 4) I get the following result:\n\nLimit (cost=4815.62..4815.65 rows=10 width=202) (actual time=332.938..332.977 rows=10 loops=1)\n -> Sort (cost=4815.62..4816.35 rows=292 width=202) (actual time=332.931..332.945 rows=10 loops=1)\n Sort Key: activity_activityevent.added_on\n Sort Method: top-N heapsort Memory: 19kB\n -> Hash IN Join (cost=2204.80..4809.31 rows=292 width=202) (actual time=12.856..283.916 rows=15702 loops=1)\n Hash Cond: (activity_activityevent.user_id = u0.user_id)\n -> Seq Scan on activity_activityevent (cost=0.00..2370.43 rows=61643 width=202) (actual time=0.020..126.129 rows=61643 loops=1)\n -> Hash (cost=2200.05..2200.05 rows=380 width=8) (actual time=12.777..12.777 rows=424 loops=1)\n -> Nested Loop (cost=11.20..2200.05 rows=380 width=8) (actual time=0.260..11.594 rows=424 loops=1)\n -> Bitmap Heap Scan on profile_friends u1 (cost=11.20..62.95 rows=380 width=4) (actual time=0.228..1.202 rows=424 loops=1)\n Recheck Cond: (from_profile_id = 5)\n -> Bitmap Index Scan on profile_friends_from_profile_id_key (cost=0.00..11.10 rows=380 width=0) (actual time=0.208..0.208 rows=424 loops=1)\n Index Cond: (from_profile_id = 5)\n -> Index Scan using profile_pkey on profile u0 (cost=0.00..5.61 rows=1 width=4) (actual time=0.012..0.015 rows=1 loops=424)\n Index Cond: (u0.user_id = u1.to_profile_id)\nTotal runtime: 333.190 ms\n\nBut when I disable seq scan or set random_page_cost to 1.2 (higher\nvalues doesn't change the plan), postgres starts using index and query\nruns two times faster:\n\nLimit (cost=9528.36..9528.38 rows=10 width=202) (actual time=165.047..165.090 rows=10 loops=1)\n -> Sort (cost=9528.36..9529.09 rows=292 width=202) (actual time=165.042..165.058 rows=10 loops=1)\n Sort Key: activity_activityevent.added_on\n Sort Method: top-N heapsort Memory: 19kB\n -> Nested Loop (cost=2201.00..9522.05 rows=292 width=202) (actual time=13.074..126.209 rows=15702 loops=1)\n -> HashAggregate (cost=2201.00..2204.80 rows=380 width=8) (actual time=12.996..14.131 rows=424 loops=1)\n -> Nested Loop (cost=11.20..2200.05 rows=380 width=8) (actual time=0.263..11.665 rows=424 loops=1)\n -> Bitmap Heap Scan on profile_friends u1 (cost=11.20..62.95 rows=380 width=4) (actual time=0.232..1.181 rows=424 loops=1)\n Recheck Cond: (from_profile_id = 5)\n -> Bitmap Index Scan on profile_friends_from_profile_id_key (cost=0.00..11.10 rows=380 width=0) (actual time=0.210..0.210 rows=424 loops=1)\n Index Cond: (from_profile_id = 5)\n -> Index Scan using profile_pkey on profile u0 (cost=0.00..5.61 rows=1 width=4) (actual time=0.013..0.016 rows=1 loops=424)\n Index Cond: (u0.user_id = u1.to_profile_id)\n -> Index Scan using activity_activityevent_user_id on activity_activityevent (cost=0.00..18.82 rows=35 width=202) (actual time=0.014..0.130 rows=37 loops=424)\n Index Cond: (activity_activityevent.user_id = u0.user_id)\nTotal runtime: 165.323 ms\n\n\nCan anyone enlighten me? 
Should I set random_page_cost to 1.2\npermanently (I feel this is not a really good idea in my case)?\n\nEugene\n\n", "msg_date": "Tue, 08 Sep 2009 18:12:21 +0400", "msg_from": "Eugene Morozov <[email protected]>", "msg_from_op": true, "msg_subject": "Forcing postgresql to use an index" }, { "msg_contents": "Teach it not to generate \"WHERE ... IN (subquery)\"; that construct can be quite\nslow on PostgreSQL. Use joins instead.\n\nIt looks like the planner was wrong about the row count in one place: Hash IN Join\n (cost=2204.80..4809.31 rows=292 width=202) (actual\ntime=12.856..283.916 rows=15702 loops=1)\n\nI have no idea why; probably more knowledgeable people will know more\nabout that. But overall, all the other stats seem to be okay.\nWhat is default_statistics_target set to in your postgresql.conf?\n\nOne more thing: what version and platform are you on?\n", "msg_date": "Tue, 8 Sep 2009 16:33:04 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing postgresql to use an index" }, { "msg_contents": "Grzegorz Jaśkiewicz <[email protected]> writes:\n\n> Teach it not to generate \"WHERE ... IN (subquery)\"; that construct can be quite\n> slow on PostgreSQL. Use joins instead.\n\nOK, I've split the query in two (I can't make Django generate a JOIN in this\ncase) and it always uses the index now. This immediately opened the road for\nother optimizations. Thanks!\n\n>\n> It looks like the planner was wrong about the row count in one place: Hash IN Join\n> (cost=2204.80..4809.31 rows=292 width=202) (actual\n> time=12.856..283.916 rows=15702 loops=1)\n>\n> I have no idea why; probably more knowledgeable people will know more\n> about that. But overall, all the other stats seem to be okay.\n> What is default_statistics_target set to in your postgresql.conf?\n>\n> One more thing: what version and platform are you on?\n\nPostgreSQL 8.3.7, Ubuntu 8.10\n\n", "msg_date": "Tue, 08 Sep 2009 20:00:25 +0400", "msg_from": "Eugene Morozov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing postgresql to use an index" }, { "msg_contents": "Eugene Morozov <[email protected]> wrote: \n \n> Can anyone enlighten me? Should I set random_page_cost to 1.2\n> permanently (I feel this is not a really good idea in my case)?\n \nFor it to pass as many rows as it did in the time that it did, most or\nall of the \"reads\" were cached. If this is typically the case, at\nleast for the queries for which performance is most critical, your\nchange makes sense as a permanent setting. 
In fact, you might want to\ngo even further -- there have been many reports of people getting good\nperformance on fully-cached systems by dropping both random_page_cost\nand seq_page_cost to 0.1, so that the optimizer better estimates the\nrelative cost of \"disk access\" versus CPU-based operations.\n \n-Kevin\n", "msg_date": "Tue, 08 Sep 2009 11:30:04 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing postgresql to use an index" }, { "msg_contents": "On Tue, Sep 8, 2009 at 8:12 AM, Eugene Morozov<[email protected]> wrote:\n> Hello,\n>\n> I have a following query (autogenerated by Django)\n>\n> SELECT activity_activityevent.id, activity_activityevent.user_id, activity_activityevent.added_on\n> FROM activity_activityevent\n> WHERE activity_activityevent.user_id IN (\n>   SELECT U0.user_id\n>   FROM profile U0\n>   INNER JOIN profile_friends U1\n>   ON U0.user_id = U1.to_profile_id\n>   WHERE U1.from_profile_id = 5\n> )\n> ORDER BY activity_activityevent.added_on DESC LIMIT 10\n>\n>\n> When I run EXPLAIN ANALYZE with my default settings (seq scan is on,\n> random_page_cost = 4) I get the following result:\n>\n> Limit  (cost=4815.62..4815.65 rows=10 width=202) (actual time=332.938..332.977 rows=10 loops=1)\n>  ->  Sort  (cost=4815.62..4816.35 rows=292 width=202) (actual time=332.931..332.945 rows=10 loops=1)\n>        Sort Key: activity_activityevent.added_on\n>        Sort Method:  top-N heapsort  Memory: 19kB\n>        ->  Hash IN Join  (cost=2204.80..4809.31 rows=292 width=202) (actual time=12.856..283.916 rows=15702 loops=1)\n>              Hash Cond: (activity_activityevent.user_id = u0.user_id)\n>              ->  Seq Scan on activity_activityevent  (cost=0.00..2370.43 rows=61643 width=202) (actual time=0.020..126.129 rows=61643 loops=1)\n>              ->  Hash  (cost=2200.05..2200.05 rows=380 width=8) (actual time=12.777..12.777 rows=424 loops=1)\n>                    ->  Nested Loop  (cost=11.20..2200.05 rows=380 width=8) (actual time=0.260..11.594 rows=424 loops=1)\n>                          ->  Bitmap Heap Scan on profile_friends u1  (cost=11.20..62.95 rows=380 width=4) (actual time=0.228..1.202 rows=424 loops=1)\n>                                Recheck Cond: (from_profile_id = 5)\n>                                ->  Bitmap Index Scan on profile_friends_from_profile_id_key  (cost=0.00..11.10 rows=380 width=0) (actual time=0.208..0.208 rows=424 loops=1)\n>                                      Index Cond: (from_profile_id = 5)\n>                          ->  Index Scan using profile_pkey on profile u0  (cost=0.00..5.61 rows=1 width=4) (actual time=0.012..0.015 rows=1 loops=424)\n>                                Index Cond: (u0.user_id = u1.to_profile_id)\n> Total runtime: 333.190 ms\n>\n> But when I disable seq scan or set random_page_cost to 1.2 (higher\n> values doesn't change the plan), postgres starts using index and query\n> runs two times faster:\n>\n> Limit  (cost=9528.36..9528.38 rows=10 width=202) (actual time=165.047..165.090 rows=10 loops=1)\n>  ->  Sort  (cost=9528.36..9529.09 rows=292 width=202) (actual time=165.042..165.058 rows=10 loops=1)\n>        Sort Key: activity_activityevent.added_on\n>        Sort Method:  top-N heapsort  Memory: 19kB\n>        ->  Nested Loop  (cost=2201.00..9522.05 rows=292 width=202) (actual time=13.074..126.209 rows=15702 loops=1)\n>              ->  HashAggregate  (cost=2201.00..2204.80 rows=380 width=8) (actual time=12.996..14.131 rows=424 loops=1)\n>                    ->  
Nested Loop  (cost=11.20..2200.05 rows=380 width=8) (actual time=0.263..11.665 rows=424 loops=1)\n>                          ->  Bitmap Heap Scan on profile_friends u1  (cost=11.20..62.95 rows=380 width=4) (actual time=0.232..1.181 rows=424 loops=1)\n>                                Recheck Cond: (from_profile_id = 5)\n>                                ->  Bitmap Index Scan on profile_friends_from_profile_id_key  (cost=0.00..11.10 rows=380 width=0) (actual time=0.210..0.210 rows=424 loops=1)\n>                                      Index Cond: (from_profile_id = 5)\n>                          ->  Index Scan using profile_pkey on profile u0  (cost=0.00..5.61 rows=1 width=4) (actual time=0.013..0.016 rows=1 loops=424)\n>                                Index Cond: (u0.user_id = u1.to_profile_id)\n>              ->  Index Scan using activity_activityevent_user_id on activity_activityevent  (cost=0.00..18.82 rows=35 width=202) (actual time=0.014..0.130 rows=37 loops=424)\n>                    Index Cond: (activity_activityevent.user_id = u0.user_id)\n> Total runtime: 165.323 ms\n>\n>\n> Can anyone enlighten me? Should I set random_page_cost to 1.2\n> permanently (I feel this is not a really good idea in my case)?\n\nOK, you need to look a little deeper at what's happening here. The\npgsql query planner looks at a lot of things to decide if to use seq\nscan or and index. If you look at your row estimates versus actual\nrows returned, you'll see they're off, sometimes by quite a bit.\nParticularly the ones near the top of your query plans. There are a\nfew things you can do to help out here. Increase default stats target\nand re-analyse, increase effective_cache_size to reflect the actual\nsize of data being cached by your OS / filesystem / pgsql, and then\nlowering random_page_cost.\n", "msg_date": "Tue, 8 Sep 2009 11:25:46 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing postgresql to use an index" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n\n> On Tue, Sep 8, 2009 at 8:12 AM, Eugene Morozov<[email protected]> wrote:\n> OK, you need to look a little deeper at what's happening here. The\n> pgsql query planner looks at a lot of things to decide if to use seq\n> scan or and index. If you look at your row estimates versus actual\n> rows returned, you'll see they're off, sometimes by quite a bit.\n> Particularly the ones near the top of your query plans. There are a\n> few things you can do to help out here. Increase default stats target\n> and re-analyse, increase effective_cache_size to reflect the actual\n> size of data being cached by your OS / filesystem / pgsql, and then\n> lowering random_page_cost.\n\nThanks to all who answered. Your answers were really helpful, I've\nsplit the query in two (couldn't make Django to use JOIN here) and was\nable to speed it up by a factor of 10!\nEugene\n\n", "msg_date": "Wed, 09 Sep 2009 09:34:37 +0400", "msg_from": "Eugene Morozov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Forcing postgresql to use an index" } ]
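For reference, a hand-written join form of the Django-generated query in this thread might look like the sketch below. It assumes that (from_profile_id, to_profile_id) pairs in profile_friends are unique and that every to_profile_id has a row in profile, which is what lets the extra join through profile be dropped without changing the result:

select ae.id, ae.user_id, ae.added_on
from activity_activityevent ae
join profile_friends pf on pf.to_profile_id = ae.user_id
where pf.from_profile_id = 5
order by ae.added_on desc limit 10;

The tuning advice above can likewise be tried per session before anything is made permanent, for example:

set effective_cache_size = '1GB';   -- placeholder: size it to the memory actually available for caching
set random_page_cost = 1.2;

Both values here are placeholders; the point is to make effective_cache_size reflect reality first and only then lower random_page_cost.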
[ { "msg_contents": "Folks,\n\nFor those of you who can't attend in person, we'll be streaming audio\nand video and having a chat for tonight's SFPUG meeting on how the\nplanner uses statistics.\n\nVideo:\n\nhttp://media.postgresql.org/sfpug/streaming\n\nChat:\n\nirc://irc.freenode.net/sfpug\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n", "msg_date": "Tue, 8 Sep 2009 10:30:21 -0700", "msg_from": "David Fetter <[email protected]>", "msg_from_op": true, "msg_subject": "Statistics and PostgreSQL: Streaming Webcast tonight" }, { "msg_contents": "On Tue, Sep 08, 2009 at 10:30:21AM -0700, David Fetter wrote:\n> Folks,\n> \n> For those of you who can't attend in person, we'll be streaming audio\n> and video and having a chat for tonight's SFPUG meeting on how the\n> planner uses statistics.\n> \n> Video:\n> \n> http://media.postgresql.org/sfpug/streaming\n> \n> Chat:\n> \n> irc://irc.freenode.net/sfpug\n\nAnd the important part is, the meeting starts at 7pm Pacific time.\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n", "msg_date": "Tue, 8 Sep 2009 10:32:53 -0700", "msg_from": "David Fetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [sfpug] Statistics and PostgreSQL: Streaming Webcast tonight" }, { "msg_contents": "On Tue, Sep 08, 2009 at 10:32:53AM -0700, David Fetter wrote:\n> On Tue, Sep 08, 2009 at 10:30:21AM -0700, David Fetter wrote:\n> > Folks,\n> > \n> > For those of you who can't attend in person, we'll be streaming audio\n> > and video and having a chat for tonight's SFPUG meeting on how the\n> > planner uses statistics.\n> > \n> > Video:\n> > \n> > http://media.postgresql.org/sfpug/streaming\n> > \n> > Chat:\n> > \n> > irc://irc.freenode.net/sfpug\n> \n> And the important part is, the meeting starts at 7pm Pacific time.\n\nSorry about the confusion, folks. It's 7:30pm Pacific time.\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n", "msg_date": "Tue, 8 Sep 2009 11:17:11 -0700", "msg_from": "David Fetter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [sfpug] Statistics and PostgreSQL: Streaming Webcast\n\ttonight" } ]
[ { "msg_contents": "Hi all I have a large table (>2billion rows) that's partitioned by date based \non an epoch int value. We're running a select max(id) where id is the PK. I \nhave a PK index on each of the partitions, no indexes at all on the base \ntable.\n\nIf I hit a partition table directly I get an index scan as expected:\n\nexplain select max(id) from pwreport.bigtab_2009_09;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.06..0.07 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..0.06 rows=1 width=8)\n -> Index Scan Backward using bigtab_2009_09_pk on bigtab_2009_09 \n(cost=0.00..12403809.95 rows=205659919 width=8)\n Filter: (id IS NOT NULL)\n(5 rows)\n\n\nHowever if I hit the base table I get a sequential scan on every partition as \nopposed to index scans:\nexplain select max(id) from pwreport.bigtab; \n QUERY PLAN \n---------------------------------------------------------------------------------------------------- \n Aggregate (cost=27214318.67..27214318.68 rows=1 width=8) \n -> Append (cost=0.00..24477298.53 rows=1094808053 width=8) \n -> Seq Scan on bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_12 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_11 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_10 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_09 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_08 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_07 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_06 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_05 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_04 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_03 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_02 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2011_01 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_12 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_11 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_10 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_09 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_08 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_07 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_06 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_05 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_04 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_03 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_02 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2010_01 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2009_12 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2009_11 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2009_10 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2009_09 bigtab (cost=0.00..4599227.19 \nrows=205659919 width=8) \n -> Seq Scan on bigtab_2009_07 bigtab (cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2009_06 bigtab 
(cost=0.00..11.70 rows=170 \nwidth=8) \n -> Seq Scan on bigtab_2009_05 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2009_04 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2009_03 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2009_02 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2009_01 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2008_12 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2008_11 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2008_10 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2008_09 bigtab (cost=0.00..11.70 rows=170 \nwidth=8)\n -> Seq Scan on bigtab_2009_08 bigtab (cost=0.00..19877615.04 \nrows=889141504 width=8)\n(43 rows)\n\n\nThoughts?\n\n\nThanks in advance...\n\n\nHi all I have a large table (>2billion rows) that's partitioned by date based on an epoch int value. We're running a select max(id) where id is the PK. I have a PK index on each of the partitions, no indexes at all on the base table.\nIf I hit a partition table directly I get an index scan as expected:\nexplain select max(id) from pwreport.bigtab_2009_09;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.06..0.07 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..0.06 rows=1 width=8)\n -> Index Scan Backward using bigtab_2009_09_pk on bigtab_2009_09 (cost=0.00..12403809.95 rows=205659919 width=8)\n Filter: (id IS NOT NULL)\n(5 rows)\nHowever if I hit the base table I get a sequential scan on every partition as opposed to index scans:\nexplain select max(id) from pwreport.bigtab; \n QUERY PLAN \n---------------------------------------------------------------------------------------------------- \n Aggregate (cost=27214318.67..27214318.68 rows=1 width=8) \n -> Append (cost=0.00..24477298.53 rows=1094808053 width=8) \n -> Seq Scan on bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_12 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_11 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_10 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_09 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_08 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_07 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_06 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_05 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_04 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_03 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_02 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2011_01 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_12 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_11 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_10 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_09 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_08 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_07 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_06 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on 
bigtab_2010_05 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_04 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_03 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_02 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2010_01 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2009_12 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2009_11 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2009_10 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2009_09 bigtab (cost=0.00..4599227.19 rows=205659919 width=8) \n -> Seq Scan on bigtab_2009_07 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2009_06 bigtab (cost=0.00..11.70 rows=170 width=8) \n -> Seq Scan on bigtab_2009_05 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2009_04 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2009_03 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2009_02 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2009_01 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2008_12 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2008_11 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2008_10 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2008_09 bigtab (cost=0.00..11.70 rows=170 width=8)\n -> Seq Scan on bigtab_2009_08 bigtab (cost=0.00..19877615.04 rows=889141504 width=8)\n(43 rows)\nThoughts?\nThanks in advance...", "msg_date": "Tue, 8 Sep 2009 14:53:18 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "partitioning max() sql not using index" }, { "msg_contents": "Kevin Kempter wrote:\n> Hi all I have a large table (>2billion rows) that's partitioned by date based \n> on an epoch int value. We're running a select max(id) where id is the PK. I \n> have a PK index on each of the partitions, no indexes at all on the base \n> table.\n> \n> If I hit a partition table directly I get an index scan as expected:\n\nThe planner isn't smart enough to create the plan you're expecting.\nThere was discussion and even a patch posted recently about that:\n\nhttp://archives.postgresql.org/pgsql-hackers/2009-07/msg01115.php\n\nIt seems the thread petered out, but the concept seems sane.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 09 Sep 2009 13:05:22 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning max() sql not using index" }, { "msg_contents": "In case you aren't comfortable running unreleased planner patches from \npgsql-hackers, a workaround was discussed on this list recently:\n\nhttp://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php\n\nOn Wed, 09 Sep 2009 06:05:22 -0400, Heikki Linnakangas \n<[email protected]> wrote:\n\n> Kevin Kempter wrote:\n>> Hi all I have a large table (>2billion rows) that's partitioned by date \n>> based\n>> on an epoch int value. We're running a select max(id) where id is the \n>> PK. 
I\n>> have a PK index on each of the partitions, no indexes at all on the base\n>> table.\n>>\n>> If I hit a partition table directly I get an index scan as expected:\n>\n> The planner isn't smart enough to create the plan you're expecting.\n> There was discussion and even a patch posted recently about that:\n>\n> http://archives.postgresql.org/pgsql-hackers/2009-07/msg01115.php\n>\n> It seems the thread petered out, but the concept seems sane.\n>\n\n\n\n-- \nUsing Opera's revolutionary e-mail client: http://www.opera.com/mail/\n", "msg_date": "Wed, 09 Sep 2009 09:56:53 -0400", "msg_from": "\"Kenneth Cox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning max() sql not using index" }, { "msg_contents": "On Wednesday 09 September 2009 07:56:53 Kenneth Cox wrote:\n> In case you aren't comfortable running unreleased planner patches from\n> pgsql-hackers, a workaround was discussed on this list recently:\n>\n> http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php\n>\n> On Wed, 09 Sep 2009 06:05:22 -0400, Heikki Linnakangas\n>\n> <[email protected]> wrote:\n> > Kevin Kempter wrote:\n> >> Hi all I have a large table (>2billion rows) that's partitioned by date\n> >> based\n> >> on an epoch int value. We're running a select max(id) where id is the\n> >> PK. I\n> >> have a PK index on each of the partitions, no indexes at all on the base\n> >> table.\n> >>\n> >> If I hit a partition table directly I get an index scan as expected:\n> >\n> > The planner isn't smart enough to create the plan you're expecting.\n> > There was discussion and even a patch posted recently about that:\n> >\n> > http://archives.postgresql.org/pgsql-hackers/2009-07/msg01115.php\n> >\n> > It seems the thread petered out, but the concept seems sane.\n\nExcellent! thanks this is quite helpful\n", "msg_date": "Wed, 9 Sep 2009 08:29:20 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning max() sql not using index" } ]
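[Editor's note on the workaround above] One common way to get the per-partition index scans by hand, until the planner can do it for the parent table, is to take the max of each child table directly and aggregate the results. A minimal sketch using the partition names that appear in the plans in this thread; the subquery alias is invented, and the UNION ALL list has to be maintained (or generated from the catalog) as partitions are added and dropped:

-- Each branch hits one child table directly, so it gets the same
-- "Index Scan Backward ... LIMIT 1" plan shown for bigtab_2009_09 above;
-- the outer max() then only compares a few dozen values.
SELECT max(m) AS max_id
FROM (
    SELECT max(id) AS m FROM pwreport.bigtab_2009_09
    UNION ALL
    SELECT max(id) AS m FROM pwreport.bigtab_2009_08
    -- ... one branch per remaining child table ...
    UNION ALL
    SELECT max(id) AS m FROM ONLY pwreport.bigtab   -- parent, in case it holds rows
) AS per_partition;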
[ { "msg_contents": "Hi,\n\nI am running PostgreSQL-8.4.0 on a SMP Server which has 32 processors \n(32X2=64 cores). I am working on database parallelism and I need to do \nprofiling in order to find the relevant parts to parallelize. I wrote \n15 queries which are performing select, sort, join, and aggregate \nfunctions and I want to profile these queries and try to understand \nwhere postgresql spends more time and then I will try to parallelize \nsource code(time spending parts) by using openMP. But I need guidance \nabout profiling tool, which is the best tool to profile queries. I \nused gprof but I want to profile with a more advanced tool. Options \nare oprofile and valgrind.\n\n1) I can not decide which best suits for query profiling. oprofile or \nvalgrind?\n\n2) Also for both I can not find documentation about profiling steps.\n\nPlease help,\nReydan\n", "msg_date": "Wed, 9 Sep 2009 13:15:08 +0300", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "Best Profiler for PostgreSQL" }, { "msg_contents": "On Wed, Sep 9, 2009 at 6:15 AM, Reydan Cankur <[email protected]> wrote:\n> Hi,\n>\n> I am running PostgreSQL-8.4.0 on a SMP Server which has 32 processors\n> (32X2=64 cores). I am working on database parallelism and I need to do\n> profiling in order to find the relevant parts to parallelize. I wrote 15\n> queries which are performing select, sort, join, and aggregate functions and\n>  I want to  profile these queries and try to understand where postgresql\n> spends more time and then I will try to parallelize source code(time\n> spending parts) by using openMP. But I need guidance about profiling tool,\n> which is the best tool to profile queries. I used gprof but I want to\n> profile with a more advanced tool. Options are oprofile and valgrind.\n>\n> 1) I can not decide which best suits for query profiling. oprofile or\n> valgrind?\n>\n> 2) Also for both I can not find documentation about profiling steps.\n\nYou might want to start with EXPLAIN ANALYZE. That will tell you\nwhere the time for each query is being spent.\n\n...Robert\n", "msg_date": "Thu, 10 Sep 2009 11:30:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best Profiler for PostgreSQL" } ]
[ { "msg_contents": "In the following query, We are seeing a sub-optimal plan being chosen. The\nfollowing results are after running the query several times (after each\nchange).\n\ndev1=# select version();\n version\n-----------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.13 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20070626 (Red Hat 4.1.2-14)\n\n\n\ndev1=# EXPLAIN ANALYZE SELECT SUM (revenue) as revenue FROM statsdaily WHERE\nofid = 38 AND date >= '2009-09-01' AND date <= '2999-01-01';\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=11796.19..11796.20 rows=1 width=8) (actual\ntime=28.598..28.599 rows=1 loops=1)\n -> Index Scan using statsdaily_unique_idx on statsdaily\n(cost=0.00..11783.65 rows=5017 width=8) (actual time=0.043..25.374 rows=3125\nloops=1)\n Index Cond: ((date >= '2009-09-01'::date) AND (date <=\n'2999-01-01'::date) AND (ofid = 38))\n Total runtime: 28.650 ms\n\n\ndev1=# set enable_indexscan to off;\n\n\ndev1=# EXPLAIN ANALYZE SELECT SUM (revenue) as revenue FROM statsdaily WHERE\nofid = '38' AND date >= '2009-09-01' AND date <= '2999-01-01';\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=13153.47..13153.48 rows=1 width=8) (actual\ntime=7.746..7.747 rows=1 loops=1)\n -> Bitmap Heap Scan on statsdaily (cost=3622.22..13140.92 rows=5017\nwidth=8) (actual time=0.941..4.865 rows=3125 loops=1)\n Recheck Cond: ((ofid = 38) AND (date >= '2009-09-01'::date))\n Filter: (date <= '2999-01-01'::date)\n -> Bitmap Index Scan on statsdaily_ofid_sept2009_idx\n(cost=0.00..3620.97 rows=5046 width=0) (actual time=0.551..0.551 rows=3125\nloops=1)\n Index Cond: (ofid = 38)\n Total runtime: 7.775 ms\n\n\ndefault_statistics_target = 100 (tried with 500, no change). Vacuum analyzed\nbefore initial query, and after each change to default_statistics_target.\n\n\nThe same query, with a different \"ofid\", will occasionally get the more\noptimal plan -- I assume that the distribution of data is the differentiator\nthere.\n\nIs there any other data I can provide to shed some light on this?\n\nThanks!\n\nIn the following query, We are seeing a sub-optimal plan being chosen. 
The following results are after running the query several times (after each change).dev1=# select version();                                                  version\n----------------------------------------------------------------------------------------------------------- PostgreSQL 8.2.13 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070626 (Red Hat 4.1.2-14)\ndev1=# EXPLAIN ANALYZE SELECT SUM (revenue) as revenue FROM statsdaily WHERE ofid = 38 AND date >= '2009-09-01' AND date <= '2999-01-01';                                                                       QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=11796.19..11796.20 rows=1 width=8) (actual time=28.598..28.599 rows=1 loops=1)\n   ->  Index Scan using statsdaily_unique_idx on statsdaily  (cost=0.00..11783.65 rows=5017 width=8) (actual time=0.043..25.374 rows=3125 loops=1)         Index Cond: ((date >= '2009-09-01'::date) AND (date <= '2999-01-01'::date) AND (ofid = 38))\n Total runtime: 28.650 msdev1=# set enable_indexscan to off;dev1=# EXPLAIN ANALYZE SELECT SUM (revenue) as revenue FROM statsdaily WHERE ofid = '38' AND date >= '2009-09-01' AND date <= '2999-01-01';\n                                                                        QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=13153.47..13153.48 rows=1 width=8) (actual time=7.746..7.747 rows=1 loops=1)   ->  Bitmap Heap Scan on statsdaily  (cost=3622.22..13140.92 rows=5017 width=8) (actual time=0.941..4.865 rows=3125 loops=1)\n         Recheck Cond: ((ofid = 38) AND (date >= '2009-09-01'::date))         Filter: (date <= '2999-01-01'::date)         ->  Bitmap Index Scan on statsdaily_ofid_sept2009_idx  (cost=0.00..3620.97 rows=5046 width=0) (actual time=0.551..0.551 rows=3125 loops=1)\n               Index Cond: (ofid = 38) Total runtime: 7.775 msdefault_statistics_target = 100 (tried with 500, no change). Vacuum analyzed before initial query, and after each change to default_statistics_target.\nThe same query, with a different \"ofid\", will occasionally get the more optimal plan -- I assume that the distribution of data is the differentiator there.Is there any other data I can provide to shed some light on this?\nThanks!", "msg_date": "Thu, 10 Sep 2009 07:34:48 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Sub-optimal plan chosen" }, { "msg_contents": "> default_statistics_target = 100 (tried with 500, no change). Vacuum\n> analyzed\n> before initial query, and after each change to default_statistics_target.\n\nModifying the statistics target is useful only if the estimates are\nseriously off, which is not your case - so it won't help, at least not\nreliably.\n\n> The same query, with a different \"ofid\", will occasionally get the more\n> optimal plan -- I assume that the distribution of data is the\n> differentiator\n> there.\n\nYes, the difference between costs of the two plans is quite small (11796\nvs. 
13153) so it's very sensible to data distribution.\n\n> Is there any other data I can provide to shed some light on this?\n\nYou may try to play with the 'cost' constants - see this:\n\nhttp://www.postgresql.org/docs/8.4/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS\n\nYou just need to modify them so that the bitmap index scan / bitmap heap\nscan is prefered to plain index scan.\n\nJust be careful - if set in the postgresql.conf, it affects all the\nqueries and may cause serious problems with other queries. So it deserves\nproper testing ...\n\nregards\nTomas\n\n", "msg_date": "Thu, 10 Sep 2009 16:57:39 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "Hi Tomas,\n\n2009/9/10 <[email protected]>\n\n> > default_statistics_target = 100 (tried with 500, no change). Vacuum\n> > analyzed\n> > before initial query, and after each change to default_statistics_target.\n>\n> Modifying the statistics target is useful only if the estimates are\n> seriously off, which is not your case - so it won't help, at least not\n> reliably.\n>\n> > The same query, with a different \"ofid\", will occasionally get the more\n> > optimal plan -- I assume that the distribution of data is the\n> > differentiator\n> > there.\n>\n> Yes, the difference between costs of the two plans is quite small (11796\n> vs. 13153) so it's very sensible to data distribution.\n>\n> > Is there any other data I can provide to shed some light on this?\n>\n> You may try to play with the 'cost' constants - see this:\n>\n>\n> http://www.postgresql.org/docs/8.4/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS\n>\n> You just need to modify them so that the bitmap index scan / bitmap heap\n> scan is prefered to plain index scan.\n>\n> Just be careful - if set in the postgresql.conf, it affects all the\n> queries and may cause serious problems with other queries. So it deserves\n> proper testing ...\n>\n> regards\n> Tomas\n>\n\n\nPlaying around with seq_page_cost (1) and random_page_cost (1), I can get\nthe correct index selected. Applying those same settings to our production\nserver does not produce the optimal plan, though.\n\nHi Tomas,2009/9/10 <[email protected]>\n> default_statistics_target = 100 (tried with 500, no change). Vacuum\n> analyzed\n> before initial query, and after each change to default_statistics_target.\n\nModifying the statistics target is useful only if the estimates are\nseriously off, which is not your case - so it won't help, at least not\nreliably.\n\n> The same query, with a different \"ofid\", will occasionally get the more\n> optimal plan -- I assume that the distribution of data is the\n> differentiator\n> there.\n\nYes, the difference between costs of the two plans is quite small (11796\nvs. 13153) so it's very sensible to data distribution.\n\n> Is there any other data I can provide to shed some light on this?\n\nYou may try to play with the 'cost' constants - see this:\n\nhttp://www.postgresql.org/docs/8.4/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS\n\nYou just need to modify them so that the bitmap index scan / bitmap heap\nscan is prefered to plain index scan.\n\nJust be careful - if set in the postgresql.conf, it affects all the\nqueries and may cause serious problems with other queries. So it deserves\nproper testing ...\n\nregards\nTomasPlaying around with seq_page_cost (1) and random_page_cost (1), I can\nget the correct index selected. 
Applying those same settings to our production server does not produce the optimal plan, though.", "msg_date": "Thu, 10 Sep 2009 08:25:34 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "> Playing around with seq_page_cost (1) and random_page_cost (1), I can get\n> the correct index selected. Applying those same settings to our production\n> server does not produce the optimal plan, though.\n\nI doubt setting seq_page_cost and random_page_cost to the same value is\nreasonable - random access is almost always more expensive than sequential\naccess.\n\nAnyway, post the EXPLAIN ANALYZE output from the production server. Don't\nforget there are other _cost values - try to modify them too, but I'm not\nsure how these values relate to the bitmap heap scan / bitmap index plans.\n\nregards\nTomas\n\n", "msg_date": "Thu, 10 Sep 2009 17:40:24 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "bricklen <[email protected]> writes:\n> Is there any other data I can provide to shed some light on this?\n\nThe table and index definitions?\n\nThe straight indexscan would probably win if the index column order\nwere ofid, date instead of date, ofid. I can't tell if you have\nany other queries for which the existing column order is preferable,\nthough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Sep 2009 11:43:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sub-optimal plan chosen " }, { "msg_contents": "On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane <[email protected]> wrote:\n\n> bricklen <[email protected]> writes:\n> > Is there any other data I can provide to shed some light on this?\n>\n> The table and index definitions?\n>\n> The straight indexscan would probably win if the index column order\n> were ofid, date instead of date, ofid. I can't tell if you have\n> any other queries for which the existing column order is preferable,\n> though.\n>\n> regards, tom lane\n>\n\n\nChanging the order of the WHERE predicates didn't help. The indexes are\nmostly defined as single-column indexes, with the exception of the\n\"statsdaily_unique_idx\" one:\n\nstatsdaily_id_pk PRIMARY KEY, btree (id)\nstatsdaily_unique_idx UNIQUE, btree (date, idaf, idsite, ofid, idcreative,\nidoptimizer)\nstatsdaily_date_idx btree (date)\nstatsdaily_ofid_idx btree (ofid)\nstatsdaily_ofid_sept2009_idx btree (ofid) WHERE date >= '2009-09-01'::date\n\nOn Thu, Sep 10, 2009 at 8:43 AM, Tom Lane <[email protected]> wrote:\nbricklen <[email protected]> writes:\n> Is there any other data I can provide to shed some light on this?\n\nThe table and index definitions?\n\nThe straight indexscan would probably win if the index column order\nwere ofid, date instead of date, ofid.  I can't tell if you have\nany other queries for which the existing column order is preferable,\nthough.\n\n                        regards, tom lane\nChanging the order of the WHERE predicates didn't help. 
The indexes are mostly defined as single-column indexes, with the exception of the \"statsdaily_unique_idx\" one:statsdaily_id_pk PRIMARY KEY, btree (id)\nstatsdaily_unique_idx UNIQUE, btree (date, idaf, idsite, ofid, idcreative, idoptimizer)statsdaily_date_idx btree (date)statsdaily_ofid_idx btree (ofid)statsdaily_ofid_sept2009_idx btree (ofid) WHERE date >= '2009-09-01'::date", "msg_date": "Thu, 10 Sep 2009 09:56:36 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "2009/9/10 <[email protected]>:\n>> Playing around with seq_page_cost (1) and random_page_cost (1), I can get\n>> the correct index selected. Applying those same settings to our production\n>> server does not produce the optimal plan, though.\n>\n> I doubt setting seq_page_cost and random_page_cost to the same value is\n> reasonable - random access is almost always more expensive than sequential\n> access.\n\nIf the data figures to be read from the OS cache, it's very\nreasonable, and the right value is somewhere in the 0.05 - 0.10 range.\n\n...Robert\n", "msg_date": "Thu, 10 Sep 2009 12:57:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "On Thu, Sep 10, 2009 at 9:57 AM, Robert Haas <[email protected]> wrote:\n\n> 2009/9/10 <[email protected]>:\n> >> Playing around with seq_page_cost (1) and random_page_cost (1), I can\n> get\n> >> the correct index selected. Applying those same settings to our\n> production\n> >> server does not produce the optimal plan, though.\n> >\n> > I doubt setting seq_page_cost and random_page_cost to the same value is\n> > reasonable - random access is almost always more expensive than\n> sequential\n> > access.\n>\n> If the data figures to be read from the OS cache, it's very\n> reasonable, and the right value is somewhere in the 0.05 - 0.10 range.\n>\n>\nFor the most part, it will indeed be cached. Thanks for the tip on the\nvalues.\n\nOn Thu, Sep 10, 2009 at 9:57 AM, Robert Haas <[email protected]> wrote:\n2009/9/10  <[email protected]>:\n>> Playing around with seq_page_cost (1) and random_page_cost (1), I can get\n>> the correct index selected. Applying those same settings to our production\n>> server does not produce the optimal plan, though.\n>\n> I doubt setting seq_page_cost and random_page_cost to the same value is\n> reasonable - random access is almost always more expensive than sequential\n> access.\n\nIf the data figures to be read from the OS cache, it's very\nreasonable, and the right value is somewhere in the 0.05 - 0.10 range.\nFor the most part, it will indeed be cached. Thanks for the tip on the values.", "msg_date": "Thu, 10 Sep 2009 10:01:10 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "On Thu, Sep 10, 2009 at 12:56 PM, bricklen <[email protected]> wrote:\n> On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane <[email protected]> wrote:\n>>\n>> bricklen <[email protected]> writes:\n>> > Is there any other data I can provide to shed some light on this?\n>>\n>> The table and index definitions?\n>>\n>> The straight indexscan would probably win if the index column order\n>> were ofid, date instead of date, ofid.  
I can't tell if you have\n>> any other queries for which the existing column order is preferable,\n>> though.\n>>\n>>                        regards, tom lane\n>\n>\n> Changing the order of the WHERE predicates didn't help.\n\nHe's talking about the index definition, not the WHERE clause. The\norder of the WHERE clause is totally irrelevant.\n\n...Robert\n", "msg_date": "Thu, 10 Sep 2009 13:02:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "On Thu, Sep 10, 2009 at 10:02 AM, Robert Haas <[email protected]> wrote:\n\n> On Thu, Sep 10, 2009 at 12:56 PM, bricklen <[email protected]> wrote:\n> > On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane <[email protected]> wrote:\n> >>\n> >> bricklen <[email protected]> writes:\n> >> > Is there any other data I can provide to shed some light on this?\n> >>\n> >> The table and index definitions?\n> >>\n> >> The straight indexscan would probably win if the index column order\n> >> were ofid, date instead of date, ofid. I can't tell if you have\n> >> any other queries for which the existing column order is preferable,\n> >> though.\n> >>\n> >> regards, tom lane\n> >\n> >\n> > Changing the order of the WHERE predicates didn't help.\n>\n> He's talking about the index definition, not the WHERE clause. The\n> order of the WHERE clause is totally irrelevant.\n>\n>\nAh, sorry, missed that.\n\nOn Thu, Sep 10, 2009 at 10:02 AM, Robert Haas <[email protected]> wrote:\nOn Thu, Sep 10, 2009 at 12:56 PM, bricklen <[email protected]> wrote:\n> On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane <[email protected]> wrote:\n>>\n>> bricklen <[email protected]> writes:\n>> > Is there any other data I can provide to shed some light on this?\n>>\n>> The table and index definitions?\n>>\n>> The straight indexscan would probably win if the index column order\n>> were ofid, date instead of date, ofid.  I can't tell if you have\n>> any other queries for which the existing column order is preferable,\n>> though.\n>>\n>>                        regards, tom lane\n>\n>\n> Changing the order of the WHERE predicates didn't help.\n\nHe's talking about the index definition, not the WHERE clause.  The\norder of the WHERE clause is totally irrelevant.\n\nAh, sorry, missed that.", "msg_date": "Thu, 10 Sep 2009 10:07:16 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "On Thu, Sep 10, 2009 at 10:07 AM, bricklen <[email protected]> wrote:\n\n> On Thu, Sep 10, 2009 at 10:02 AM, Robert Haas <[email protected]>wrote:\n>\n>> On Thu, Sep 10, 2009 at 12:56 PM, bricklen <[email protected]> wrote:\n>> > On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane <[email protected]> wrote:\n>> >>\n>> >> bricklen <[email protected]> writes:\n>> >> > Is there any other data I can provide to shed some light on this?\n>> >>\n>> >> The table and index definitions?\n>> >>\n>> >> The straight indexscan would probably win if the index column order\n>> >> were ofid, date instead of date, ofid. I can't tell if you have\n>> >> any other queries for which the existing column order is preferable,\n>> >> though.\n>> >>\n>> >> regards, tom lane\n>> >\n>> >\n>> > Changing the order of the WHERE predicates didn't help.\n>>\n>> He's talking about the index definition, not the WHERE clause. 
The\n>> order of the WHERE clause is totally irrelevant.\n>>\n>>\n> Ah, sorry, missed that.\n>\n\n\nI just created a new index as Tom said, and the query *does* use the new\nindex (where ofid precedes date in the definition).\n\nOn Thu, Sep 10, 2009 at 10:07 AM, bricklen <[email protected]> wrote:\nOn Thu, Sep 10, 2009 at 10:02 AM, Robert Haas <[email protected]> wrote:\n\nOn Thu, Sep 10, 2009 at 12:56 PM, bricklen <[email protected]> wrote:\n> On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane <[email protected]> wrote:\n>>\n>> bricklen <[email protected]> writes:\n>> > Is there any other data I can provide to shed some light on this?\n>>\n>> The table and index definitions?\n>>\n>> The straight indexscan would probably win if the index column order\n>> were ofid, date instead of date, ofid.  I can't tell if you have\n>> any other queries for which the existing column order is preferable,\n>> though.\n>>\n>>                        regards, tom lane\n>\n>\n> Changing the order of the WHERE predicates didn't help.\n\nHe's talking about the index definition, not the WHERE clause.  The\norder of the WHERE clause is totally irrelevant.\n\nAh, sorry, missed that.\nI just created a new index as Tom said, and the query *does* use the new index (where ofid precedes date in the definition).", "msg_date": "Thu, 10 Sep 2009 10:12:23 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sub-optimal plan chosen" }, { "msg_contents": "bricklen <[email protected]> writes:\n> I just created a new index as Tom said, and the query *does* use the new\n> index (where ofid precedes date in the definition).\n\nAnd is it indeed faster than the other alternatives?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Sep 2009 13:56:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sub-optimal plan chosen " }, { "msg_contents": "On Thu, Sep 10, 2009 at 10:56 AM, Tom Lane <[email protected]> wrote:\n\n> bricklen <[email protected]> writes:\n> > I just created a new index as Tom said, and the query *does* use the new\n> > index (where ofid precedes date in the definition).\n>\n> And is it indeed faster than the other alternatives?\n>\n> regards, tom lane\n>\n\nAbout the same as the earlier, faster plan:\n\n Aggregate (cost=2342.79..2342.80 rows=1 width=8) (actual time=8.433..8.433\nrows=1 loops=1)\n -> Index Scan using statsdaily_ofid_date on statsdaily\n(cost=0.00..2330.61 rows=4873 width=8) (actual time=0.089..5.043 rows=3125\nloops=1)\n Index Cond: ((ofid = 38) AND (date >= '2009-09-01'::date) AND (date\n<= '2999-01-01'::date))\n Total runtime: 8.470 ms\n\nOn Thu, Sep 10, 2009 at 10:56 AM, Tom Lane <[email protected]> wrote:\nbricklen <[email protected]> writes:\n> I just created a new index as Tom said, and the query *does* use the new\n> index (where ofid precedes date in the definition).\n\nAnd is it indeed faster than the other alternatives?\n\n                        regards, tom lane\nAbout the same as the earlier, faster plan: Aggregate  (cost=2342.79..2342.80 rows=1 width=8) (actual time=8.433..8.433 rows=1 loops=1)   ->  Index Scan using statsdaily_ofid_date on statsdaily  (cost=0.00..2330.61 rows=4873 width=8) (actual time=0.089..5.043 rows=3125 loops=1)\n         Index Cond: ((ofid = 38) AND (date >= '2009-09-01'::date) AND (date <= '2999-01-01'::date)) Total runtime: 8.470 ms", "msg_date": "Thu, 10 Sep 2009 11:02:48 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sub-optimal plan 
chosen" } ]
[ { "msg_contents": "Is it faster to use a Stored Proc that returns a Type or has Out Parameters\nthen a View? Views are easier to maintain I feel. I remember testing this\naround 8.0 days and the view seemed slower with a lot of data.\n\nIs it faster to use a Stored Proc that returns a Type or has Out Parameters then a View?  Views are easier to maintain I feel.  I remember testing this around 8.0 days and the view seemed slower with a lot of data.", "msg_date": "Fri, 11 Sep 2009 11:46:10 -0400", "msg_from": "Jason Tesser <[email protected]>", "msg_from_op": true, "msg_subject": "View vs Stored Proc Performance" }, { "msg_contents": "On Fri, Sep 11, 2009 at 11:46 AM, Jason Tesser <[email protected]> wrote:\n> Is it faster to use a Stored Proc that returns a Type or has Out Parameters\n> then a View?  Views are easier to maintain I feel.  I remember testing this\n> around 8.0 days and the view seemed slower with a lot of data.\n\nfor the most part, a view can be faster and would rarely be slower.\nViews are like C macros for you query...they are expanded first and\nthen planned. Functions (except for very simple ones) are black boxes\nto the planner and can materially hurt query performance in common\ncases. The only case where a function would win is when dealing with\nconner case planner issues (by forcing a nestloop for example).\n\nmerlin\n", "msg_date": "Fri, 11 Sep 2009 13:37:57 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "OK so in my case I have a Person, Email, Phone and Address table. I want to\nreturn the Person and an Array of the others. so my return type would be\nsomething like Person, Email[], Phone[], Address[]\n\nWhen passed a personId.\n\nAre you saying this is better in a view. Create a view that can return that\nas oppessed to 1. defining a type for a function to return or 2. a function\nthat returns 4 out parameters (Person, Address[] ,....)\n\nThanks\n\nOn Fri, Sep 11, 2009 at 1:37 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Fri, Sep 11, 2009 at 11:46 AM, Jason Tesser <[email protected]>\n> wrote:\n> > Is it faster to use a Stored Proc that returns a Type or has Out\n> Parameters\n> > then a View? Views are easier to maintain I feel. I remember testing\n> this\n> > around 8.0 days and the view seemed slower with a lot of data.\n>\n> for the most part, a view can be faster and would rarely be slower.\n> Views are like C macros for you query...they are expanded first and\n> then planned. Functions (except for very simple ones) are black boxes\n> to the planner and can materially hurt query performance in common\n> cases. The only case where a function would win is when dealing with\n> conner case planner issues (by forcing a nestloop for example).\n>\n> merlin\n>\n\nOK so in my case I have a Person, Email, Phone and Address table.  I want to return the Person and an Array of the others. so my return type would be something like Person, Email[], Phone[], Address[]  When passed a personId.  \nAre you saying this is better in a view.  Create a view that can return that as oppessed to 1. defining a type for a function to return or 2. a function that returns 4 out parameters (Person, Address[] ,....)Thanks\nOn Fri, Sep 11, 2009 at 1:37 PM, Merlin Moncure <[email protected]> wrote:\nOn Fri, Sep 11, 2009 at 11:46 AM, Jason Tesser <[email protected]> wrote:\n> Is it faster to use a Stored Proc that returns a Type or has Out Parameters\n> then a View?  
Views are easier to maintain I feel.  I remember testing this\n> around 8.0 days and the view seemed slower with a lot of data.\n\nfor the most part, a view can be faster and would rarely be slower.\nViews are like C macros for you query...they are expanded first and\nthen planned.  Functions (except for very simple ones) are black boxes\nto the planner and can materially hurt query performance in common\ncases.  The only case where a function would win is when dealing with\nconner case planner issues (by forcing a nestloop for example).\n\nmerlin", "msg_date": "Fri, 11 Sep 2009 14:56:31 -0400", "msg_from": "Jason Tesser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "On Fri, Sep 11, 2009 at 2:56 PM, Jason Tesser <[email protected]> wrote:\n> OK so in my case I have a Person, Email, Phone and Address table.  I want to\n> return the Person and an Array of the others. so my return type would be\n> something like Person, Email[], Phone[], Address[]\n>\n> When passed a personId.\n>\n> Are you saying this is better in a view.  Create a view that can return that\n> as oppessed to 1. defining a type for a function to return or 2. a function\n> that returns 4 out parameters (Person, Address[] ,....)\n\nif you are using 8.3+ and are wiling to make a composite type:\n\ncreate table person_t(email text, phone text, address text);\n\nselect person_id, array_agg((email, phone, address)::person_t) from\nperson group by 1;\n\nor, detail fields are in another table:\n\nselect person_id, (select array(select (email, phone,\naddress)::person_t) from detail where person_id = p.person_id) from\nperson_t;\n\nmerlin\n", "msg_date": "Fri, 11 Sep 2009 17:01:05 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "Right what I was wondering is is this better done in a view? or a stored\nproc? I am guessing based on your initial response the view is better\nperformance. These are the types of queries I will be doing though.\n\nOn Fri, Sep 11, 2009 at 5:01 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Fri, Sep 11, 2009 at 2:56 PM, Jason Tesser <[email protected]>\n> wrote:\n> > OK so in my case I have a Person, Email, Phone and Address table. I want\n> to\n> > return the Person and an Array of the others. so my return type would be\n> > something like Person, Email[], Phone[], Address[]\n> >\n> > When passed a personId.\n> >\n> > Are you saying this is better in a view. Create a view that can return\n> that\n> > as oppessed to 1. defining a type for a function to return or 2. a\n> function\n> > that returns 4 out parameters (Person, Address[] ,....)\n>\n> if you are using 8.3+ and are wiling to make a composite type:\n>\n> create table person_t(email text, phone text, address text);\n>\n> select person_id, array_agg((email, phone, address)::person_t) from\n> person group by 1;\n>\n> or, detail fields are in another table:\n>\n> select person_id, (select array(select (email, phone,\n> address)::person_t) from detail where person_id = p.person_id) from\n> person_t;\n>\n> merlin\n>\n\nRight what I was wondering is is this better done in a view? or a stored proc?   I am guessing based on your initial response the view is better performance.  
These are the types of queries I will be doing though.\nOn Fri, Sep 11, 2009 at 5:01 PM, Merlin Moncure <[email protected]> wrote:\nOn Fri, Sep 11, 2009 at 2:56 PM, Jason Tesser <[email protected]> wrote:\n> OK so in my case I have a Person, Email, Phone and Address table.  I want to\n> return the Person and an Array of the others. so my return type would be\n> something like Person, Email[], Phone[], Address[]\n>\n> When passed a personId.\n>\n> Are you saying this is better in a view.  Create a view that can return that\n> as oppessed to 1. defining a type for a function to return or 2. a function\n> that returns 4 out parameters (Person, Address[] ,....)\n\nif you are using 8.3+ and are wiling to make a composite type:\n\ncreate table person_t(email text, phone text, address text);\n\nselect person_id, array_agg((email, phone, address)::person_t) from\nperson group by 1;\n\nor, detail fields are in another table:\n\nselect person_id, (select array(select (email, phone,\naddress)::person_t) from detail where person_id = p.person_id) from\nperson_t;\n\nmerlin", "msg_date": "Fri, 11 Sep 2009 17:27:24 -0400", "msg_from": "Jason Tesser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "On Fri, Sep 11, 2009 at 5:27 PM, Jason Tesser <[email protected]> wrote:\n> Right what I was wondering is is this better done in a view? or a stored\n> proc?   I am guessing based on your initial response the view is better\n> performance.  These are the types of queries I will be doing though.\n>\n\nin performance terms the view should be faster if you are doing things\nlike joining the result to another table...the planner can see\n'through' the view, etc. in a function, the result is fetched first\nand materialized without looking at the rest of the query. the actual\nmechanism you use to build the arrays is likely going to be more\nimportant than anything else.\n\nmerlin\n", "msg_date": "Fri, 11 Sep 2009 17:53:20 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> like joining the result to another table...the planner can see\n> 'through' the view, etc. in a function, the result is fetched first\n> and materialized without looking at the rest of the query. \n\nI though the planner would \"see through\" SQL language functions and\ninline them when possible, so they often can make for parametrized\nviews...\n\nRegards,\n-- \ndim\n", "msg_date": "Sat, 12 Sep 2009 13:51:29 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "On Sat, Sep 12, 2009 at 7:51 AM, Dimitri Fontaine\n<[email protected]> wrote:\n> Merlin Moncure <[email protected]> writes:\n>> like joining the result to another table...the planner can see\n>> 'through' the view, etc.  in a function, the result is fetched first\n>> and materialized without looking at the rest of the query.\n>\n> I though the planner would \"see through\" SQL language functions and\n> inline them when possible, so they often can make for parametrized\n> views...\n\nIt can happen for simple functions but often it will not. 
For views\nit always happens.\n\nmerlin\n", "msg_date": "Sat, 12 Sep 2009 09:22:50 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "Merlin Moncure wrote:\n> On Sat, Sep 12, 2009 at 7:51 AM, Dimitri Fontaine\n> <[email protected]> wrote:\n>> Merlin Moncure <[email protected]> writes:\n>>> like joining the result to another table...the planner can see\n>>> 'through' the view, etc. in a function, the result is fetched first\n>>> and materialized without looking at the rest of the query.\n>> I though the planner would \"see through\" SQL language functions and\n>> inline them when possible, so they often can make for parametrized\n>> views...\n> \n> It can happen for simple functions but often it will not. For views\n> it always happens.\n\nAre functions in language 'sql' handled differently than those of \nlanguage 'plpgsql'?\n\nI think they're not so in any case a function will behave as a black box \nwith regards to the planner and optimizer (and views are always \n'transparent').\n\n", "msg_date": "Tue, 15 Sep 2009 17:14:06 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance" }, { "msg_contents": "Ivan Voras <[email protected]> writes:\n> Are functions in language 'sql' handled differently than those of \n> language 'plpgsql'?\n\nYes.\n\n> I think they're not so in any case a function will behave as a black box \n> with regards to the planner and optimizer (and views are always \n> 'transparent').\n\nNo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Sep 2009 11:26:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance " }, { "msg_contents": "2009/9/15 Tom Lane <[email protected]>:\n> Ivan Voras <[email protected]> writes:\n>> Are functions in language 'sql' handled differently than those of\n>> language 'plpgsql'?\n>\n> Yes.\n>\n>> I think they're not so in any case a function will behave as a black box\n>> with regards to the planner and optimizer (and views are always\n>> 'transparent').\n>\n> No.\n\nThanks! This is interesting information!\n\n-- \nf+rEnSIBITAhITAhLR1nM9F4cIs5KJrhbcsVtUIt7K1MhWJy1A==\n", "msg_date": "Tue, 15 Sep 2009 18:12:38 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View vs Stored Proc Performance" } ]
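[Editor's note on the thread above] Putting the pieces together, a view-based version of Merlin's array_agg approach might look like the sketch below. This assumes 8.4 or later, where array_agg is built in (on 8.3 an equivalent custom aggregate would be needed), and the person/person_detail table and column names are illustrative assumptions, not a real schema:

-- Composite type for one contact record; CREATE TYPE is used here instead
-- of the throwaway table in the earlier example, the effect is the same.
CREATE TYPE contact_t AS (email text, phone text, address text);

-- One row per person, with the related rows rolled up into an array.
CREATE VIEW person_contacts AS
SELECT p.person_id,
       p.name,
       (SELECT array_agg((d.email, d.phone, d.address)::contact_t)
          FROM person_detail d
         WHERE d.person_id = p.person_id) AS contacts
FROM person p;

-- Because the view is expanded inline, this behaves like writing the same
-- condition directly against the person table.
SELECT * FROM person_contacts WHERE person_id = 42;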
[ { "msg_contents": "Hey folks,\n\nEarlier in the week I wrote a Munin plugin that takes the \"await\" and\n\"average queue length\" fields from \"iostat -x\" and graphs them.\n\nThis seems rather odd to me :\n\nhttp://picasaweb.google.ca/alan.mckay/Work#5380253477470243954\n\nThat is Qlen. And await looks similar\n\nhttp://picasaweb.google.ca/alan.mckay/Work#5380254090296723426\n\nThis is on an IBM 3650 with the 2 main \"internal\" drives set up in a\nmirrored config, and sdb are the 6 other drives set up in a RAID5 with\na global hot spare. (4 drives in array + 1 to make it RAID5 + global\nhot spare)\n\nWe aren't seeing any performance problems on this per-se. But that\njust seems like a really odd graph to me. Can anyone explain it? In\nparticular, how regular it is?\n\ncheers,\n-Alan\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 11 Sep 2009 12:58:33 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "odd iostat graph" }, { "msg_contents": "\nOn 9/11/09 9:58 AM, \"Alan McKay\" <[email protected]> wrote:\n\n> Hey folks,\n> \n> Earlier in the week I wrote a Munin plugin that takes the \"await\" and\n> \"average queue length\" fields from \"iostat -x\" and graphs them.\n> \n> This seems rather odd to me :\n> \n> http://picasaweb.google.ca/alan.mckay/Work#5380253477470243954\n> \n> That is Qlen. And await looks similar\n> \n> http://picasaweb.google.ca/alan.mckay/Work#5380254090296723426\n> \n> This is on an IBM 3650 with the 2 main \"internal\" drives set up in a\n> mirrored config, and sdb are the 6 other drives set up in a RAID5 with\n> a global hot spare. (4 drives in array + 1 to make it RAID5 + global\n> hot spare)\n> \n> We aren't seeing any performance problems on this per-se. But that\n> just seems like a really odd graph to me. Can anyone explain it? In\n> particular, how regular it is?\n\nMy guess is this is checkpoint related.\nFind out when your checkpoints are happening. The drops are most likely due\nto the sync() on all outstanding writes at the end of each checkpoint. The\nrise is probably small writes not yet on disk in the OS bufffer cache. If\nthis is due to checkpoints, I would expect a burst of write volume to disk\nat the same time of the drop.\n\nYou can change your logging settings to output the time of each checkpoint\nand some stats about them.\n\n> \n> cheers,\n> -Alan\n> \n> --\n> ³Don't eat anything you've ever seen advertised on TV²\n> - Michael Pollan, author of \"In Defense of Food\"\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 11 Sep 2009 11:02:35 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odd iostat graph" }, { "msg_contents": "> My guess is this is checkpoint related.\n\nI'll assume \"checkpoint\" is a PG term that I'm not yet familiar with -\nwill query my DBA :-)\n\nIf this OS buffer cache, wouldn't that be cached an awfully long time?\n i.e. 
we're in big trouble if we get a bad crash?\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 11 Sep 2009 14:13:24 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: odd iostat graph" }, { "msg_contents": "Alan McKay <[email protected]> wrote:\n \n>> My guess is this is checkpoint related.\n> \n> I'll assume \"checkpoint\" is a PG term that I'm not yet familiar with\n-\n> will query my DBA :-)\n \nA checkpoint flushes all dirty PostgreSQL buffers to the OS and then\ntells the OS to write them to disk. The exact details of how that's\ndone and the timings involved vary with PostgreSQL version and\nconfiguration.\n \n> If this OS buffer cache, wouldn't that be cached an awfully long\n> time? i.e. we're in big trouble if we get a bad crash?\n \nBefore the commit of a database transaction is completed the changes\nwhich are involved in that are written to a write ahead log (WAL). A\ncheckpoint is also recorded in the WAL. On recovery from a crash,\nPostgreSQL replays all activity from committed transactions after the\nlast checkpoint; so nothing from a committed transaction is lost.\n \n-Kevin\n", "msg_date": "Fri, 11 Sep 2009 13:22:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odd iostat graph" }, { "msg_contents": "On Fri, 11 Sep 2009, Alan McKay wrote:\n\n> We aren't seeing any performance problems on this per-se. But that\n> just seems like a really odd graph to me. Can anyone explain it? In\n> particular, how regular it is?\n\nWhat's the scale on the bottom there? The label says \"by week\" but the \nway your message is written makes me think it's actually a much smaller \ntime frame. If those valleys are around around five minutes apart, those \nare the checkpoints finishing; the shape of the graph is right for it to \nbe those.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 11 Sep 2009 16:28:06 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odd iostat graph" }, { "msg_contents": "> What's the scale on the bottom there?  The label says \"by week\" but the way\n> your message is written makes me think it's actually a much smaller time\n> frame.  If those valleys are around around five minutes apart, those are the\n> checkpoints finishing; the shape of the graph is right for it to be those.\n\nNo, that's about 6 days of stats.\nThe numbers 04, 05, 06 are Sept 04, 05, 06 ...\n\nThis is the part I found oddest - that this takes place over such a\nhuge timeframe!\n\nMunin takes a snapshot every 5 minutes, and this graph shows it\naveraged over that timeframe.\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 11 Sep 2009 16:31:18 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: odd iostat graph" }, { "msg_contents": "On Fri, 11 Sep 2009, Alan McKay wrote:\n\n> Munin takes a snapshot every 5 minutes, and this graph shows it\n> averaged over that timeframe.\n\nThe default postgresql.conf puts a checkpoint every 5 minutes as well. \nIt's not going to be as exact as Munin's time though, they'll be just a \nlittle longer than that. I wonder if you're seeing the checkpoint pattern \nanyway. 
If the checkpoint period is fairly stable (which it could be in \nyour case), is slighly longer than the monitoring one, and each checkpoint \nhas the same basic shape (also usually true), each monitoring sample is \ngoing to trace out the usual checkpoint pattern by sampling a slightly \nlater point from successive ones. The shape of the curve you're seeing \nseems way too close to the standard checkpoint one to be just a \ncoincidence.\n\nIn any case, 5 minutes is unfortunately for you a really bad sampling \nperiod for a PG database because of the similarity to the checkpoint \ntiming. You have to take measurements at least once a minute to see the \ncheckpoints happening usefully at all. I think you're stuck generating an \nactivity graph some other way with finer resolution to get to the bottom \nof this.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 11 Sep 2009 16:47:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odd iostat graph" } ]
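[Editor's note on the thread above] A quick way to test the checkpoint theory without touching the Munin setup is to check the checkpoint-related settings and turn on checkpoint logging, then line the logged checkpoint times up against the dips in the graph. The query below only inspects the settings (the names are valid for the 8.3/8.4-era releases current when this thread was written); actually changing log_checkpoints still means editing postgresql.conf and reloading:

-- Current checkpoint configuration, including whether checkpoints are
-- being logged at all.
SELECT name, setting
FROM pg_settings
WHERE name IN ('checkpoint_timeout',
               'checkpoint_segments',
               'checkpoint_completion_target',
               'log_checkpoints');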
[ { "msg_contents": "Hi,\n\nWe have a very large, partitioned, table that we often need to query\nfrom new connections, but frequently with similar queries. We have\nconstraint exclusion on to take advantage of the partitioning. This also\nmakes query planning more expensive. As a result, the CPU is fully\nloaded, all the time, preparing queries, many of which have been\nprepared, identically, by other connections.\n\nIs there any way to have a persistent plan cache that remains between\nconnections? If such a mechanism existed, it would give us a great\nspeedup because the CPU's load for planning would be lightened\nsubstantially.\n\nThank you,\nJoshua Rubin", "msg_date": "Fri, 11 Sep 2009 14:16:49 -0600", "msg_from": "Joshua Rubin <[email protected]>", "msg_from_op": true, "msg_subject": "Persistent Plan Cache" }, { "msg_contents": "Joshua Rubin <[email protected]> writes:\n> We have a very large, partitioned, table that we often need to query\n> from new connections, but frequently with similar queries. We have\n> constraint exclusion on to take advantage of the partitioning. This also\n> makes query planning more expensive. As a result, the CPU is fully\n> loaded, all the time, preparing queries, many of which have been\n> prepared, identically, by other connections.\n\nIf you're depending on constraint exclusion, it's hard to see how plan\ncaching could help you at all. The generated plan needs to vary\ndepending on the actual WHERE-clause parameters.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Sep 2009 13:45:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Persistent Plan Cache " }, { "msg_contents": "Tom,\n\n> If you're depending on constraint exclusion, it's hard to see how plan\n> caching could help you at all. The generated plan needs to vary\n> depending on the actual WHERE-clause parameters.\n\nThank you for the reply.\n\nWe \"hardcode\" the parts of the where clause so that the prepared plan\nwill not vary among the possible partitions of the table. The only\nvalues that are bound would not affect the planner's choice of table.\n\nThanks,\nJoshua", "msg_date": "Sun, 13 Sep 2009 12:15:04 -0600", "msg_from": "Joshua Rubin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Persistent Plan Cache" }, { "msg_contents": "Tom Lane wrote:\n> Joshua Rubin <[email protected]> writes:\n>> We have a very large, partitioned, table that we often need to query\n>> from new connections, but frequently with similar queries. We have\n>> constraint exclusion on to take advantage of the partitioning. This also\n>> makes query planning more expensive. As a result, the CPU is fully\n>> loaded, all the time, preparing queries, many of which have been\n>> prepared, identically, by other connections.\n> \n> If you're depending on constraint exclusion, it's hard to see how plan\n> caching could help you at all. The generated plan needs to vary\n> depending on the actual WHERE-clause parameters.\n\nThat's what the OP really should've complained about. 
If we addressed\nthat, so that a generic plan was created that determines which child\ntables can be excluded at run time, there would be no need for the\npersistent plan cache.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 13 Sep 2009 22:40:42 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Persistent Plan Cache" }, { "msg_contents": "* Heikki Linnakangas ([email protected]) wrote:\n> That's what the OP really should've complained about. If we addressed\n> that, so that a generic plan was created that determines which child\n> tables can be excluded at run time, there would be no need for the\n> persistent plan cache.\n\nThis would definitely be nice to have.. I'm not sure what the level of\ndifficulty to do it is though.\n\n\tStephen", "msg_date": "Sun, 13 Sep 2009 17:54:51 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Persistent Plan Cache" }, { "msg_contents": "Joshua Rubin wrote:\n> We \"hardcode\" the parts of the where clause so that the prepared plan\n> will not vary among the possible partitions of the table. The only\n> values that are bound would not affect the planner's choice of table.\n\nThen you would benefit from using prepared statements in the client,\nand/or connection pooling to avoid having to re-prepare because of\nreconnecting.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 14 Sep 2009 08:01:04 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Persistent Plan Cache" }, { "msg_contents": "Hi,\n\nHeikki Linnakangas <[email protected]> writes:\n> Joshua Rubin wrote:\n>> We \"hardcode\" the parts of the where clause so that the prepared plan\n>> will not vary among the possible partitions of the table. The only\n>> values that are bound would not affect the planner's choice of table.\n>\n> Then you would benefit from using prepared statements in the client,\n> and/or connection pooling to avoid having to re-prepare because of\n> reconnecting.\n\nAnd you can do both in a transparent way (wrt pooling) using\npreprepare. The problem without it is for the application to know when\nthe statement is already prepared (that depends on whether the pooling\nsoftware will assign a new fresh connection or not). Using preprepare\nyour application skip the point and simply EXECUTE the already prepared\nstatements.\n\n http://preprepare.projects.postgresql.org/README.html\n http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/preprepare/preprepare/\n http://packages.debian.org/search?keywords=preprepare\n\nRegards,\n-- \ndim\n", "msg_date": "Mon, 14 Sep 2009 09:33:23 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Persistent Plan Cache" }, { "msg_contents": "Joshua Rubin wrote:\n> Hi,\n> \n> We have a very large, partitioned, table that we often need to query\n> from new connections, but frequently with similar queries. We have\n> constraint exclusion on to take advantage of the partitioning. This also\n> makes query planning more expensive. As a result, the CPU is fully\n> loaded, all the time, preparing queries, many of which have been\n> prepared, identically, by other connections.\n> \n> Is there any way to have a persistent plan cache that remains between\n> connections? 
If such a mechanism existed, it would give us a great\n> speedup because the CPU's load for planning would be lightened\n> substantially.\n\nIt's not a great solution, but depending on the specific client \ntechnology you use, it can done on the client-side. For example, I've \ndone it before in Java and PHP, and the principle extends to any \nenvironment that has any possibility of maintaining \"persistent\" \nconnections to the database, if you create a thin wrapper for the \nconnections.\n\nI have open-sourced such a wrapper for PHP, if you're interested.\n\n", "msg_date": "Mon, 14 Sep 2009 12:30:26 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Persistent Plan Cache" } ]
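To make the approach discussed in the thread above concrete, here is a minimal sketch of preparing a partition-aware statement once per pooled connection and then only executing it, along the lines Joshua, Heikki and Dimitri describe. The table, column and statement names below are hypothetical (not taken from the thread); only values that do not affect partition selection are left as bind parameters, so constraint exclusion is resolved once at PREPARE time and repeated executions on the same connection skip planning:

    -- assumed schema: an events table partitioned by event_date into monthly children
    PREPARE recent_events_sept AS
        SELECT id, payload
        FROM events
        WHERE event_date >= DATE '2009-09-01'   -- partition-selecting bounds are hardcoded
          AND event_date <  DATE '2009-10-01'
          AND customer_id = $1;                 -- bound value; does not affect which child is scanned

    EXECUTE recent_events_sept(42);             -- no further planning cost on this connection

With a pooler kept in session mode, or with the preprepare module mentioned above, the PREPARE cost is paid once per backend rather than once per query.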
[ { "msg_contents": "Hi,\n\nI am running a relativ complex query on pg 8.3.5 and have (possible) \nwrong query plan.\nMy select :\n\nexplain analyze select d.ids from a_doc d join a_sklad s on \n(d.ids=s.ids_doc) join a_nomen n on (n.ids=s.ids_num) join a_nom_gr \nnmgr on (nmgr.ids=n.ids_grupa) join a_gar_prod_r gr on \n(gr.ids_a_sklad=s.ids and gr.sernum!='ok') join a_location l on \n(l.ids=s.ids_sklad) join a_klienti kl on (kl.ids=d.ids_ko) left \nouter join a_slujiteli sl on (sl.ids=d.ids_slu_ka) left outer join \na_slujiteli slu on (slu.ids=d.ids_slu_targ) where d.op=1 AND \nd.date_op >= 12320 AND d.date_op <= 12362 and n.num like '191%';\n\nIf I run the query without thle last part : and n.num like '191%' \nit work ok as speed ~ 30 sec on not very big db.\nIf I run the full query it take very long time to go ( i never waited \nto the end but it take > 60 min.)\n\nThe filed n.num is indexed and looks ok for me.\n\nI post explan analyze for query without n.num like '191%' and only \nexplain for query with n.num like '191%' :\n\nexplain analyze select d.ids from a_doc d join a_sklad s on \n(d.ids=s.ids_doc) join a_nomen n on (n.ids=s.ids_num) join a_nom_gr \nnmgr on (nmgr.ids=n.ids_grupa) join a_gar_prod_r gr on \n(gr.ids_a_sklad=s.ids and gr.sernum!='ok') join a_location l on \n(l.ids=s.ids_sklad) join a_klienti kl on (kl.ids=d.ids_ko) left \nouter join a_slujiteli sl on (sl.ids=d.ids_slu_ka) left outer join \na_slujiteli slu on (slu.ids=d.ids_slu_targ) where d.op=1 AND \nd.date_op >= 12320 AND d.date_op <= 12362 ;\n\n-------------\n Nested Loop Left Join (cost=345.50..190641.97 rows=1488 width=64) \n(actual time=446.905..30681.604 rows=636 loops=1)\n -> Nested Loop (cost=345.50..189900.14 rows=1488 width=128) \n(actual time=446.870..30676.472 rows=636 loops=1)\n -> Nested Loop (cost=345.50..189473.66 rows=1488 \nwidth=192) (actual time=427.522..30595.438 rows=636 loops=1)\n -> Nested Loop (cost=345.50..189049.52 rows=1488 \nwidth=192) (actual time=370.034..29609.647 rows=636 loops=1)\n -> Hash Join (cost=345.50..178565.42 rows=7204 \nwidth=256) (actual time=363.667..29110.776 rows=9900 loops=1)\n Hash Cond: (s.ids_sklad = l.ids)\n -> Nested Loop (cost=321.79..178442.65 \nrows=7204 width=320) (actual time=363.163..29096.591 rows=9900 loops=1)\n -> Hash Left Join \n(cost=321.79..80186.96 rows=4476 width=128) (actual \ntime=278.277..13852.952 rows=8191 loops=1)\n Hash Cond: (d.ids_slu_ka = sl.ids)\n -> Nested Loop \n(cost=223.17..80065.83 rows=4476 width=192) (actual \ntime=164.664..13731.739 rows=8191 loops=1)\n -> Bitmap Heap Scan on \na_doc d (cost=223.17..36926.67 rows=6598 width=256) (actual \ntime=121.306..587.479 rows=8191 loops=1)\n Recheck Cond: \n((date_op >= 12320) AND (date_op <= 12362))\n Filter: (op = 1)\n -> Bitmap Index \nScan on i_doc_date_op (cost=0.00..221.52 rows=10490 width=0) (actual \ntime=107.212..107.212 rows=11265 loops=1)\n Index Cond: \n((date_op >= 12320) AND (date_op <= 12362))\n -> Index Scan using \na_klienti_pkey on a_klienti kl (cost=0.00..6.53 rows=1 width=64) \n(actual time=1.598..1.602 rows=1 loops=8191)\n Index Cond: \n(kl.ids = d.ids_ko)\n -> Hash (cost=77.72..77.72 \nrows=1672 width=64) (actual time=113.591..113.591 rows=1672 loops=1)\n -> Seq Scan on \na_slujiteli sl (cost=0.00..77.72 rows=1672 width=64) (actual \ntime=10.434..112.508 rows=1672 loops=1)\n -> Index Scan using i_sklad_ids_doc \non a_sklad s (cost=0.00..21.90 rows=4 width=256) (actual \ntime=1.582..1.859 rows=1 loops=8191)\n Index Cond: (s.ids_doc = d.ids)\n -> Hash (cost=19.43..19.43 
rows=343 \nwidth=64) (actual time=0.460..0.460 rows=343 loops=1)\n -> Seq Scan on a_location l \n(cost=0.00..19.43 rows=343 width=64) (actual time=0.017..0.248 \nrows=343 loops=1)\n -> Index Scan using i_a_gar_prod_r_ids_a_sklad \non a_gar_prod_r gr (cost=0.00..1.44 rows=1 width=64) (actual \ntime=0.049..0.049 rows=0 loops=9900)\n Index Cond: (gr.ids_a_sklad = s.ids)\n Filter: (gr.sernum <> 'ok'::text)\n -> Index Scan using a_nomen_pkey on a_nomen n \n(cost=0.00..0.27 rows=1 width=128) (actual time=1.548..1.548 rows=1 \nloops=636)\n Index Cond: (n.ids = s.ids_num)\n -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr \n(cost=0.00..0.27 rows=1 width=64) (actual time=0.125..0.126 rows=1 \nloops=636)\n Index Cond: (nmgr.ids = n.ids_grupa)\n -> Index Scan using a_slujiteli_pkey on a_slujiteli slu \n(cost=0.00..0.49 rows=1 width=64) (actual time=0.006..0.006 rows=1 \nloops=636)\n Index Cond: (slu.ids = d.ids_slu_targ)\n Total runtime: 30682.134 ms\n(33 rows)\n\n\n explain select d.ids from a_doc d join a_sklad s on \n(d.ids=s.ids_doc) join a_nomen n on (n.ids=s.ids_num) join a_nom_gr \nnmgr on (nmgr.ids=n.ids_grupa) join a_gar_prod_r gr on \n(gr.ids_a_sklad=s.ids and gr.sernum!='ok') join a_location l on \n(l.ids=s.ids_sklad) join a_klienti kl on (kl.ids=d.ids_ko) left \nouter join a_slujiteli sl on (sl.ids=d.ids_slu_ka) left outer join \na_slujiteli slu on (slu.ids=d.ids_slu_targ) where d.op=1 AND \nd.date_op >= 12320 AND d.date_op <= 12362 and n.num like '191%';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=63.61..133467.00 rows=4 width=64)\n -> Nested Loop (cost=63.61..133433.87 rows=4 width=128)\n -> Nested Loop (cost=63.61..133422.75 rows=4 width=192)\n -> Nested Loop Left Join (cost=63.61..133421.63 \nrows=4 width=256)\n -> Nested Loop (cost=63.61..133420.31 rows=4 width=320)\n -> Nested Loop (cost=63.61..133381.08 \nrows=6 width=384)\n -> Nested Loop \n(cost=63.61..127621.55 rows=2833 width=192)\n -> Nested Loop \n(cost=63.61..107660.43 rows=13716 width=256)\n -> Index Scan using \ni_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128)\n Index Cond: \n(((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n Filter: \n((num)::text ~~ '191%'::text)\n -> Bitmap Heap Scan on \na_sklad s (cost=63.61..4468.84 rows=1173 width=256)\n Recheck Cond: \n(s.ids_num = n.ids)\n -> Bitmap Index \nScan on i_sklad_ids_num (cost=0.00..63.32 rows=1173 width=0)\n Index Cond: \n(s.ids_num = n.ids)\n -> Index Scan using \ni_a_gar_prod_r_ids_a_sklad on a_gar_prod_r gr (cost=0.00..1.44 rows=1 \nwidth=64)\n Index Cond: \n(gr.ids_a_sklad = s.ids)\n Filter: (gr.sernum <> 'ok'::text)\n -> Index Scan using a_doc_pkey on \na_doc d (cost=0.00..2.02 rows=1 width=256)\n Index Cond: (d.ids = s.ids_doc)\n Filter: ((d.date_op >= 12320) \nAND (d.date_op <= 12362) AND (d.op = 1))\n -> Index Scan using a_klienti_pkey on \na_klienti kl (cost=0.00..6.53 rows=1 width=64)\n Index Cond: (kl.ids = d.ids_ko)\n -> Index Scan using a_slujiteli_pkey on \na_slujiteli sl (cost=0.00..0.32 rows=1 width=64)\n Index Cond: (sl.ids = d.ids_slu_ka)\n -> Index Scan using a_location_pkey on a_location l \n(cost=0.00..0.27 rows=1 width=64)\n Index Cond: (l.ids = s.ids_sklad)\n -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr \n(cost=0.00..2.77 rows=1 width=64)\n Index Cond: (nmgr.ids = n.ids_grupa)\n -> Index Scan using a_slujiteli_pkey on a_slujiteli slu \n(cost=0.00..8.27 rows=1 width=64)\n 
Index Cond: (slu.ids = d.ids_slu_targ)\n(31 rows)\n\n\nI cannot find the reason for this problem.\nIs it a bug or a configuration problem?\nI am running PostgreSQL on CentOS 5.2 with 8 GB RAM.\n\nRegards, Ivan.\n", "msg_date": "Sun, 13 Sep 2009 10:17:04 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "possible wrong query plan on pg 8.3.5," }, { "msg_contents": "[email protected] writes:\n> I am running a relativ complex query on pg 8.3.5 and have (possible) \n> wrong query plan.\n> ...\n> If I run the query without thle last part : and n.num like '191%' \n> it work ok as speed ~ 30 sec on not very big db.\n> If I run the full query it take very long time to go ( i never waited \n> to the end but it take > 60 min.)\n\nI'm betting that it's badly underestimating the number of rows\nsatisfying the LIKE condition:\n\n>         -> Index Scan using \ni_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128)\n>               Index Cond: \n(((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n>               Filter: \n((num)::text ~~ '191%'::text)\n\nIs 24 the right number of rows for that, or anywhere close?  If not, try\nraising the statistics target for this table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Sep 2009 15:21:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible wrong query plan on pg 8.3.5, " }, { "msg_contents": "Quote from Tom Lane <[email protected]>:\n\n> [email protected] writes:\n>> I am running a relativ complex query on pg 8.3.5 and have (possible)\n>> wrong query plan.\n>> ...\n>> If I run the query without thle last part : and n.num like '191%'\n>> it work ok as speed ~ 30 sec on not very big db.\n>> If I run the full query it take very long time to go ( i never waited\n>> to the end but it take > 60 min.)\n>\n> I'm betting that it's badly underestimating the number of rows\n> satisfying the LIKE condition:\n>\n>>         -> Index Scan using\n>> i_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128)\n>>               Index Cond:\n>> (((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n>>               Filter:\n>> ((num)::text ~~ '191%'::text)\n>\n> Is 24 the right number of rows for that, or anywhere close? If not, try\n> raising the statistics target for this table.\n>\n> \t\t\tregards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nHi Tom,\n\nYes, 24 is relatively OK (the real number is 20).\nAnd the statistics target for the database is 800 at the moment. 
If \nneedet I can set it to 1000 ( the maximum).\n\nAlso I waited to the end of this query to gather info for explain analyze.\nIt is it:\n\n explain analyze select d.ids from a_doc d join a_sklad s on \n(d.ids=s.ids_doc) join a_nomen n on (n.ids=s.ids_num) join a_nom_gr \nnmgr on (nmgr.ids=n.ids_grupa) join a_gar_prod_r gr on \n(gr.ids_a_sklad=s.ids and gr.sernum!='ok') join a_location l on \n(l.ids=s.ids_sklad) join a_klienti kl on (kl.ids=d.ids_ko) left \nouter join a_slujiteli sl on (sl.ids=d.ids_slu_ka) left outer join \na_slujiteli slu on (slu.ids=d.ids_slu_targ) where d.op=1 AND \nd.date_op >= 12320 AND d.date_op <= 12362 and n.num like '191%';\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=63.64..133732.47 rows=4 width=64) \n(actual time=616059.833..1314396.823 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133699.35 rows=4 width=128) (actual \ntime=616033.205..1313991.756 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133688.22 rows=4 width=192) \n(actual time=616033.194..1313991.058 rows=91 loops=1)\n -> Nested Loop Left Join (cost=63.64..133687.10 \nrows=4 width=256) (actual time=616033.183..1313936.577 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133685.78 rows=4 \nwidth=320) (actual time=616033.177..1313929.258 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133646.56 \nrows=6 width=384) (actual time=616007.069..1313008.701 rows=91 loops=1)\n -> Nested Loop \n(cost=63.64..127886.54 rows=2833 width=192) (actual \ntime=376.309..559763.450 rows=211357 loops=1)\n -> Nested Loop \n(cost=63.64..107934.83 rows=13709 width=256) (actual \ntime=224.058..148475.499 rows=370803 loops=1)\n -> Index Scan using \ni_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128) (actual \ntime=15.702..198.049 rows=20 loops=1)\n Index Cond: \n(((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n Filter: \n((num)::text ~~ '191%'::text)\n -> Bitmap Heap Scan on \na_sklad s (cost=63.64..4480.23 rows=1176 width=256) (actual \ntime=93.223..7398.764 rows=18540 loops=20)\n Recheck Cond: \n(s.ids_num = n.ids)\n -> Bitmap Index \nScan on i_sklad_ids_num (cost=0.00..63.34 rows=1176 width=0) (actual \ntime=78.430..78.430 rows=18540 loops=20)\n Index Cond: \n(s.ids_num = n.ids)\n -> Index Scan using \ni_a_gar_prod_r_ids_a_sklad on a_gar_prod_r gr (cost=0.00..1.44 rows=1 \nwidth=64) (actual time=1.098..1.108 rows=1 loops=370803)\n Index Cond: \n(gr.ids_a_sklad = s.ids)\n Filter: (gr.sernum <> 'ok'::text)\n -> Index Scan using a_doc_pkey on \na_doc d (cost=0.00..2.02 rows=1 width=256) (actual time=3.563..3.563 \nrows=0 loops=211357)\n Index Cond: (d.ids = s.ids_doc)\n Filter: ((d.date_op >= 12320) \nAND (d.date_op <= 12362) AND (d.op = 1))\n -> Index Scan using a_klienti_pkey on \na_klienti kl (cost=0.00..6.53 rows=1 width=64) (actual \ntime=10.109..10.113 rows=1 loops=91)\n Index Cond: (kl.ids = d.ids_ko)\n -> Index Scan using a_slujiteli_pkey on \na_slujiteli sl (cost=0.00..0.32 rows=1 width=64) (actual \ntime=0.078..0.078 rows=0 loops=91)\n Index Cond: (sl.ids = d.ids_slu_ka)\n -> Index Scan using a_location_pkey on a_location l \n(cost=0.00..0.27 rows=1 width=64) (actual time=0.596..0.597 rows=1 \nloops=91)\n Index Cond: (l.ids = s.ids_sklad)\n -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr \n(cost=0.00..2.77 rows=1 width=64) (actual time=0.005..0.006 rows=1 \nloops=91)\n Index Cond: (nmgr.ids = 
n.ids_grupa)\n -> Index Scan using a_slujiteli_pkey on a_slujiteli slu \n(cost=0.00..8.27 rows=1 width=64) (actual time=4.448..4.449 rows=1 \nloops=91)\n Index Cond: (slu.ids = d.ids_slu_targ)\n Total runtime: 1314397.153 ms\n(32 rows)\n\n\nAnd if I try this query for second time it is working very fast:\n\n\n-----------------------------------------\n Nested Loop Left Join (cost=63.64..133732.47 rows=4 width=64) \n(actual time=9438.195..29429.861 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133699.35 rows=4 width=128) (actual \ntime=9438.155..29363.045 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133688.22 rows=4 width=192) \n(actual time=9438.145..29355.229 rows=91 loops=1)\n -> Nested Loop Left Join (cost=63.64..133687.10 \nrows=4 width=256) (actual time=9438.132..29335.008 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133685.78 rows=4 \nwidth=320) (actual time=9438.128..29314.640 rows=91 loops=1)\n -> Nested Loop (cost=63.64..133646.56 \nrows=6 width=384) (actual time=9438.087..29312.490 rows=91 loops=1)\n -> Nested Loop \n(cost=63.64..127886.54 rows=2833 width=192) (actual \ntime=192.451..21060.439 rows=211357 loops=1)\n -> Nested Loop \n(cost=63.64..107934.83 rows=13709 width=256) (actual \ntime=192.367..11591.661 rows=370803 loops=1)\n -> Index Scan using \ni_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128) (actual \ntime=0.045..0.434 rows=20 loops=1)\n Index Cond: \n(((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n Filter: \n((num)::text ~~ '191%'::text)\n -> Bitmap Heap Scan on \na_sklad s (cost=63.64..4480.23 rows=1176 width=256) (actual \ntime=14.333..565.417 rows=18540 loops=20)\n Recheck Cond: \n(s.ids_num = n.ids)\n -> Bitmap Index \nScan on i_sklad_ids_num (cost=0.00..63.34 rows=1176 width=0) (actual \ntime=9.164..9.164 rows=18540 loops=20)\n Index Cond: \n(s.ids_num = n.ids)\n -> Index Scan using \ni_a_gar_prod_r_ids_a_sklad on a_gar_prod_r gr (cost=0.00..1.44 rows=1 \nwidth=64) (actual time=0.024..0.024 rows=1 loops=370803)\n Index Cond: \n(gr.ids_a_sklad = s.ids)\n Filter: (gr.sernum <> 'ok'::text)\n -> Index Scan using a_doc_pkey on \na_doc d (cost=0.00..2.02 rows=1 width=256) (actual time=0.038..0.038 \nrows=0 loops=211357)\n Index Cond: (d.ids = s.ids_doc)\n Filter: ((d.date_op >= 12320) \nAND (d.date_op <= 12362) AND (d.op = 1))\n -> Index Scan using a_klienti_pkey on \na_klienti kl (cost=0.00..6.53 rows=1 width=64) (actual \ntime=0.021..0.022 rows=1 loops=91)\n Index Cond: (kl.ids = d.ids_ko)\n -> Index Scan using a_slujiteli_pkey on \na_slujiteli sl (cost=0.00..0.32 rows=1 width=64) (actual \ntime=0.222..0.222 rows=0 loops=91)\n Index Cond: (sl.ids = d.ids_slu_ka)\n -> Index Scan using a_location_pkey on a_location l \n(cost=0.00..0.27 rows=1 width=64) (actual time=0.220..0.220 rows=1 \nloops=91)\n Index Cond: (l.ids = s.ids_sklad)\n -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr \n(cost=0.00..2.77 rows=1 width=64) (actual time=0.083..0.084 rows=1 \nloops=91)\n Index Cond: (nmgr.ids = n.ids_grupa)\n -> Index Scan using a_slujiteli_pkey on a_slujiteli slu \n(cost=0.00..8.27 rows=1 width=64) (actual time=0.731..0.732 rows=1 \nloops=91)\n Index Cond: (slu.ids = d.ids_slu_targ)\n Total runtime: 29430.170 ms\n\n\n\nAfter this I wait a little time ( ~30 min) and all works bad again.\nI think it is related to cache or not ?\n\nCan I disable using index of n.num field for this query onli ( I know \nit is wrong direction, but I have no idea how to solve this situaion) 
?\n\nRegards,\nIvan.\n\n\n\n\n\n-------------------------------------\n\n3.5 Mbps ��������� ������ �� ��������\n��������� � ��������\nwww.tooway.bg\n http://www.tooway.bg/\n\n", "msg_date": "Mon, 14 Sep 2009 09:16:32 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "> Hi Tom,\n>\n> Yes, 24 is relative ok ( the real number is 20).\n> And the statistic target for the database is 800 at the moment. If\n> needet I can set it to 1000 ( the maximum).\n>\n> Also I waited to the end of this query to gather info for explain analyze.\n> It is it:\n>\n> explain analyze select d.ids from a_doc d join a_sklad s on\n> (d.ids=s.ids_doc) join a_nomen n on (n.ids=s.ids_num) join a_nom_gr\n> nmgr on (nmgr.ids=n.ids_grupa) join a_gar_prod_r gr on\n> (gr.ids_a_sklad=s.ids and gr.sernum!='ok') join a_location l on\n> (l.ids=s.ids_sklad) join a_klienti kl on (kl.ids=d.ids_ko) left\n> outer join a_slujiteli sl on (sl.ids=d.ids_slu_ka) left outer join\n> a_slujiteli slu on (slu.ids=d.ids_slu_targ) where d.op=1 AND\n> d.date_op >= 12320 AND d.date_op <= 12362 and n.num like '191%';\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=63.64..133732.47 rows=4 width=64)\n> (actual time=616059.833..1314396.823 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133699.35 rows=4 width=128) (actual\n> time=616033.205..1313991.756 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133688.22 rows=4 width=192)\n> (actual time=616033.194..1313991.058 rows=91 loops=1)\n> -> Nested Loop Left Join (cost=63.64..133687.10\n> rows=4 width=256) (actual time=616033.183..1313936.577 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133685.78 rows=4\n> width=320) (actual time=616033.177..1313929.258 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133646.56\n> rows=6 width=384) (actual time=616007.069..1313008.701 rows=91 loops=1)\n> -> Nested Loop\n> (cost=63.64..127886.54 rows=2833 width=192) (actual\n> time=376.309..559763.450 rows=211357 loops=1)\n> -> Nested Loop\n> (cost=63.64..107934.83 rows=13709 width=256) (actual\n> time=224.058..148475.499 rows=370803 loops=1)\n> -> Index Scan using\n> i_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128) (actual\n> time=15.702..198.049 rows=20 loops=1)\n> Index Cond:\n> (((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n> Filter:\n> ((num)::text ~~ '191%'::text)\n> -> Bitmap Heap Scan on\n> a_sklad s (cost=63.64..4480.23 rows=1176 width=256) (actual\n> time=93.223..7398.764 rows=18540 loops=20)\n> Recheck Cond:\n> (s.ids_num = n.ids)\n> -> Bitmap Index\n> Scan on i_sklad_ids_num (cost=0.00..63.34 rows=1176 width=0) (actual\n> time=78.430..78.430 rows=18540 loops=20)\n> Index Cond:\n> (s.ids_num = n.ids)\n> -> Index Scan using\n> i_a_gar_prod_r_ids_a_sklad on a_gar_prod_r gr (cost=0.00..1.44 rows=1\n> width=64) (actual time=1.098..1.108 rows=1 loops=370803)\n> Index Cond:\n> (gr.ids_a_sklad = s.ids)\n> Filter: (gr.sernum <>\n> 'ok'::text)\n> -> Index Scan using a_doc_pkey on\n> a_doc d (cost=0.00..2.02 rows=1 width=256) (actual time=3.563..3.563\n> rows=0 loops=211357)\n> Index Cond: (d.ids = s.ids_doc)\n> Filter: ((d.date_op >= 12320)\n> AND (d.date_op <= 12362) AND (d.op = 1))\n> -> Index Scan using a_klienti_pkey on\n> a_klienti kl (cost=0.00..6.53 rows=1 width=64) (actual\n> time=10.109..10.113 
rows=1 loops=91)\n> Index Cond: (kl.ids = d.ids_ko)\n> -> Index Scan using a_slujiteli_pkey on\n> a_slujiteli sl (cost=0.00..0.32 rows=1 width=64) (actual\n> time=0.078..0.078 rows=0 loops=91)\n> Index Cond: (sl.ids = d.ids_slu_ka)\n> -> Index Scan using a_location_pkey on a_location l\n> (cost=0.00..0.27 rows=1 width=64) (actual time=0.596..0.597 rows=1\n> loops=91)\n> Index Cond: (l.ids = s.ids_sklad)\n> -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr\n> (cost=0.00..2.77 rows=1 width=64) (actual time=0.005..0.006 rows=1\n> loops=91)\n> Index Cond: (nmgr.ids = n.ids_grupa)\n> -> Index Scan using a_slujiteli_pkey on a_slujiteli slu\n> (cost=0.00..8.27 rows=1 width=64) (actual time=4.448..4.449 rows=1\n> loops=91)\n> Index Cond: (slu.ids = d.ids_slu_targ)\n> Total runtime: 1314397.153 ms\n> (32 rows)\n>\n>\n> And if I try this query for second time it is working very fast:\n>\n>\n> -----------------------------------------\n> Nested Loop Left Join (cost=63.64..133732.47 rows=4 width=64)\n> (actual time=9438.195..29429.861 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133699.35 rows=4 width=128) (actual\n> time=9438.155..29363.045 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133688.22 rows=4 width=192)\n> (actual time=9438.145..29355.229 rows=91 loops=1)\n> -> Nested Loop Left Join (cost=63.64..133687.10\n> rows=4 width=256) (actual time=9438.132..29335.008 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133685.78 rows=4\n> width=320) (actual time=9438.128..29314.640 rows=91 loops=1)\n> -> Nested Loop (cost=63.64..133646.56\n> rows=6 width=384) (actual time=9438.087..29312.490 rows=91 loops=1)\n> -> Nested Loop\n> (cost=63.64..127886.54 rows=2833 width=192) (actual\n> time=192.451..21060.439 rows=211357 loops=1)\n> -> Nested Loop\n> (cost=63.64..107934.83 rows=13709 width=256) (actual\n> time=192.367..11591.661 rows=370803 loops=1)\n> -> Index Scan using\n> i_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128) (actual\n> time=0.045..0.434 rows=20 loops=1)\n> Index Cond:\n> (((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n> Filter:\n> ((num)::text ~~ '191%'::text)\n> -> Bitmap Heap Scan on\n> a_sklad s (cost=63.64..4480.23 rows=1176 width=256) (actual\n> time=14.333..565.417 rows=18540 loops=20)\n> Recheck Cond:\n> (s.ids_num = n.ids)\n> -> Bitmap Index\n> Scan on i_sklad_ids_num (cost=0.00..63.34 rows=1176 width=0) (actual\n> time=9.164..9.164 rows=18540 loops=20)\n> Index Cond:\n> (s.ids_num = n.ids)\n> -> Index Scan using\n> i_a_gar_prod_r_ids_a_sklad on a_gar_prod_r gr (cost=0.00..1.44 rows=1\n> width=64) (actual time=0.024..0.024 rows=1 loops=370803)\n> Index Cond:\n> (gr.ids_a_sklad = s.ids)\n> Filter: (gr.sernum <>\n> 'ok'::text)\n> -> Index Scan using a_doc_pkey on\n> a_doc d (cost=0.00..2.02 rows=1 width=256) (actual time=0.038..0.038\n> rows=0 loops=211357)\n> Index Cond: (d.ids = s.ids_doc)\n> Filter: ((d.date_op >= 12320)\n> AND (d.date_op <= 12362) AND (d.op = 1))\n> -> Index Scan using a_klienti_pkey on\n> a_klienti kl (cost=0.00..6.53 rows=1 width=64) (actual\n> time=0.021..0.022 rows=1 loops=91)\n> Index Cond: (kl.ids = d.ids_ko)\n> -> Index Scan using a_slujiteli_pkey on\n> a_slujiteli sl (cost=0.00..0.32 rows=1 width=64) (actual\n> time=0.222..0.222 rows=0 loops=91)\n> Index Cond: (sl.ids = d.ids_slu_ka)\n> -> Index Scan using a_location_pkey on a_location l\n> (cost=0.00..0.27 rows=1 width=64) (actual time=0.220..0.220 rows=1\n> loops=91)\n> Index Cond: (l.ids = s.ids_sklad)\n> -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr\n> 
(cost=0.00..2.77 rows=1 width=64) (actual time=0.083..0.084 rows=1\n> loops=91)\n> Index Cond: (nmgr.ids = n.ids_grupa)\n> -> Index Scan using a_slujiteli_pkey on a_slujiteli slu\n> (cost=0.00..8.27 rows=1 width=64) (actual time=0.731..0.732 rows=1\n> loops=91)\n> Index Cond: (slu.ids = d.ids_slu_targ)\n> Total runtime: 29430.170 ms\n>\n>\n>\n> After this I wait a little time ( ~30 min) and all works bad again.\n> I think it is related to cache or not ?\n>\n> Can I disable using index of n.num field for this query onli ( I know\n> it is wrong direction, but I have no idea how to solve this situaion) ?\n>\n> Regards,\n> Ivan.\n\nI have entered the two execution plans to explain.depesz.com - the results\nare here:\n\nslow (first execution): http://explain.depesz.com/s/tvd\nfast (second execution): http://explain.depesz.com/s/3C\n\nIt seems there's something very wrong - the plans are \"equal\" but in the\nfirst case the results (actual time) are multiplied by 100. Eithere there\nis some sort of cache (so the second execution is much faster), or the\nsystem was busy during the first execution, or there is something wrong\nwith the hardware.\n\nregards\nTom (not Lane)\n\n", "msg_date": "Mon, 14 Sep 2009 11:32:45 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "Hi ,\n\n\n\n>> Hi Tom,\n>>\n>> Yes, 24 is relative ok ( the real number is 20).\n>> And the statistic target for the database is 800 at the moment. If\n>> needet I can set it to 1000 ( the maximum).\n>>\n>> Also I waited to the end of this query to gather info for explain analyze.\n>> It is it:\n>>\n>> explain analyze select d.ids from a_doc d join a_sklad s on\n>> (d.ids=s.ids_doc) join a_nomen n on (n.ids=s.ids_num) join a_nom_gr\n>> nmgr on (nmgr.ids=n.ids_grupa) join a_gar_prod_r gr on\n>> (gr.ids_a_sklad=s.ids and gr.sernum!='ok') join a_location l on\n>> (l.ids=s.ids_sklad) join a_klienti kl on (kl.ids=d.ids_ko) left\n>> outer join a_slujiteli sl on (sl.ids=d.ids_slu_ka) left outer join\n>> a_slujiteli slu on (slu.ids=d.ids_slu_targ) where d.op=1 AND\n>> d.date_op >= 12320 AND d.date_op <= 12362 and n.num like '191%';\n>>\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Nested Loop Left Join (cost=63.64..133732.47 rows=4 width=64)\n>> (actual time=616059.833..1314396.823 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133699.35 rows=4 width=128) (actual\n>> time=616033.205..1313991.756 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133688.22 rows=4 width=192)\n>> (actual time=616033.194..1313991.058 rows=91 loops=1)\n>> -> Nested Loop Left Join (cost=63.64..133687.10\n>> rows=4 width=256) (actual time=616033.183..1313936.577 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133685.78 rows=4\n>> width=320) (actual time=616033.177..1313929.258 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133646.56\n>> rows=6 width=384) (actual time=616007.069..1313008.701 rows=91 loops=1)\n>> -> Nested Loop\n>> (cost=63.64..127886.54 rows=2833 width=192) (actual\n>> time=376.309..559763.450 rows=211357 loops=1)\n>> -> Nested Loop\n>> (cost=63.64..107934.83 rows=13709 width=256) (actual\n>> time=224.058..148475.499 rows=370803 loops=1)\n>> -> Index Scan using\n>> i_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128) (actual\n>> time=15.702..198.049 rows=20 loops=1)\n>> Index 
Cond:\n>> (((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n>> Filter:\n>> ((num)::text ~~ '191%'::text)\n>> -> Bitmap Heap Scan on\n>> a_sklad s (cost=63.64..4480.23 rows=1176 width=256) (actual\n>> time=93.223..7398.764 rows=18540 loops=20)\n>> Recheck Cond:\n>> (s.ids_num = n.ids)\n>> -> Bitmap Index\n>> Scan on i_sklad_ids_num (cost=0.00..63.34 rows=1176 width=0) (actual\n>> time=78.430..78.430 rows=18540 loops=20)\n>> Index Cond:\n>> (s.ids_num = n.ids)\n>> -> Index Scan using\n>> i_a_gar_prod_r_ids_a_sklad on a_gar_prod_r gr (cost=0.00..1.44 rows=1\n>> width=64) (actual time=1.098..1.108 rows=1 loops=370803)\n>> Index Cond:\n>> (gr.ids_a_sklad = s.ids)\n>> Filter: (gr.sernum <>\n>> 'ok'::text)\n>> -> Index Scan using a_doc_pkey on\n>> a_doc d (cost=0.00..2.02 rows=1 width=256) (actual time=3.563...3.563\n>> rows=0 loops=211357)\n>> Index Cond: (d.ids = s.ids_doc)\n>> Filter: ((d.date_op >= 12320)\n>> AND (d.date_op <= 12362) AND (d.op = 1))\n>> -> Index Scan using a_klienti_pkey on\n>> a_klienti kl (cost=0.00..6.53 rows=1 width=64) (actual\n>> time=10.109..10.113 rows=1 loops=91)\n>> Index Cond: (kl.ids = d.ids_ko)\n>> -> Index Scan using a_slujiteli_pkey on\n>> a_slujiteli sl (cost=0.00..0.32 rows=1 width=64) (actual\n>> time=0.078..0.078 rows=0 loops=91)\n>> Index Cond: (sl.ids = d.ids_slu_ka)\n>> -> Index Scan using a_location_pkey on a_location l\n>> (cost=0.00..0.27 rows=1 width=64) (actual time=0.596..0.597 rows=1\n>> loops=91)\n>> Index Cond: (l.ids = s.ids_sklad)\n>> -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr\n>> (cost=0.00..2.77 rows=1 width=64) (actual time=0.005..0.006 rows=1\n>> loops=91)\n>> Index Cond: (nmgr.ids = n.ids_grupa)\n>> -> Index Scan using a_slujiteli_pkey on a_slujiteli slu\n>> (cost=0.00..8.27 rows=1 width=64) (actual time=4.448..4.449 rows=1\n>> loops=91)\n>> Index Cond: (slu.ids = d.ids_slu_targ)\n>> Total runtime: 1314397.153 ms\n>> (32 rows)\n>>\n>>\n>> And if I try this query for second time it is working very fast:\n>>\n>>\n>> -----------------------------------------\n>> Nested Loop Left Join (cost=63.64..133732.47 rows=4 width=64)\n>> (actual time=9438.195..29429.861 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133699.35 rows=4 width=128) (actual\n>> time=9438.155..29363.045 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133688.22 rows=4 width=192)\n>> (actual time=9438.145..29355.229 rows=91 loops=1)\n>> -> Nested Loop Left Join (cost=63.64..133687.10\n>> rows=4 width=256) (actual time=9438.132..29335.008 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133685.78 rows=4\n>> width=320) (actual time=9438.128..29314.640 rows=91 loops=1)\n>> -> Nested Loop (cost=63.64..133646.56\n>> rows=6 width=384) (actual time=9438.087..29312.490 rows=91 loops=1)\n>> -> Nested Loop\n>> (cost=63.64..127886.54 rows=2833 width=192) (actual\n>> time=192.451..21060.439 rows=211357 loops=1)\n>> -> Nested Loop\n>> (cost=63.64..107934.83 rows=13709 width=256) (actual\n>> time=192.367..11591.661 rows=370803 loops=1)\n>> -> Index Scan using\n>> i_nomen_num on a_nomen n (cost=0.00..56.39 rows=24 width=128) (actual\n>> time=0.045..0.434 rows=20 loops=1)\n>> Index Cond:\n>> (((num)::text >= '191'::text) AND ((num)::text < '192'::text))\n>> Filter:\n>> ((num)::text ~~ '191%'::text)\n>> -> Bitmap Heap Scan on\n>> a_sklad s (cost=63.64..4480.23 rows=1176 width=256) (actual\n>> time=14.333..565.417 rows=18540 loops=20)\n>> Recheck Cond:\n>> (s.ids_num = n.ids)\n>> -> Bitmap Index\n>> Scan on i_sklad_ids_num (cost=0.00..63.34 rows=1176 width=0) (actual\n>> 
time=9.164..9.164 rows=18540 loops=20)\n>> Index Cond:\n>> (s.ids_num = n.ids)\n>> -> Index Scan using\n>> i_a_gar_prod_r_ids_a_sklad on a_gar_prod_r gr (cost=0.00..1.44 rows=1\n>> width=64) (actual time=0.024..0.024 rows=1 loops=370803)\n>> Index Cond:\n>> (gr.ids_a_sklad = s.ids)\n>> Filter: (gr.sernum <>\n>> 'ok'::text)\n>> -> Index Scan using a_doc_pkey on\n>> a_doc d (cost=0.00..2.02 rows=1 width=256) (actual time=0.038...0.038\n>> rows=0 loops=211357)\n>> Index Cond: (d.ids = s.ids_doc)\n>> Filter: ((d.date_op >= 12320)\n>> AND (d.date_op <= 12362) AND (d.op = 1))\n>> -> Index Scan using a_klienti_pkey on\n>> a_klienti kl (cost=0.00..6.53 rows=1 width=64) (actual\n>> time=0.021..0.022 rows=1 loops=91)\n>> Index Cond: (kl.ids = d.ids_ko)\n>> -> Index Scan using a_slujiteli_pkey on\n>> a_slujiteli sl (cost=0.00..0.32 rows=1 width=64) (actual\n>> time=0.222..0.222 rows=0 loops=91)\n>> Index Cond: (sl.ids = d.ids_slu_ka)\n>> -> Index Scan using a_location_pkey on a_location l\n>> (cost=0.00..0.27 rows=1 width=64) (actual time=0.220..0.220 rows=1\n>> loops=91)\n>> Index Cond: (l.ids = s.ids_sklad)\n>> -> Index Scan using a_nom_gr_pkey on a_nom_gr nmgr\n>> (cost=0.00..2.77 rows=1 width=64) (actual time=0.083..0.084 rows=1\n>> loops=91)\n>> Index Cond: (nmgr.ids = n.ids_grupa)\n>> -> Index Scan using a_slujiteli_pkey on a_slujiteli slu\n>> (cost=0.00..8.27 rows=1 width=64) (actual time=0.731..0.732 rows=1\n>> loops=91)\n>> Index Cond: (slu.ids = d.ids_slu_targ)\n>> Total runtime: 29430.170 ms\n>>\n>>\n>>\n>> After this I wait a little time ( ~30 min) and all works bad again.\n>> I think it is related to cache or not ?\n>>\n>> Can I disable using index of n.num field for this query onli ( I know\n>> it is wrong direction, but I have no idea how to solve this situaion) ?\n>>\n>> Regards,\n>> Ivan.\n>\n> I have entered the two execution plans to explain.depesz.com - the results\n> are here:\n>\n> slow (first execution): http://explain.depesz.com/s/tvd\n> fast (second execution): http://explain.depesz.com/s/3C\n>\n> It seems there's something very wrong - the plans are \"equal\" but in the\n> first case the results (actual time) are multiplied by 100. Eithere there\n> is some sort of cache (so the second execution is much faster), or the\n> system was busy during the first execution, or there is something wrong\n> with the hardware.\n>\nYes, I think it is the cache.\nI am running the first and the second query on my test server and no \none is working on it at this time.\nMy general problem is tath the query without and n.num like '191%' \nis much faster.\nI detect the same also with many other field on the n-table.\n\nI think planer expect to find all the records in cache but it is not there.\nCan I tune some parameters in postgres.conf (effective_cache_size for \nexample) ?\n\n> regards\n> Tom (not Lane)\n>\n>\n>\nregards, Ivan.\n\n\n-------------------------------------\nPowered by Mail.BG - http://mail.bg\n\n", "msg_date": "Mon, 14 Sep 2009 13:49:39 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "2009/9/14 <[email protected]>:\n> It seems there's something very wrong - the plans are \"equal\" but in the\n> first case the results (actual time) are multiplied by 100. 
Eithere there\n> is some sort of cache (so the second execution is much faster), or the\n> system was busy during the first execution, or there is something wrong\n> with the hardware.\n\nI think you should run this query more than twice.  If it's slow the\nfirst time and fast every time for many executions after that, then\nit's probably just the data getting loaded into the OS cache (or\nshared buffers).  If it's bouncing back and forth between fast and\nslow, you might want to check whether your machine is swapping.\n\nIt might also be helpful to post all the uncommented settings from\nyour postgresql.conf file.\n\n...Robert\n", "msg_date": "Mon, 14 Sep 2009 09:38:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "Quote from Robert Haas <[email protected]>:\n\n> 2009/9/14 <[email protected]>:\n>> It seems there's something very wrong - the plans are \"equal\" but in the\n>> first case the results (actual time) are multiplied by 100. Eithere there\n>> is some sort of cache (so the second execution is much faster), or the\n>> system was busy during the first execution, or there is something wrong\n>> with the hardware.\n>\n> I think you should run this query more than twice.  If it's slow the\n> first time and fast every time for many executions after that, then\n> it's probably just the data getting loaded into the OS cache (or\n> shared buffers).  If it's bouncing back and forth between fast and\n> slow, you might want to check whether your machine is swapping.\n\nI did it many times. After the first attempt it works fast, but after a \ncouple of minutes (I think after the data in the cache changes) the \nquery becomes very slow again.\n\nI do not see any swapping on the OS.\n\n>\n> It might also be helpful to post all the uncommented settings from\n> your postgresql.conf file.\n\npostgresql.conf :\n\nmax_connections = 2000\nshared_buffers = 1800MB\ntemp_buffers = 80MB\nwork_mem = 120MB\nmaintenance_work_mem = 100MB\nmax_fsm_pages = 404800\nmax_fsm_relations = 5000\n\nmax_files_per_process = 2000\nwal_buffers = 64MB\ncheckpoint_segments = 30\neffective_cache_size = 5000MB\ndefault_statistics_target = 800\n\nAll the rest are default parameters.\n\nIvan.\n\n\n>\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n", "msg_date": "Mon, 14 Sep 2009 17:17:15 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "2009/9/14 <[email protected]>:\n> Quote from Robert Haas <[email protected]>:\n>\n>> 2009/9/14  <[email protected]>:\n>>>\n>>> It seems there's something very wrong - the plans are \"equal\" but in the\n>>> first case the results (actual time) are multiplied by 100. Eithere there\n>>> is some sort of cache (so the second execution is much faster), or the\n>>> system was busy during the first execution, or there is something wrong\n>>> with the hardware.\n>>\n>> I think you should run this query more than twice.  If it's slow the\n>> first time and fast every time for many executions after that, then\n>> it's probably just the data getting loaded into the OS cache (or\n>> shared buffers).  
If it's bouncing back and forth between fast and\n>> slow, you might want to check whether your machine is swapping.\n>\n> I did it many times. Alter the first atempt it works fast, but after a\n> couple of minutes ( I think after changing the data in cache) the query is\n> working also very slow.\n>\n> I do not see any swap on OS.\n>\n>>\n>> It might also be helpful to post all the uncommented settings from\n>> your postgresql.conf file.\n>\n> postgresql.conf :\n>\n> max_connections = 2000\n> shared_buffers = 1800MB\n> temp_buffers = 80MB\n> work_mem = 120MB\n>\n> maintenance_work_mem = 100MB\n> max_fsm_pages = 404800\n> max_fsm_relations = 5000\n>\n> max_files_per_process = 2000\n> wal_buffers = 64MB\n> checkpoint_segments = 30\n> effective_cache_size = 5000MB\n> default_statistics_target = 800\n\nI think you're exhausting the physical memory on your machine. How\nmuch RAM do you have? How many active connections at one time? 120MB\nis a HUGE value for work_mem. I would try reducing that to, say, 4\nMB, and see what happens. Your setting for temp_buffers also seems\nway too high. I would put that one back to the default, at least for\nstarters. And for that matter, why have you increased the value for\nwal_buffers to over 1000 times the default value?\n\nThe reason you may not be seeing evidence of swapping is that it may\nbe happening quite briefly during query execution. But I have to\nthink it's happening, because otherwise the performance drop-off is\nhard to account for.\n\n...Robert\n", "msg_date": "Mon, 14 Sep 2009 11:30:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "2009/9/14 <[email protected]>:\n> Also I waited to the end of this query to gather info for explain analyze.\n> It is it:\n>\n>  explain analyze  select d.ids from a_doc d  join a_sklad s on\n> (d.ids=s.ids_doc)  join a_nomen n on (n.ids=s.ids_num)  join a_nom_gr nmgr\n> on (nmgr.ids=n.ids_grupa)  join a_gar_prod_r gr on (gr.ids_a_sklad=s.ids and\n> gr.sernum!='ok')  join a_location l on (l.ids=s.ids_sklad)  join a_klienti\n> kl on (kl.ids=d.ids_ko)  left outer join a_slujiteli sl on\n> (sl.ids=d.ids_slu_ka)  left outer join a_slujiteli slu on\n> (slu.ids=d.ids_slu_targ)  where d.op=1  AND d.date_op >= 12320 AND d.date_op\n> <= 12362 and n.num like '191%';\n>\n>             QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Nested Loop Left Join  (cost=63.64..133732.47 rows=4 width=64) (actual\n> time=616059.833..1314396.823 rows=91 loops=1)\n>   ->  Nested Loop  (cost=63.64..133699.35 rows=4 width=128) (actual\n> time=616033.205..1313991.756 rows=91 loops=1)\n>         ->  Nested Loop  (cost=63.64..133688.22 rows=4 width=192) (actual\n> time=616033.194..1313991.058 rows=91 loops=1)\n>               ->  Nested Loop Left Join  (cost=63.64..133687.10 rows=4\n> width=256) (actual time=616033.183..1313936.577 rows=91 loops=1)\n>                     ->  Nested Loop  (cost=63.64..133685.78 rows=4\n> width=320) (actual time=616033.177..1313929.258 rows=91 loops=1)\n>                           ->  Nested Loop  (cost=63.64..133646.56 rows=6\n> width=384) (actual time=616007.069..1313008.701 rows=91 loops=1)\n>                                 ->  Nested Loop  (cost=63.64..127886.54\n> rows=2833 width=192) (actual time=376.309..559763.450 rows=211357 loops=1)\n>  
                                     ->  Nested Loop\n>  (cost=63.64..107934.83 rows=13709 width=256) (actual\n> time=224.058..148475.499 rows=370803 loops=1)\n>                                             ->  Index Scan using i_nomen_num\n\nThis nested loop looks like the big problem, although it could also be\nthat it's running an index scan earlier that should be a seq scan\ngiven the amount the estimate is off on rows.\n\nFor grins, try running your query after issuing this command:\n\nset enable_nestloop=off;\n\nand see what the run time looks like.\n", "msg_date": "Mon, 14 Sep 2009 09:51:39 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "Quote from Scott Marlowe <[email protected]>:\n\n> 2009/9/14 <[email protected]>:\n>> Also I waited to the end of this query to gather info for explain analyze.\n>> It is it:\n>>\n>>  explain analyze  select d.ids from a_doc d  join a_sklad s on\n>> (d.ids=s.ids_doc)  join a_nomen n on (n.ids=s.ids_num)  join a_nom_gr nmgr\n>> on (nmgr.ids=n.ids_grupa)  join a_gar_prod_r gr on (gr.ids_a_sklad=s.ids and\n>> gr.sernum!='ok')  join a_location l on (l.ids=s.ids_sklad)  join a_klienti\n>> kl on (kl.ids=d.ids_ko)  left outer join a_slujiteli sl on\n>> (sl.ids=d.ids_slu_ka)  left outer join a_slujiteli slu on\n>> (slu.ids=d.ids_slu_targ)  where d.op=1  AND d.date_op >= 12320 AND d.date_op\n>> <= 12362 and n.num like '191%';\n>>\n>>              QUERY PLAN\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>  Nested Loop Left Join  (cost=63.64..133732.47 rows=4 width=64)\n>> (actual time=616059.833..1314396.823 rows=91 loops=1)\n>>    ->  Nested Loop  (cost=63.64..133699.35 rows=4 width=128) (actual\n>> time=616033.205..1313991.756 rows=91 loops=1)\n>>          ->  Nested Loop  (cost=63.64..133688.22 rows=4 width=192) (actual\n>> time=616033.194..1313991.058 rows=91 loops=1)\n>>                ->  Nested Loop Left Join  (cost=63.64..133687.10 rows=4\n>> width=256) (actual time=616033.183..1313936.577 rows=91 loops=1)\n>>                      ->  Nested Loop  (cost=63.64..133685.78 rows=4\n>> width=320) (actual time=616033.177..1313929.258 rows=91 loops=1)\n>>                            ->  Nested Loop  (cost=63.64..133646.56 rows=6\n>> width=384) (actual time=616007.069..1313008.701 rows=91 loops=1)\n>>                                  ->  Nested Loop  (cost=63.64..127886.54\n>> rows=2833 width=192) (actual time=376.309..559763.450 rows=211357 loops=1)\n>>                                        ->  Nested Loop\n>>  (cost=63.64..107934.83 rows=13709 width=256) (actual\n>> time=224.058..148475.499 rows=370803 loops=1)\n>>                                              ->  Index Scan using i_nomen_num\n>\n> This nested loop looks like the big problem, although it could also be\n> that it's running an index scan earlier that should be a seq scan\n> given the amount the estimate is off on rows.\n>\n> For grins, try running your query after issuing this command:\n>\n> set enable_nestloop=off;\n>\n> and see what the run time looks like.\n>\n>\n\nHi Scott,\n\nafter set enable_nestloop=off, this is the new plan (and the speed is \nrelatively good):\n\n Hash Left Join (cost=647541.56..804574.64 rows=4 width=64) (actual \ntime=40535.547..40554.502 rows=91 loops=1)\n Hash Cond: (d.ids_slu_targ = slu.ids)\n -> Hash Join 
(cost=647442.94..804475.96 rows=4 width=128) \n(actual time=40533.886..40552.729 rows=91 loops=1)\n Hash Cond: (n.ids_grupa = nmgr.ids)\n -> Hash Join (cost=647425.37..804458.34 rows=4 width=192) \n(actual time=40533.354..40552.112 rows=91 loops=1)\n Hash Cond: (s.ids_sklad = l.ids)\n -> Hash Left Join (cost=647401.65..804434.56 rows=4 \nwidth=256) (actual time=40532.880..40551.540 rows=91 loops=1)\n Hash Cond: (d.ids_slu_ka = sl.ids)\n -> Hash Join (cost=647303.03..804335.91 rows=4 \nwidth=320) (actual time=40530.704..40549.279 rows=91 loops=1)\n Hash Cond: (d.ids_ko = kl.ids)\n -> Hash Join (cost=592217.17..749249.95 \nrows=6 width=384) (actual time=37874.787..37893.110 rows=91 loops=1)\n Hash Cond: (gr.ids_a_sklad = s.ids)\n -> Seq Scan on a_gar_prod_r gr \n(cost=0.00..152866.95 rows=1110870 width=64) (actual \ntime=8.596..5839.771 rows=1112081 loops=1)\n Filter: (sernum <> 'ok'::text)\n -> Hash (cost=592216.84..592216.84 \nrows=27 width=448) (actual time=31275.699..31275.699 rows=193 loops=1)\n -> Hash Join \n(cost=37061.98..592216.84 rows=27 width=448) (actual \ntime=6046.588..31275.047 rows=193 loops=1)\n Hash Cond: (s.ids_doc = d.ids)\n -> Hash Join \n(cost=52.77..555070.26 rows=13709 width=256) (actual \ntime=19.962..30406.478 rows=370803 loops=1)\n Hash Cond: \n(s.ids_num = n.ids)\n -> Seq Scan on \na_sklad s (cost=0.00..534721.93 rows=5375593 width=256) (actual \ntime=5.867..27962.054 rows=5375690 loops=1)\n -> Hash \n(cost=52.47..52.47 rows=24 width=128) (actual time=0.299..0.299 \nrows=20 loops=1)\n -> Bitmap \nHeap Scan on a_nomen n (cost=4.39..52.47 rows=24 width=128) (actual \ntime=0.061..0.276 rows=20 loops=1)\n \nFilter: ((num)::text ~~ '191%'::text)\n -> \nBitmap Index Scan on i_nomen_num (cost=0.00..4.38 rows=13 width=0) \n(actual time=0.043..0.043 rows=20 loops=1)\n \nIndex Cond: (((num)::text >= '191'::text) AND ((num)::text < \n'192'::text))\n -> Hash \n(cost=36926.74..36926.74 rows=6598 width=256) (actual \ntime=485.920..485.920 rows=8191 loops=1)\n -> Bitmap Heap \nScan on a_doc d (cost=223.17..36926.74 rows=6598 width=256) (actual \ntime=55.896..477.811 rows=8191 loops=1)\n Recheck \nCond: ((date_op >= 12320) AND (date_op <= 12362))\n Filter: (op = 1)\n -> Bitmap \nIndex Scan on i_doc_date_op (cost=0.00..221.52 rows=10490 width=0) \n(actual time=46.639..46.639 rows=11265 loops=1)\n Index \nCond: ((date_op >= 12320) AND (date_op <= 12362))\n -> Hash (cost=49563.16..49563.16 \nrows=441816 width=64) (actual time=2655.370..2655.370 rows=441806 \nloops=1)\n -> Seq Scan on a_klienti kl \n(cost=0.00..49563.16 rows=441816 width=64) (actual \ntime=10.237..2334.909 rows=441806 loops=1)\n -> Hash (cost=77.72..77.72 rows=1672 width=64) \n(actual time=2.138..2.138 rows=1672 loops=1)\n -> Seq Scan on a_slujiteli sl \n(cost=0.00..77.72 rows=1672 width=64) (actual time=0.019..1.005 \nrows=1672 loops=1)\n -> Hash (cost=19.43..19.43 rows=343 width=64) \n(actual time=0.464..0.464 rows=343 loops=1)\n -> Seq Scan on a_location l (cost=0.00..19.43 \nrows=343 width=64) (actual time=0.012..0.263 rows=343 loops=1)\n -> Hash (cost=12.81..12.81 rows=381 width=64) (actual \ntime=0.493..0.493 rows=381 loops=1)\n -> Seq Scan on a_nom_gr nmgr (cost=0.00..12.81 \nrows=381 width=64) (actual time=0.024..0.276 rows=381 loops=1)\n -> Hash (cost=77.72..77.72 rows=1672 width=64) (actual \ntime=1.633..1.633 rows=1672 loops=1)\n -> Seq Scan on a_slujiteli slu (cost=0.00..77.72 rows=1672 \nwidth=64) (actual time=0.004..0.674 rows=1672 loops=1)\n Total runtime: 40565.832 
ms\n\nregards,\nIvan.\n", "msg_date": "Mon, 14 Sep 2009 19:07:09 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "Quote from Robert Haas <[email protected]>:\n\n> 2009/9/14 <[email protected]>:\n>> Quote from Robert Haas <[email protected]>:\n>>\n>>> 2009/9/14 <[email protected]>:\n>>>>\n>>>> It seems there's something very wrong - the plans are \"equal\" but in the\n>>>> first case the results (actual time) are multiplied by 100. Eithere there\n>>>> is some sort of cache (so the second execution is much faster), or the\n>>>> system was busy during the first execution, or there is something wrong\n>>>> with the hardware.\n>>>\n>>> I think you should run this query more than twice. If it's slow the\n>>> first time and fast every time for many executions after that, then\n>>> it's probably just the data getting loaded into the OS cache (or\n>>> shared buffers). If it's bouncing back and forth between fast and\n>>> slow, you might want to check whether your machine is swapping.\n>>\n>> I did it many times. After the first attempt it works fast, but after a\n>> couple of minutes (I think after the data in the cache changes) the query\n>> becomes very slow again.\n>>\n>> I do not see any swapping on the OS.\n>>\n>>>\n>>> It might also be helpful to post all the uncommented settings from\n>>> your postgresql.conf file.\n>>\n>> postgresql.conf :\n>>\n>> max_connections = 2000\n>> shared_buffers = 1800MB\n>> temp_buffers = 80MB\n>> work_mem = 120MB\n>>\n>> maintenance_work_mem = 100MB\n>> max_fsm_pages = 404800\n>> max_fsm_relations = 5000\n>>\n>> max_files_per_process = 2000\n>> wal_buffers = 64MB\n>> checkpoint_segments = 30\n>> effective_cache_size = 5000MB\n>> default_statistics_target = 800\n>\n> I think you're exhausting the physical memory on your machine. How\n> much RAM do you have? How many active connections at one time? 120MB\n> is a HUGE value for work_mem. I would try reducing that to, say, 4\n> MB, and see what happens. Your setting for temp_buffers also seems\n> way too high. I would put that one back to the default, at least for\n> starters. And for that matter, why have you increased the value for\n> wal_buffers to over 1000 times the default value?\n>\n\nWe have 8 GB RAM, running CentOS 64-bit, and ~10 to 15 active \nconnections (using a connection pool).\n120 MB for work_mem is good for us. If I drop this value I get very \nbad performance for the whole system.\n\nI will try to reduce wal_buffers (is this value connected to RAM usage?).\n\n\n\n> The reason you may not be seeing evidence of swapping is that it may\n> be happening quite briefly during query execution. But I have to\n> think it's happening, because otherwise the performance drop-off is\n> hard to account for.\n>\nOn Linux, once swap has been used, the OS does not free it again, so it \nwould still be visible. And I do not see any swap usage on the OS. 
Here is vmstat from the server:\n 0 0 1388 44852 25160 6225316 0 0 304 0 1018 201 0 \n 0 100 0 0\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n-----cpu------\n r b swpd free buff cache si so bi bo in cs us \nsy id wa st\n 0 0 1388 47612 25148 6222364 0 0 332 4 1015 194 0 \n 0 100 0 0\n 0 0 1388 47072 25156 6222900 0 0 268 8 1015 190 0 \n 0 100 0 0\n 0 0 1388 46532 25160 6223656 0 0 270 0 1014 194 0 \n 0 100 0 0\n\n\n\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n", "msg_date": "Mon, 14 Sep 2009 19:13:53 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "May be you have very bad disk access times (e.g. slow random access)? In\nthis case everything should be OK while data in cache and awful, when not.\nCould you check disk IO speed && IO wait while doing slow & fast query.\n\nBTW: In this case, increasing shared buffers may help. At least this will\nprevent other applications & AFAIK sequence scans to move your index data\nfrom cache.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Tue, 15 Sep 2009 09:32:26 +0300", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," }, { "msg_contents": "Quote from Vitalii Tymchyshyn <[email protected]>:\n\n> May be you have very bad disk access times (e.g. slow random access)? In\n> this case everything should be OK while data in cache and awful, when not.\n> Could you check disk IO speed && IO wait while doing slow & fast query.\n>\n\nNo, I think all is OK with the disks. On my test server I have 8 SATA disks in \nRAID 10, and on my production server I have 16 SATA disks in RAID 10 dedicated \nto the pg data plus another 8 SATA disks in RAID 10 for the OS and pg_xlog, and I do \nnot see any IO wait.\nIt is true that disks are much slower compared to RAM.\n\n> BTW: In this case, increasing shared buffers may help. At least this will\n> prevent other applications & AFAIK sequence scans to move your index data\n> from cache.\n\nI will try to increase this value.\nI think the recommendation in the docs was 1/4 of RAM, and on the production \nserver I have it set to 1/4 of RAM (32 GB).\n\nWon't the OS cache the data from shared buffers a second time?\n\nThe next step will be to move to pg 8.4, but it will take time for testing.\n\n>\n> Best regards, Vitalii Tymchyshyn\n>\n\nregards,\nivan.\n", "msg_date": "Tue, 15 Sep 2009 13:09:54 +0300", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: possible wrong query plan on pg 8.3.5," } ]
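As a rough summary of the advice in the thread above, the suggested experiments can be tried in a single session without editing postgresql.conf. This is only a sketch: the 8MB value is illustrative rather than a recommendation for this particular server, the shortened query is a placeholder for the full query posted in the thread, and a_sklad.ids_num is named because its row estimate (about 1176 estimated vs about 18540 actual) is the one that is badly off in the posted plans:

    SET work_mem = '8MB';   -- per-operation memory; Robert's concern is 120MB per sort/hash across many connections

    -- Raise the statistics target only for the column whose estimate is off, then re-analyze:
    ALTER TABLE a_sklad ALTER COLUMN ids_num SET STATISTICS 1000;
    ANALYZE a_sklad;

    -- One-off planner experiment, per Scott Marlowe's suggestion, without changing the session permanently:
    BEGIN;
    SET LOCAL enable_nestloop = off;
    EXPLAIN ANALYZE SELECT d.ids FROM a_doc d JOIN a_sklad s ON d.ids = s.ids_doc
    WHERE d.op = 1 AND d.date_op BETWEEN 12320 AND 12362;   -- substitute the full query from the thread
    ROLLBACK;

SET LOCAL keeps the planner override confined to the transaction, so the experiment cannot leak into other work on the same connection.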
[ { "msg_contents": "Hi!\n\nYesterday I Clustered one big table (# CLUSTER kredyty USING kredyty_pkey;)\nand today one query is extremely slow.\n\nquery:\nSELECT telekredytid FROM kredytyag \nWHERE TRUE \nAND kredytyag.id = 3064776\nAND NOT EXISTS\n (\n SELECT 1 FROM\n (\n SELECT * FROM kredyty kr\n where telekredytid = 328652 \n ORDER BY kr.datazaw DESC LIMIT 1\n )\n kred where kred.bank = 2)\n\nPlan looks strange for me:\n\n\"Result (cost=701.54..709.84 rows=1 width=4)\"\n\" One-Time Filter: (NOT $0)\"\n\" InitPlan\"\n\" -> Subquery Scan kred (cost=0.00..701.54 rows=1 width=0)\"\n\" Filter: (kred.bank = 2)\"\n\" -> Limit (cost=0.00..701.52 rows=1 width=3902)\"\n\" -> Index Scan Backward using kredyty_datazaw on\nkredyty kr (cost=0.00..1067719.61 rows=1522 width=3902)\"\n\" Filter: (telekredytid = 328652)\"\n\" -> Index Scan using kredytyag_pkey on kredytyag (cost=0.00..8.30\nrows=1 width=4)\"\n\" Index Cond: (id = 3064776)\"\n\nThis Index skan on kredyty_datazaw and filter telekredytid cost a lot\nof... but why not use kredyty_telekredytid_idx?\n\nBefore Cluster was (or similar):\n\n\"Result (cost=78.98..85.28 rows=1 width=4)\"\n\" One-Time Filter: (NOT $0)\"\n\" InitPlan 1 (returns $0)\"\n\" -> Subquery Scan kred (cost=78.97..78.98 rows=1 width=0)\"\n\" Filter: (kred.bank = 2)\"\n\" -> Limit (cost=78.97..78.97 rows=1 width=3910)\"\n\" -> Sort (cost=78.97..79.20 rows=94 width=3910)\"\n\" Sort Key: kr.datazaw\"\n\" -> Index Scan using kredyty_telekredytid_idx on\nkredyty kr (cost=0.00..78.50 rows=94 width=3910)\"\n\" Index Cond: (telekredytid = 328652)\"\n\" -> Index Scan using kredytyag_pkey on kredytyag (cost=0.00..6.30\nrows=1 width=4)\"\n\" Index Cond: (id = 3064776)\"\n\nI've chosen bad index?\n\n-- \nAndrzej Zawadzki\n", "msg_date": "Mon, 14 Sep 2009 16:19:02 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "CLUSTER and a problem" }, { "msg_contents": "Andrzej,\n\nPlease post a table & index schema, and an EXPLAIN ANALYZE rather than\njust an EXPLAIN. Thanks!\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Mon, 14 Sep 2009 11:05:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLUSTER and a problem" }, { "msg_contents": "Josh Berkus wrote:\n> Andrzej,\n>\n> Please post a table & index schema, and an EXPLAIN ANALYZE rather than\n> just an EXPLAIN. 
Thanks!\n> \nEXPLAIN ANALYZE is taking too much time ;-) but now database is free so:\n\n# EXPLAIN ANALYZE SElect telekredytid from kredytyag\nWHERE TRUE\nAND kredytyag.id = 3064776\nAND NOT EXISTS\n(SELECT 1 FROM\n( SELECT * FROM kredyty kr\nwhere telekredytid = 328650\nORDER BY kr.datazaw DESC LIMIT 1 )\nkred where kred.bank = 2);\n \nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------\n Result (cost=778.06..786.36 rows=1 width=4) (actual\ntime=2045567.930..2045567.930 rows=0 loops=1)\n One-Time Filter: (NOT $0)\n InitPlan\n -> Subquery Scan kred (cost=0.00..778.06 rows=1 width=0) (actual\ntime=2045556.496..2045556.496 rows=0 loops=1)\n Filter: (kred.bank = 2)\n -> Limit (cost=0.00..778.05 rows=1 width=3873) (actual\ntime=2045556.492..2045556.492 rows=0 loops=1)\n -> Index Scan Backward using kredyty_datazaw on\nkredyty kr (cost=0.00..1088490.39 rows=1399 width=3873) (actual\ntime=2045556.487..2045556.487 rows=\n0 loops=1)\n Filter: (telekredytid = 328650)\n -> Index Scan using kredytyag_pkey on kredytyag (cost=0.00..8.30\nrows=1 width=4) (actual time=11.424..11.424 rows=0 loops=1)\n Index Cond: (id = 3064776)\n Total runtime: 2045568.420 ms\n(11 rows)\n\nLike you can see below - disks are very busy\n\n# sar -d -p\n21:36:01 DEV tps rd_sec/s wr_sec/s avgrq-sz \navgqu-sz await svctm %util\n21:38:01 sdd 219.58 3345.82 790.14 18.84 \n1.10 5.01 4.52 99.20\n\n# vmstat 1\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 0 1 3976 93696 58452 14737524 1 1 455 84 0 0 8 \n1 90 2\n 0 1 3976 106532 58384 14723812 0 0 1792 0 545 906 0 \n0 87 12\n 0 1 3976 105452 58488 14725536 0 0 1708 2297 596 549 0 \n0 87 12\n 0 1 3976 102924 58492 14727568 0 0 1996 0 554 566 0 \n0 87 12\n 0 1 3976 102268 58492 14729028 0 0 1744 0 528 540 0 \n0 87 12\n 0 1 3976 99828 58492 14730936 0 0 1624 0 507 492 0 \n0 87 12\n 1 0 3976 98972 58492 14732688 0 0 1720 0 518 507 0 \n0 87 12\n 0 1 3976 96756 58560 14734276 0 0 1636 2020 557 521 0 \n0 87 12\n\n\nSCHEMA: this is big table (too big ;-) too wide ~250 columns so I've\ntrimmed schema - (old database without refactor :-( )\nI hope this is enough?\n\n Table\n\"public.kredyty\"\n Column | Type \n| Modifiers \n---------------------------------------+-----------------------------+--------------------------------------------------------------\n id | integer |\nnot null default nextval(('kredyty_id_seq'::text)::regclass)\n linia | integer |\ndefault (-1)\n sklep | integer |\ndefault (-1)\n agent | integer |\ndefault (-1)\n przedst | integer |\ndefault (-1)\n oddzial | integer |\ndefault (-1)\n datazaw | date |\n datauruch | date |\n telekredytid | integer |\ndefault (-1)\nIndexes:\n \"kredyty_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"kredyty_kredytagid_id_idx\" UNIQUE, btree (kredytagid, id)\n \"kredyty_datazaw\" btree (datazaw)\n \"kredyty_telekredytid_idx\" btree (telekredytid)\n\n\n-- \nAndrzej Zawadzki\n", "msg_date": "Mon, 14 Sep 2009 23:08:31 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CLUSTER and a problem" }, { "msg_contents": "Andrzej Zawadzki <[email protected]> writes:\n> # EXPLAIN ANALYZE SElect telekredytid from kredytyag\n> WHERE TRUE\n> AND kredytyag.id = 3064776\n> AND NOT EXISTS\n> (SELECT 1 FROM\n> ( SELECT * FROM kredyty kr\n> where telekredytid = 328650\n> ORDER BY kr.datazaw 
DESC LIMIT 1 )\n> kred where kred.bank = 2);\n\nSo this is the slow bit:\n\n> -> Subquery Scan kred (cost=0.00..778.06 rows=1 width=0) (actual\n> time=2045556.496..2045556.496 rows=0 loops=1)\n> Filter: (kred.bank = 2)\n> -> Limit (cost=0.00..778.05 rows=1 width=3873) (actual\n> time=2045556.492..2045556.492 rows=0 loops=1)\n> -> Index Scan Backward using kredyty_datazaw on\n> kredyty kr (cost=0.00..1088490.39 rows=1399 width=3873) (actual\n> time=2045556.487..2045556.487 rows=0 loops=1)\n> Filter: (telekredytid = 328650)\n\nIt's doing a scan in descending datazaw order and hoping to find a row\nthat has both telekredytid = 328650 and bank = 2. Evidently there isn't\none, so the indexscan runs clear to the end before it can report that the\nNOT EXISTS is true. Unfortunately, you've more or less forced this\ninefficient query plan by wrapping some of the search conditions inside a\nLIMIT and some outside. Try phrasing the NOT EXISTS query differently.\nOr, if you do this type of query a lot, a special-purpose index might be\nworthwhile. It would probably be fast as-is if you had an index on\n(telekredytid, datazaw) (in that order).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Sep 2009 19:13:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLUSTER and a problem " }, { "msg_contents": "Tom Lane wrote:\n> Andrzej Zawadzki <[email protected]> writes:\n> \n>> # EXPLAIN ANALYZE SElect telekredytid from kredytyag\n>> WHERE TRUE\n>> AND kredytyag.id = 3064776\n>> AND NOT EXISTS\n>> (SELECT 1 FROM\n>> ( SELECT * FROM kredyty kr\n>> where telekredytid = 328650\n>> ORDER BY kr.datazaw DESC LIMIT 1 )\n>> kred where kred.bank = 2);\n>> \n>\n> So this is the slow bit:\n>\n> \n>> -> Subquery Scan kred (cost=0.00..778.06 rows=1 width=0) (actual\n>> time=2045556.496..2045556.496 rows=0 loops=1)\n>> Filter: (kred.bank = 2)\n>> -> Limit (cost=0.00..778.05 rows=1 width=3873) (actual\n>> time=2045556.492..2045556.492 rows=0 loops=1)\n>> -> Index Scan Backward using kredyty_datazaw on\n>> kredyty kr (cost=0.00..1088490.39 rows=1399 width=3873) (actual\n>> time=2045556.487..2045556.487 rows=0 loops=1)\n>> Filter: (telekredytid = 328650)\n>> \n>\n> It's doing a scan in descending datazaw order and hoping to find a row\n> that has both telekredytid = 328650 and bank = 2. Evidently there isn't\n> one, so the indexscan runs clear to the end before it can report that the\n> NOT EXISTS is true. Unfortunately, you've more or less forced this\n> inefficient query plan by wrapping some of the search conditions inside a\n> LIMIT and some outside. Try phrasing the NOT EXISTS query differently.\n> Or, if you do this type of query a lot, a special-purpose index might be\n> worthwhile. 
It would probably be fast as-is if you had an index on\n> (telekredytid, datazaw) (in that order).\n> \nThat's no problem - we already has changed this query:\nSELECT * FROM kredyty kr\n where kr.telekredytid = 328652\n and kr.bank = 2\n AND NOT EXISTS (SELECT * from kredyty k2 WHERE k2.bank<>2\nand k2.creationdate > kr.creationdate)\nWorks good.\n\nBut in fact this wasn't my point.\nMy point was: why operation CLUSTER has such a big and bad impact on\nplaner for this query?\nLike I sad: before CLUSTER query was run in xx milliseconds :-)\n\n-- \nAndrzej Zawadzki\n", "msg_date": "Tue, 15 Sep 2009 09:36:38 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CLUSTER and a problem" }, { "msg_contents": "Andrzej Zawadzki wrote:\n> Tom Lane wrote:\n> \n>> Andrzej Zawadzki <[email protected]> writes:\n>> \n>> \n>>> # EXPLAIN ANALYZE SElect telekredytid from kredytyag\n>>> WHERE TRUE\n>>> AND kredytyag.id = 3064776\n>>> AND NOT EXISTS\n>>> (SELECT 1 FROM\n>>> ( SELECT * FROM kredyty kr\n>>> where telekredytid = 328650\n>>> ORDER BY kr.datazaw DESC LIMIT 1 )\n>>> kred where kred.bank = 2);\n>>> \n>>> \n>> So this is the slow bit:\n>>\n>> \n>> \n>>> -> Subquery Scan kred (cost=0.00..778.06 rows=1 width=0) (actual\n>>> time=2045556.496..2045556.496 rows=0 loops=1)\n>>> Filter: (kred.bank = 2)\n>>> -> Limit (cost=0.00..778.05 rows=1 width=3873) (actual\n>>> time=2045556.492..2045556.492 rows=0 loops=1)\n>>> -> Index Scan Backward using kredyty_datazaw on\n>>> kredyty kr (cost=0.00..1088490.39 rows=1399 width=3873) (actual\n>>> time=2045556.487..2045556.487 rows=0 loops=1)\n>>> Filter: (telekredytid = 328650)\n>>> \n>>> \n>> It's doing a scan in descending datazaw order and hoping to find a row\n>> that has both telekredytid = 328650 and bank = 2. Evidently there isn't\n>> one, so the indexscan runs clear to the end before it can report that the\n>> NOT EXISTS is true. Unfortunately, you've more or less forced this\n>> inefficient query plan by wrapping some of the search conditions inside a\n>> LIMIT and some outside. Try phrasing the NOT EXISTS query differently.\n>> Or, if you do this type of query a lot, a special-purpose index might be\n>> worthwhile. 
It would probably be fast as-is if you had an index on\n>> (telekredytid, datazaw) (in that order).\n>> \n>> \n> That's no problem - we already has changed this query:\n> SELECT * FROM kredyty kr\n> where kr.telekredytid = 328652\n> and kr.bank = 2\n> AND NOT EXISTS (SELECT * from kredyty k2 WHERE k2.bank<>2\n> and k2.creationdate > kr.creationdate)\n> Works good.\n>\n> But in fact this wasn't my point.\n> My point was: why operation CLUSTER has such a big and bad impact on\n> planer for this query?\n> Like I sad: before CLUSTER query was run in xx milliseconds :-)\n>\n> \nBefore CLUSTER was:\n\n# EXPLAIN ANALYZE SELECT telekredytid FROM kredytyag\nWHERE TRUE\nAND kredytyag.id = 3064776\nAND NOT EXISTS\n (\n SELECT 1 FROM\n (\n SELECT * FROM kredyty kr\n where telekredytid = 328652\n ORDER BY kr.datazaw DESC LIMIT 1\n )\n kred where kred.bank = 2)\n;\n \nQUERY\nPLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=1317.25..1325.55 rows=1 width=4) (actual\ntime=0.235..0.235 rows=0 loops=1)\n One-Time Filter: (NOT $0)\n InitPlan\n -> Subquery Scan kred (cost=1317.24..1317.25 rows=1 width=0)\n(actual time=0.188..0.188 rows=0 loops=1)\n Filter: (kred.bank = 2)\n -> Limit (cost=1317.24..1317.24 rows=1 width=4006) (actual\ntime=0.172..0.172 rows=0 loops=1)\n -> Sort (cost=1317.24..1320.27 rows=1212 width=4006)\n(actual time=0.069..0.069 rows=0 loops=1)\n Sort Key: kr.datazaw\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using kredyty_telekredytid_idx on\nkredyty kr (cost=0.00..1311.18 rows=1212 width=4006) (actual\ntime=0.029..0.029 rows=0 loops=1)\n Index Cond: (telekredytid = 328652)\n -> Index Scan using kredytyag_pkey on kredytyag (cost=0.00..8.29\nrows=1 width=4) (actual time=0.018..0.018 rows=0 loops=1)\n Index Cond: (id = 3064776)\n Total runtime: 1.026 ms\n(14 rows)\n\nand that's clear for me.\nProbably bad index for CLUSTER - Investigating ;-)\n\n-- \nAndrzej Zawadzki\n", "msg_date": "Tue, 15 Sep 2009 13:13:43 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CLUSTER and a problem" }, { "msg_contents": "Andrzej Zawadzki wrote:\n> Tom Lane wrote:\n> \n>> Andrzej Zawadzki <[email protected]> writes:\n>> \n>> \n>>> # EXPLAIN ANALYZE SElect telekredytid from kredytyag\n>>> WHERE TRUE\n>>> AND kredytyag.id = 3064776\n>>> AND NOT EXISTS\n>>> (SELECT 1 FROM\n>>> ( SELECT * FROM kredyty kr\n>>> where telekredytid = 328650\n>>> ORDER BY kr.datazaw DESC LIMIT 1 )\n>>> kred where kred.bank = 2);\n>>> \n>>> \n>> So this is the slow bit:\n>>\n>> \n>> \n>>> -> Subquery Scan kred (cost=0.00..778.06 rows=1 width=0) (actual\n>>> time=2045556.496..2045556.496 rows=0 loops=1)\n>>> Filter: (kred.bank = 2)\n>>> -> Limit (cost=0.00..778.05 rows=1 width=3873) (actual\n>>> time=2045556.492..2045556.492 rows=0 loops=1)\n>>> -> Index Scan Backward using kredyty_datazaw on\n>>> kredyty kr (cost=0.00..1088490.39 rows=1399 width=3873) (actual\n>>> time=2045556.487..2045556.487 rows=0 loops=1)\n>>> Filter: (telekredytid = 328650)\n>>> \n>>> \n>> It's doing a scan in descending datazaw order and hoping to find a row\n>> that has both telekredytid = 328650 and bank = 2. Evidently there isn't\n>> one, so the indexscan runs clear to the end before it can report that the\n>> NOT EXISTS is true. 
Unfortunately, you've more or less forced this\n>> inefficient query plan by wrapping some of the search conditions inside a\n>> LIMIT and some outside. Try phrasing the NOT EXISTS query differently.\n>> Or, if you do this type of query a lot, a special-purpose index might be\n>> worthwhile. It would probably be fast as-is if you had an index on\n>> (telekredytid, datazaw) (in that order).\n>> \n>> \n> That's no problem - we already has changed this query:\n> SELECT * FROM kredyty kr\n> where kr.telekredytid = 328652\n> and kr.bank = 2\n> AND NOT EXISTS (SELECT * from kredyty k2 WHERE k2.bank<>2\n> and k2.creationdate > kr.creationdate)\n> Works good.\n>\n> But in fact this wasn't my point.\n> My point was: why operation CLUSTER has such a big and bad impact on\n> planer for this query?\n> Like I sad: before CLUSTER query was run in xx milliseconds :-)\n> \nOK I've got it :-)\nI've prepared test database (on fast disks - CLUSTER took 2h anyway ;-)\n\nStep 1:\nqstest=# CREATE UNIQUE INDEX kredyty_desc_pkey ON kredyty using btree\n(id desc);\nCREATE\nINDEX \nStep 2:\nqstest=# CLUSTER kredyty USING kredyty_desc_pkey;\nCLUSTER \nStep 3:\nqstest=# ANALYZE kredyty;\nANALYZE \nStep 4:\nqstest=# EXPLAIN ANALYZE SELECT telekredytid FROM kredytyag\nWHERE TRUE \nAND kredytyag.id = 3064776 \nAND NOT EXISTS \n ( \n SELECT 1 FROM \n ( \n SELECT * FROM kredyty kr \n where telekredytid = 328652 \n ORDER BY kr.datazaw DESC LIMIT 1 \n ) \n kred where kred.bank = 2) \n; \n \nQUERY\nPLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\n Result (cost=833.09..841.38 rows=1 width=4) (actual\ntime=70.050..70.050 rows=0\nloops=1) \n\n One-Time Filter: (NOT\n$0) \n\n \nInitPlan \n\n -> Subquery Scan kred (cost=833.07..833.09 rows=1 width=0)\n(actual time=48.223..48.223 rows=0\nloops=1) \n Filter: (kred.bank =\n2) \n\n -> Limit (cost=833.07..833.08 rows=1 width=3975) (actual\ntime=48.206..48.206 rows=0\nloops=1) \n -> Sort (cost=833.07..835.66 rows=1035 width=3975)\n(actual time=48.190..48.190 rows=0\nloops=1) \n Sort Key: kr.datazaw\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using kredyty_telekredytid_idx on\nkredyty kr (cost=0.00..827.90 rows=1035 width=3975) (actual\ntime=48.163..48.163 rows=0 loops=1)\n Index Cond: (telekredytid = 328652)\n -> Index Scan using kredytyag_pkey on kredytyag (cost=0.00..8.29\nrows=1 width=4) (actual time=21.798..21.798 rows=0 loops=1)\n Index Cond: (id = 3064776)\n Total runtime: 70.550 ms\n(14 rows)\n\nqstest=#\n\nSo, I was close - bad index... DESCending is much better.\nThanks to Grzegorz Ja\\skiewicz hi has strengthened me in the conjecture.\n\nI'm posting this - maybe someone will find something useful in that case.\n\nps. query was and is good :-)\n\n-- \nAndrzej Zawadzki\n", "msg_date": "Tue, 15 Sep 2009 22:10:49 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CLUSTER and a problem" }, { "msg_contents": "On Tue, Sep 15, 2009 at 9:10 PM, Andrzej Zawadzki <[email protected]> wrote:\n\n> So, I was close - bad index... DESCending is much better.\n> Thanks to Grzegorz Ja\\skiewicz  hi has strengthened me in the conjecture.\n>\n> I'm posting this - maybe someone will find something useful in that case.\n>\n> ps. query was and is good :-)\n\n\nSure, This was talked about a lot on -hackers. 
The cost of 'back-walk'\nindex fetch is a lot.\nSo for anyone who thought it isn't back then, well - here's real life proof.\n\n\n\n-- \nGJ\n", "msg_date": "Wed, 16 Sep 2009 10:34:42 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLUSTER and a problem" } ]
[ { "msg_contents": "Users,\n\nPlease read the following two documents before posting your performance\nquery here:\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nThis will help other users to troubleshoot your problems far more rapidly.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Mon, 14 Sep 2009 13:55:47 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "How to post Performance Questions" }, { "msg_contents": "On Sep 14, 2009, at 16:55 , Josh Berkus wrote:\n\n> Users,\n>\n> Please read the following two documents before posting your \n> performance\n> query here:\n>\n> http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> This will help other users to troubleshoot your problems far more \n> rapidly.\n\nCan something similar be added to the footer of (at least) the \nperformance list?\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Mon, 14 Sep 2009 19:19:03 -0400", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to post Performance Questions" }, { "msg_contents": "Michael Glaesemann <[email protected]> wrote:\n> On Sep 14, 2009, at 16:55 , Josh Berkus wrote:\n \n>> Please read the following two documents before posting your \n>> performance query here:\n>>\n>> http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n>> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>>\n>> This will help other users to troubleshoot your problems far\n>> more rapidly.\n> \n> Can something similar be added to the footer of (at least) the \n> performance list?\n \nPerhaps on this page?:\n \nhttp://www.postgresql.org/community/lists/\n \n-Kevin\n", "msg_date": "Tue, 15 Sep 2009 09:50:10 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to post Performance Questions" }, { "msg_contents": "Kevin Grittner wrote:\n> Michael Glaesemann <[email protected]> wrote:\n> > On Sep 14, 2009, at 16:55 , Josh Berkus wrote:\n> \n> >> Please read the following two documents before posting your \n> >> performance query here:\n> >>\n> >> http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n> >> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n> >>\n> >> This will help other users to troubleshoot your problems far\n> >> more rapidly.\n> > \n> > Can something similar be added to the footer of (at least) the \n> > performance list?\n> \n> Perhaps on this page?:\n> \n> http://www.postgresql.org/community/lists/\n\nDone this part. (It'll take some time to propagate.)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 21 Sep 2009 14:22:38 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to post Performance Questions" } ]
[ { "msg_contents": "Problem occurs when running (in production) Postgres 8.3.7 64-bit (from RPM) \non Ubuntu 8.04.2, on an Amazon EC2 (xen) \"Large\" instance (8GB RAM), with \nthe DB on a 50GB EC2 block device.\nProblem does not occur when running (in staging/pre-production) Postgres \n8.3.5 32-bit (from RPM) on Ubuntu 8.04.1, on a less beefy Amazon EC2 (xen) \n\"Small\" instance, with the DB on a 5GB EC2 block device.\n\nI am running with slow query logging on, and somewhat inexplicably I've been \ngetting the following slow UPDATE query several times in the past weeks (I'm \nalso including some context lines above and below):\n\n 2009-09-14 08:07:06.238 UTC user@database pid=26524 ip=127.0.0.1(58380) \nsid=4aadf5ba.679c:1 LOG: duration: 103.905 ms\n statement: COMMIT\n 2009-09-14 08:10:19.025 UTC user@database pid=26524 ip=127.0.0.1(58380) \nsid=4aadf5ba.679c:2 LOG: duration: 124.341 ms\n statement: COMMIT\n 2009-09-14 08:10:47.359 UTC user@database pid=26524 ip=127.0.0.1(58380) \nsid=4aadf5ba.679c:3 LOG: duration: 126.896 ms\n statement: COMMIT\n>> 2009-09-14 08:12:30.363 UTC user@database pid=26474 ip=127.0.0.1(58364) \n>> sid=4aadf58d.676a:1 LOG: duration: 13472.892 ms\n>> statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \n>> \"updated_at\" = '2009-09-14 08:12:16.890054' WHERE \"id\" = 288\n 2009-09-14 08:13:41.237 UTC user@database pid=26474 ip=127.0.0.1(58364) \nsid=4aadf58d.676a:2 LOG: duration: 107.674 ms\n statement: SELECT * FROM \"tenders\"\n\nThis is one of the \"faster\" occurrences; at times the query has been logged \nas having taken 100+ seconds:\n\n2009-07-21 06:05:23.035 UTC user@database pid=24834 ip=127.0.0.1(34505) \nsid=4a6559e1.6102:1 LOG: duration: 156605.004 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \"updated_at\" \n= '2009-07-21 06:02:46.430176' WHERE \"id\" = 318\n...\n2009-07-21 06:16:32.148 UTC user@database pid=23500 ip=127.0.0.1(38720) \nsid=4a6552dd.5bcc:2 LOG: duration: 14833.439 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \"updated_at\" \n= '2009-07-21 06:16:17.314905' WHERE \"id\" = 188\n...\n2009-08-11 07:31:35.867 UTC user@database pid=1227 ip=127.0.0.1(55630) \nsid=4a811ded.4cb:1 LOG: duration: 29258.137 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \"updated_at\" \n= '2009-08-11 07:31:06.609191' WHERE \"id\" = 251\n...\n2009-08-13 11:16:40.027 UTC user@database pid=13442 ip=127.0.0.1(41127) \nsid=4a83f557.3482:1 LOG: duration: 10287.765 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \"updated_at\" \n= '2009-08-13 11:16:29.738634' WHERE \"id\" = 273\n...\n2009-08-16 05:30:09.082 UTC user@database pid=3505 ip=127.0.0.1(36644) \nsid=4a8798aa.db1:1 LOG: duration: 153523.612 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'approved', \"updated_at\" \n= '2009-08-16 05:27:35.558505' WHERE \"id\" = 369\n...\n2009-08-16 05:30:09.673 UTC user@database pid=3518 ip=127.0.0.1(36655) \nsid=4a8798c8.dbe:1 LOG: duration: 114885.274 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'approved', \"updated_at\" \n= '2009-08-16 05:28:14.787689' WHERE \"id\" = 369\n...\n2009-08-16 05:30:10.318 UTC user@database pid=3580 ip=127.0.0.1(36707) \nsid=4a879919.dfc:1 LOG: duration: 73107.179 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'approved', \"updated_at\" \n= '2009-08-16 05:28:57.210502' WHERE \"id\" = 369\n...\n2009-08-20 06:27:54.961 UTC user@database pid=8312 ip=127.0.0.1(38488) \nsid=4a8cec7b.2078:1 LOG: 
duration: 18959.648 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \"updated_at\" \n= '2009-08-20 06:27:36.001030' WHERE \"id\" = 255\n...\n2009-09-10 06:30:08.176 UTC user@database pid=25992 ip=127.0.0.1(59692) \nsid=4aa89ac1.6588:1 LOG: duration: 27495.609 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \"updated_at\" \n= '2009-09-10 06:29:40.680647' WHERE \"id\" = 346\n...\n2009-09-14 08:12:30.363 UTC user@database pid=26474 ip=127.0.0.1(58364) \nsid=4aadf58d.676a:1 LOG: duration: 13472.892 ms\n statement: UPDATE \"document_sets\" SET \"status\" = E'rejected', \"updated_at\" \n= '2009-09-14 08:12:16.890054' WHERE \"id\" = 288\n\nNo other UPDATE or INSERT or DELETE operation has ever triggered the slow \nquery logging. No other query of any kind has ever taken more than 500ms.\n\nWhen analyzing this UPDATE query (verbose output at end of message), as \nexpected:\n\n> Index Scan using document_sets_pkey on document_sets (cost=0.00..8.27 \n> rows=1 width=47) (actual time=0.025..0.028 rows=1 loops=1)\n> Index Cond: (id = 288)\n> Total runtime: 0.137 ms\n\nActual table rowcount: 423 rows. Table statistics:\n Sequential Scans 12174674\n Sequential Tuples Read 2764651442\n Index Scans 813\n Index Tuples Fetched 813\n Tuples Inserted 424\n Tuples Updated 625\n Tuples Deleted 1\n Heap Blocks Read 9\n Heap Blocks Hit 35888949\n Index Blocks Read 11\n Index Blocks Hit 1858\n Toast Blocks Read 0\n Toast Blocks Hit 0\n Toast Index Blocks Read 0\n Toast Index Blocks Hit 0\n Table Size 40 kB\n Toast Table Size 8192 bytes\n Indexes Size 32 kB\n\nOnly entries changed in postgresql.conf default (mainly to reflect running \non an 8GB machine):\n> shared_buffers = 256MB # min 128kB or \n> max_connections*16kB\n> work_mem = 4MB # min 64kB\n> checkpoint_segments = 8 # in logfile segments, min \n> 1, 16MB each\n> effective_cache_size = 1280MB\n> log_min_duration_statement = 100 # -1 is disabled, 0 logs all \n> statements\n> log_line_prefix = '%m %u@%d pid=%p ip=%r sid=%c:%l '\n\nsar data for period in question (CPU, processes, paging, disk, etc.) 
shows \nno obvious change in CPU usage, number of forks, load, or disk activity on \nthe database device (sdh).\n\n# atopsar -b 7:00 -e 10:00 | grep all\nip-XXXXXX 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64 \n2009/09/14\n-------------------------- analysis date: \n2009/09/14 --------------------------\n07:04:15 cpu %usr %nice %sys %irq %softirq %steal %wait %idle \n_cpu_\n07:14:15 all 1 0 0 0 0 0 0 98\n07:24:15 all 2 0 0 0 0 0 0 97\n07:34:15 all 3 0 0 0 0 0 0 97\n07:44:15 all 1 0 0 0 0 0 0 99\n07:54:15 all 2 0 0 0 0 0 0 98\n08:04:15 all 2 0 0 0 0 0 0 98\n08:14:15 all 2 0 0 0 0 0 0 97 \n<--- slow query occurred at 8:12\n08:24:15 all 6 0 1 0 0 0 0 94\n08:34:16 all 8 0 0 0 0 0 0 92\n08:44:16 all 2 0 0 0 0 0 0 98\n08:54:16 all 1 0 0 0 0 0 0 99\n09:04:16 all 3 0 0 0 0 0 0 97\n09:14:16 all 6 0 1 0 0 0 0 94\n09:24:16 all 1 0 1 0 0 0 0 98\n09:34:16 all 1 0 0 0 0 0 0 98\n09:44:16 all 2 0 0 0 0 0 0 98\n09:54:17 all 2 0 0 0 0 0 0 98\n\n# atopsar -b 7:00 -e 10:00 -s\nip-XXXXXX 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64 \n2009/09/14\n-------------------------- analysis date: \n2009/09/14 --------------------------\n07:04:15 pagescan/s swapin/s swapout/s commitspc commitlim \n_swap_\n07:14:15 0.00 0.00 0.00 1647M 3587M\n07:24:15 0.00 0.00 0.00 1447M 3587M\n07:34:15 0.00 0.00 0.00 2038M 3587M\n07:44:15 0.00 0.00 0.00 1444M 3587M\n07:54:15 0.00 0.00 0.00 2082M 3587M\n08:04:15 0.00 0.00 0.00 1658M 3587M\n08:14:15 0.00 0.00 0.00 1990M 3587M \n<--- slow query occurred at 8:12\n08:24:15 0.00 0.00 0.00 1709M 3587M\n08:34:16 0.00 0.00 0.00 1507M 3587M\n08:44:16 0.00 0.00 0.00 1894M 3587M\n08:54:16 0.00 0.00 0.00 1640M 3587M\n09:04:16 0.00 0.00 0.00 1775M 3587M\n09:14:16 0.00 0.00 0.00 2209M 3587M\n09:24:16 0.00 0.00 0.00 2035M 3587M\n09:34:16 0.00 0.00 0.00 1887M 3587M\n09:44:16 0.00 0.00 0.00 1922M 3587M\n09:54:17 0.00 0.00 0.00 2138M 3587M\n\n# atopsar -b 7:00 -e 10:00 -d\nip-XXXXXXX 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64 \n2009/09/14\n-------------------------- analysis date: \n2009/09/14 --------------------------\n07:04:15 disk busy read/s KB/read write/s KB/writ avque avserv \n_disk_\n07:24:15 sdh 0% 0.00 0.0 3.10 14.2 3.41 1.17 ms\n07:34:15 sdh 0% 0.00 0.0 3.43 14.0 3.85 1.01 ms\n07:44:15 sdh 0% 0.00 0.0 2.76 13.8 4.51 1.14 ms\n07:54:15 sdh 0% 0.00 0.0 3.11 13.7 2.87 1.11 ms\n08:04:15 sdh 0% 0.00 0.0 3.33 13.4 3.85 1.13 ms\n08:14:15 sdh 0% 0.00 0.0 3.71 13.3 3.37 1.25 ms \n<--- slow query occurred at 8:12\n08:24:15 sdh 0% 0.00 0.0 2.95 13.4 5.04 1.13 ms\n08:34:16 sdh 0% 0.00 0.0 3.27 13.4 4.17 1.05 ms\n08:44:16 sdh 0% 0.00 0.0 2.73 13.7 3.31 1.17 ms\n08:54:16 sdh 0% 0.00 0.0 1.57 13.9 4.02 1.14 ms\n09:04:16 sdh 0% 0.00 0.0 2.05 14.2 4.75 0.96 ms\n09:14:16 sdh 0% 0.00 0.0 3.87 14.0 4.19 1.10 ms\n09:24:16 sdh 0% 0.00 0.0 3.26 13.9 4.17 1.11 ms\n09:34:16 sdh 0% 0.00 0.0 2.17 14.1 3.67 1.18 ms\n09:44:16 sdh 0% 0.00 0.0 2.72 15.0 3.63 0.99 ms\n09:54:17 sdh 0% 0.00 0.0 3.15 15.0 4.40 1.21 ms\n\n# atopsar -b 7:00 -e 10:00 -p\nip-XXXXXXX 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64 \n2009/09/14\n-------------------------- analysis date: \n2009/09/14 --------------------------\n07:04:15 pswch/s devintr/s forks/s loadavg1 loadavg5 loadavg15 \n_load_\n07:14:15 174 163 0.90 0.21 0.20 0.18\n07:24:15 207 228 0.90 0.24 0.28 0.21\n07:34:15 250 233 0.88 0.18 0.26 0.21\n07:44:15 176 172 0.97 0.04 0.13 0.16\n07:54:15 215 203 0.88 0.26 0.24 0.18\n08:04:15 177 162 0.96 0.23 0.22 0.18\n08:14:15 212 259 0.90 0.33 0.29 0.21 <--- \nslow query occurred at 
8:12\n08:24:15 220 266 1.08 1.29 0.75 0.40\n08:34:16 207 290 0.84 0.25 0.51 0.49\n08:44:16 178 175 0.95 0.09 0.21 0.33\n08:54:16 159 156 0.82 0.12 0.12 0.20\n09:04:16 185 324 0.78 0.44 0.33 0.24\n09:14:16 279 505 0.92 0.49 0.56 0.38\n09:24:16 231 222 0.87 0.18 0.33 0.34\n09:34:16 164 166 0.74 0.21 0.15 0.21\n09:44:16 210 191 0.78 0.78 0.34 0.23\n09:54:17 240 224 1.03 0.32 0.27 0.23\n\n\n\n\n\n\nPossible causes?\n1) I/O bottleneck? Bad block on device? Yet \"sar -d\" shows no change in I/O \nservice time\n2) lock contention with (autovacuum?)? why would this not affect other \nstatements on other tables?\n3) clock change? Yet why only during this particular UPDATE query?\n4) ???\n\nThank you,\nV.\n\n\n\n\n- - - -\n\n\nExplain verbose:\n\n> {INDEXSCAN\n> :startup_cost 0.00\n> :total_cost 8.27\n> :plan_rows 1\n> :plan_width 47\n> :targetlist (\n> {TARGETENTRY\n> :expr\n> {VAR\n> :varno 1\n> :varattno 1\n> :vartype 23\n> :vartypmod -1\n> :varlevelsup 0\n> :varnoold 1\n> :varoattno 1\n> }\n> :resno 1\n> :resname id\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {VAR\n> :varno 1\n> :varattno 2\n> :vartype 23\n> :vartypmod -1\n> :varlevelsup 0\n> :varnoold 1\n> :varoattno 2\n> }\n> :resno 2\n> :resname tender_id\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {VAR\n> :varno 1\n> :varattno 3\n> :vartype 25\n> :vartypmod -1\n> :varlevelsup 0\n> :varnoold 1\n> :varoattno 3\n> }\n> :resno 3\n> :resname note\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {CONST\n> :consttype 1043\n> :consttypmod 259\n> :constlen -1\n> :constbyval false\n> :constisnull false\n> :constvalue 12 [ 48 0 0 0 114 101 106 101 99 116 101 100 ]\n> }\n> :resno 4\n> :resname status\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {VAR\n> :varno 1\n> :varattno 5\n> :vartype 1114\n> :vartypmod -1\n> :varlevelsup 0\n> :varnoold 1\n> :varoattno 5\n> }\n> :resno 5\n> :resname created_at\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {CONST\n> :consttype 1114\n> :consttypmod -1\n> :constlen 8\n> :constbyval false\n> :constisnull false\n> :constvalue 8 [ -58 44 34 -2 -125 22 1 0 ]\n> }\n> :resno 6\n> :resname updated_at\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {VAR\n> :varno 1\n> :varattno -1\n> :vartype 27\n> :vartypmod -1\n> :varlevelsup 0\n> :varnoold 1\n> :varoattno -1\n> }\n> :resno 7\n> :resname ctid\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk true\n> }\n> )\n> :qual <>\n> :lefttree <>\n> :righttree <>\n> :initPlan <>\n> :extParam (b)\n> :allParam (b)\n> :scanrelid 1\n> :indexid 18771\n> :indexqual (\n> {OPEXPR\n> :opno 96\n> :opfuncid 65\n> :opresulttype 16\n> :opretset false\n> :args (\n> {VAR\n> :varno 1\n> :varattno 1\n> :vartype 23\n> :vartypmod -1\n> :varlevelsup 0\n> :varnoold 1\n> :varoattno 1\n> }\n> {CONST\n> :consttype 23\n> :consttypmod -1\n> :constlen 4\n> :constbyval true\n> :constisnull false\n> :constvalue 4 [ 32 1 0 0 0 0 0 0 ]\n> }\n> )\n> }\n> )\n> :indexqualorig (\n> {OPEXPR\n> :opno 96\n> :opfuncid 65\n> :opresulttype 16\n> :opretset false\n> :args (\n> {VAR\n> :varno 1\n> :varattno 1\n> :vartype 23\n> :vartypmod -1\n> :varlevelsup 0\n> :varnoold 1\n> :varoattno 1\n> }\n> {CONST\n> :consttype 23\n> :consttypmod -1\n> :constlen 
4\n> :constbyval true\n> :constisnull false\n> :constvalue 4 [ 32 1 0 0 0 0 0 0 ]\n> }\n> )\n> }\n> )\n> :indexstrategy (i 3)\n> :indexsubtype (o 23)\n> :indexorderdir 1\n> }\n\n\n", "msg_date": "Mon, 14 Sep 2009 17:25:33 -0400", "msg_from": "\"Vlad Romascanu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Possible causes of sometimes slow single-row UPDATE with trivial\n\tindexed condition?" }, { "msg_contents": "Vlad Romascanu wrote:\n> Problem occurs when running (in production) Postgres 8.3.7 64-bit (from\n> RPM) on Ubuntu 8.04.2, on an Amazon EC2 (xen) \"Large\" instance (8GB\n> RAM), with the DB on a 50GB EC2 block device.\n\nHmm - don't know what the characteristics of running PG on EC2 are. This\nmight be something peculiar to that.\n\n> Problem does not occur when running (in staging/pre-production) Postgres\n> 8.3.5 32-bit (from RPM) on Ubuntu 8.04.1, on a less beefy Amazon EC2\n> (xen) \"Small\" instance, with the DB on a 5GB EC2 block device.\n> \n> I am running with slow query logging on, and somewhat inexplicably I've\n> been getting the following slow UPDATE query several times in the past\n> weeks (I'm also including some context lines above and below):\n\n>>> 2009-09-14 08:12:30.363 UTC user@database pid=26474\n>>> ip=127.0.0.1(58364) sid=4aadf58d.676a:1 LOG: duration: 13472.892 ms\n>>> statement: UPDATE \"document_sets\" SET \"status\" = E'rejected',\n\n> This is one of the \"faster\" occurrences; at times the query has been\n> logged as having taken 100+ seconds:\n\nThat's *very* slow indeed, and clearly the query itself is simple enough.\n\nTypically in a situation like this you might suspect checkpointing was\nthe problem. Lots of dirty disk pages being flushed to disk before a\ncheckpoint. The stats for disk activity you give don't support that\nidea, although 10 minute intervals is quite far apart.\n\nYour table-stats show this is a small table. If it's updated a lot then\nit might be that your autovacuum settings aren't high enough for this\ntable. The log_autovacuum_min_duration setting might be worth enabling\ntoo - to see if autovacuum is taking a long time over anything.\n\nAnother thing that can cause irregular slowdowns is if you have a\ntrigger with some custom code that takes an unexpectedly long time to\nrun (takes locks, runs a query that plans badly occasionally). I don't\nknow if that's the case here.\n\nOh, if you don't have indexes on \"status\" or \"updated_at\" then you might\nwant to read up on HOT and decrease your fill-factor on the table too.\nThat's unrelated to this though.\n\n\nIt looks like the problem is common enough that you could have a small\nscript check pg_stat_activity once every 10 seconds and dump a snapshot\nof pg_locks, vmstat etc. If you can catch the problem happening that\nshould make it easy to diagnose.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 17 Sep 2009 08:58:35 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible causes of sometimes slow single-row UPDATE\n\twith trivial indexed condition?" 
}, { "msg_contents": "Vlad Romascanu wrote:\n> Problem occurs when running (in production) Postgres 8.3.7 64-bit (from \n> RPM) on Ubuntu 8.04.2, on an Amazon EC2 (xen) \"Large\" instance (8GB \n> RAM), with the DB on a 50GB EC2 block device.\n> Problem does not occur when running (in staging/pre-production) Postgres \n> 8.3.5 32-bit (from RPM) on Ubuntu 8.04.1, on a less beefy Amazon EC2 \n> (xen) \"Small\" instance, with the DB on a 5GB EC2 block device.\n> \n> I am running with slow query logging on, and somewhat inexplicably I've \n> been getting the following slow UPDATE query several times in the past \n> weeks (I'm also including some context lines above and below):\n> \n\nI'm not sure how Amazon vm's work, but are there multiple vm's on one \nbox? Just because your vm has zero cpu/disk does not mean the host \nisn't pegged out of its mind.\n\nDoes Amazon give any sort of host stats?\n\n-Andy\n", "msg_date": "Thu, 17 Sep 2009 08:45:43 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible causes of sometimes slow single-row UPDATE\n\twith trivial indexed condition?" }, { "msg_contents": "Hi, Andy,\n\nImpact of other VMs is reflected as %steal time. And it's only this \nspecific UPDATE query on this specific table that ever exhibits the problem, \nas measured over many days, a query that must represent something like <2% \nof all queries run against the DB over the same period and <5% of all other \nUPDATE queries running on the system across other tables in the same \ntablespace over, again, the same period.\n\nV.\n\n----- Original Message ----- \nFrom: \"Andy Colson\" <[email protected]>\nTo: \"Vlad Romascanu\" <[email protected]>; \n<[email protected]>\nSent: Thursday, September 17, 2009 9:45 AM\nSubject: Re: [PERFORM] Possible causes of sometimes slow single-row UPDATE \nwith trivial indexed condition?\n\n\n> Vlad Romascanu wrote:\n>> Problem occurs when running (in production) Postgres 8.3.7 64-bit (from \n>> RPM) on Ubuntu 8.04.2, on an Amazon EC2 (xen) \"Large\" instance (8GB RAM), \n>> with the DB on a 50GB EC2 block device.\n>> Problem does not occur when running (in staging/pre-production) Postgres \n>> 8.3.5 32-bit (from RPM) on Ubuntu 8.04.1, on a less beefy Amazon EC2 \n>> (xen) \"Small\" instance, with the DB on a 5GB EC2 block device.\n>>\n>> I am running with slow query logging on, and somewhat inexplicably I've \n>> been getting the following slow UPDATE query several times in the past \n>> weeks (I'm also including some context lines above and below):\n>>\n>\n> I'm not sure how Amazon vm's work, but are there multiple vm's on one box? \n> Just because your vm has zero cpu/disk does not mean the host isn't pegged \n> out of its mind.\n>\n> Does Amazon give any sort of host stats?\n>\n> -Andy\n> \n\n", "msg_date": "Thu, 17 Sep 2009 12:42:03 -0400", "msg_from": "\"Vlad Romascanu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible causes of sometimes slow single-row UPDATE with trivial\n\tindexed condition?" } ]
[ { "msg_contents": "In this linux mag article (http://www.linux-mag.com/cache/7516/1.html)\nthe author describes a performance problem\nbrought on by using the noapic boot time kernel option. Has anyone\ninvestigated whether postgres performs better\nwith/without the noapic option?\n", "msg_date": "Mon, 14 Sep 2009 14:48:17 -0700 (PDT)", "msg_from": "C Storm <[email protected]>", "msg_from_op": true, "msg_subject": "noapic option" }, { "msg_contents": "On Mon, 2009-09-14 at 14:48 -0700, C Storm wrote:\n> In this linux mag article (http://www.linux-mag.com/cache/7516/1.html)\n> the author describes a performance problem\n> brought on by using the noapic boot time kernel option. Has anyone\n> investigated whether postgres performs better\n> with/without the noapic option?\n\nIt probably depends a lot on how your devices are arranged on the PCI\nand PCIe bus(es) and how the kernel/bios assigns interrupt lines. If\nbusy/active devices share interrupts with other devices, especially if\nthose devices take significant work to poll when an interrupt is\nreceived, it could have a nasty effect on performance. On the other\nhand, if your high-load devices like NICs and disk controller(s) don't\nland up sharing interrupts, AFAIK it may not make much difference. I\ndon't know how much difference the local APIC(s) and IO-APIC make as\ncompared to the 8259 PIC when shared interrupts aren't an issue.\n\nThen again, I'm surprised any modern machine can run without an IO-APIC.\nIsn't it required for SMP or multi-core operation?\n\n\n\nThis article might help provide some information. While it's about\nWindows and is on MSDN, the principles it describes about how the local\nAPIC(s) and IO-APIC(s) help should apply equally well to Linux and other\nsystems.\n\nhttp://www.microsoft.com/whdc/archive/io-apic.mspx\n\nI don't think the issues with synchronization primitives really apply on\n*nix systems, but the issues with interrupt latency certainly do.\n\n\nAs you can see from the article, having a working system of local and\nI/O APICs should dramatically reduce wasted bus I/O resources and CPU\ntime required to service interrupts especially on highly shared\ninterrupt lines. Consider one of the servers here:\n\n$ cat /proc/interrupts \n CPU0 CPU1 \n 0: 90 0 IO-APIC-edge timer\n 1: 16 0 IO-APIC-edge i8042\n 4: 1368 0 IO-APIC-edge serial\n 6: 3 0 IO-APIC-edge floppy\n 8: 0 0 IO-APIC-edge rtc0\n 9: 0 0 IO-APIC-fasteoi acpi\n 14: 0 0 IO-APIC-edge ide0\n 15: 0 0 IO-APIC-edge ide1\n 28: 64040415 0 IO-APIC-fasteoi 3w-xxxx\n 48: 668225084 0 IO-APIC-fasteoi eth1000\n\nSee how the highly active 3Ware 8500-8 (3w-xxxx) disk controller and the\nIntel EtherExpress 10/100/1000 (eth1000) have their own private\ninterrupt lines on interrupts 28 and 48 ? Without APICs they might be\nforced to share, or at least be placed on the same interrupt as (eg) a\nUSB controller, a PATA disk controller, or whatever. That might force\nthe OS to do work for those devices too when it receives an interrupt on\nthat IRQ line. Not ideal.\n\n(Interestingly, this is a real dual-CPU system but all interrupts are\nbeing serviced by the first CPU. Whoops. apt-get install irqbalance).\n\n\n--\nCraig Ringer\n\n", "msg_date": "Thu, 17 Sep 2009 16:00:31 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: noapic option" }, { "msg_contents": "Craig,\n\nThank you for the detailed reply. Thanks for walking me through the\nthought process. 
Also thanks for serendipitous irqbalance suggestion.\n\nCheers\n\nOn Sep 17, 1:00 am, [email protected] (Craig Ringer) wrote:\n> On Mon, 2009-09-14 at 14:48 -0700, C Storm wrote:\n> > In this linux mag article (http://www.linux-mag.com/cache/7516/1.html)\n> > the author describes a performance problem\n> > brought on by using the noapic boot time kernel option.  Has anyone\n> > investigated whether postgres performs better\n> > with/without the noapic option?\n>\n> It probably depends a lot on how your devices are arranged on the PCI\n> and PCIe bus(es) and how the kernel/bios assigns interrupt lines. If\n> busy/active devices share interrupts with other devices, especially if\n> those devices take significant work to poll when an interrupt is\n> received, it could have a nasty effect on performance. On the other\n> hand, if your high-load devices like NICs and disk controller(s) don't\n> land up sharing interrupts, AFAIK it may not make much difference. I\n> don't know how much difference the local APIC(s) and IO-APIC make as\n> compared to the 8259 PIC when shared interrupts aren't an issue.\n>\n> Then again, I'm surprised any modern machine can run without an IO-APIC.\n> Isn't it required for SMP or multi-core operation?\n>\n> This article might help provide some information. While it's about\n> Windows and is on MSDN, the principles it describes about how the local\n> APIC(s) and IO-APIC(s) help should apply equally well to Linux and other\n> systems.\n>\n> http://www.microsoft.com/whdc/archive/io-apic.mspx\n>\n> I don't think the issues with synchronization primitives really apply on\n> *nix systems, but the issues with interrupt latency certainly do.\n>\n> As you can see from the article, having a working system of local and\n> I/O APICs should dramatically reduce wasted bus I/O resources and CPU\n> time required to service interrupts especially on highly shared\n> interrupt lines. Consider one of the servers here:\n>\n> $ cat /proc/interrupts\n>            CPU0       CPU1      \n>   0:         90          0   IO-APIC-edge      timer\n>   1:         16          0   IO-APIC-edge      i8042\n>   4:       1368          0   IO-APIC-edge      serial\n>   6:          3          0   IO-APIC-edge      floppy\n>   8:          0          0   IO-APIC-edge      rtc0\n>   9:          0          0   IO-APIC-fasteoi   acpi\n>  14:          0          0   IO-APIC-edge      ide0\n>  15:          0          0   IO-APIC-edge      ide1\n>  28:   64040415          0   IO-APIC-fasteoi   3w-xxxx\n>  48:  668225084          0   IO-APIC-fasteoi   eth1000\n>\n> See how the highly active 3Ware 8500-8 (3w-xxxx) disk controller and the\n> Intel EtherExpress 10/100/1000 (eth1000) have their own private\n> interrupt lines on interrupts 28 and 48 ? Without APICs they might be\n> forced to share, or at least be placed on the same interrupt as (eg) a\n> USB controller, a PATA disk controller, or whatever. That might force\n> the OS to do work for those devices too when it receives an interrupt on\n> that IRQ line. Not ideal.\n>\n> (Interestingly, this is a real dual-CPU system but all interrupts are\n> being serviced by the first CPU. Whoops. 
apt-get install irqbalance).\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 17 Sep 2009 11:03:33 -0700 (PDT)", "msg_from": "\"christian.storm\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: noapic option" } ]
[ { "msg_contents": "Hello\nI have a database where I daily create a table.\nEvery day it is being inserted with ~3mln rows and each of them is being\nupdated two times.The process lasts ~24 hours so the db load is the same at\nall the time. total size of the table is ~3GB.\n\nMy current vacuum settings are:\nautovacuum = on\nautovacuum_max_workers = 3\nautovacuum_freeze_max_age = 2000000000 (changed from 200000000)\nvacuum_freeze_min_age = 100000000\n\nI have over 250 mln of frozen ids.\n# SELECT datname, age(datfrozenxid) FROM pg_database;\n datname | age\n------------+-----------\nmy_database | 256938425\n\nand every day (since max age exceeded 200mln.) the current table is being\nvacuumed two hours after it was created.\n\nMy goal is to set the vacuum properties so the current table is not vacuumed\nwhen it is used. And to vacuum it manually one day after it was used.\n\nIs it enough to set\nautovacuum=off\nautovacuum_freeze_max_age=2000000000\nvacuum_freeze_min_age = 100000000\nand shedule in cron daily vacuum on selected table?\n\n\nThanks in advance for your help.\n\n-- \nLudwik Dyląg\n\nHelloI have a database where I daily create a table.Every day it is being inserted with ~3mln rows and each of them is being updated two times.The process lasts ~24 hours so the db load is the same at all the time. total size of the table is ~3GB.\nMy current vacuum settings are:autovacuum = onautovacuum_max_workers = 3autovacuum_freeze_max_age = 2000000000 (changed from 200000000)vacuum_freeze_min_age = 100000000\nI have over 250 mln of frozen ids.# SELECT datname, age(datfrozenxid) FROM pg_database;  datname   |    age------------+-----------my_database | 256938425\nand every day (since max age exceeded 200mln.) the current table is being vacuumed two hours after it was created.My goal is to set the vacuum properties so the current table is not vacuumed when it is used. And to vacuum it manually one day after it was used.\nIs it enough to setautovacuum=offautovacuum_freeze_max_age=2000000000vacuum_freeze_min_age = 100000000and shedule in cron daily vacuum on selected table?\nThanks in advance for your help.-- Ludwik Dyląg", "msg_date": "Tue, 15 Sep 2009 10:06:21 +0200", "msg_from": "Ludwik Dylag <[email protected]>", "msg_from_op": true, "msg_subject": "disable heavily updated (but small) table auto-vecuuming" }, { "msg_contents": "2009/9/15 Ludwik Dylag <[email protected]>:\n> Hello\n> I have a database where I daily create a table.\n> Every day it is being inserted with ~3mln rows and each of them is being\n> updated two times.The process lasts ~24 hours so the db load is the same at\n> all the time. total size of the table is ~3GB.\n> My current vacuum settings are:\n> autovacuum = on\n> autovacuum_max_workers = 3\n> autovacuum_freeze_max_age = 2000000000 (changed from 200000000)\n> vacuum_freeze_min_age = 100000000\n> I have over 250 mln of frozen ids.\n> # SELECT datname, age(datfrozenxid) FROM pg_database;\n>   datname   |    age\n> ------------+-----------\n> my_database | 256938425\n> and every day (since max age exceeded 200mln.) the current table is being\n> vacuumed two hours after it was created.\n> My goal is to set the vacuum properties so the current table is not vacuumed\n> when it is used. 
And to vacuum it manually one day after it was used.\n> Is it enough to set\n> autovacuum=off\n> autovacuum_freeze_max_age=2000000000\n> vacuum_freeze_min_age = 100000000\n> and shedule in cron daily vacuum on selected table?\n\nHow about just disabling autovacuum for that table?\n\nhttp://www.postgresql.org/docs/current/static/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\n\n...Robert\n", "msg_date": "Tue, 15 Sep 2009 11:44:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disable heavily updated (but small) table\n\tauto-vecuuming" } ]
[ { "msg_contents": "Hello all..\nI'm using PostgreSQL 8.3..\nHow can I get information about the hardware utilization:\n - CPU usage.\n - Disk space.\n - Memory allocation.\nthank you.\n", "msg_date": "Tue, 15 Sep 2009 02:27:41 -0700 (PDT)", "msg_from": "std pik <[email protected]>", "msg_from_op": true, "msg_subject": "statistical table" }, { "msg_contents": "--- On Tue, 9/15/09, std pik <[email protected]> wrote:\n\nFrom: std pik <[email protected]>\nSubject: [PERFORM] statistical table\nTo: [email protected]\nDate: Tuesday, September 15, 2009, 9:27 AM\n\nHello all..\nI'm using PostgreSQL 8.3..\nHow can I get information about the hardware utilization:\n        - CPU usage.\n        - Disk space.\n        - Memory allocation.\nthank you.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nin GNU/linux like debian you can use the sysstat package\n\nIng. Lennin Caro Pérez\n\nUsuario:GNU/LINUX\n\nPHP Developer\n\nPostgreSQL DBA\n\nOracle DBA\n\nLinux counter id 474393\n\n\n\n \n--- On Tue, 9/15/09, std pik <[email protected]> wrote:From: std pik <[email protected]>Subject: [PERFORM] statistical tableTo: [email protected]: Tuesday, September 15, 2009, 9:27 AMHello all..I'm using PostgreSQL 8.3..How can I get information about the hardware utilization:        - CPU usage.        - Disk space.        - Memory allocation.thank you.-- Sent via pgsql-performance mailing list ([email protected])To make changes to your\n subscription:http://www.postgresql.org/mailpref/pgsql-performancein GNU/linux like debian you can use the sysstat packageIng. Lennin Caro Pérez\nUsuario:GNU/LINUX\nPHP Developer\nPostgreSQL DBA\nOracle DBA\nLinux counter id 474393", "msg_date": "Thu, 17 Sep 2009 06:43:06 -0700 (PDT)", "msg_from": "Lennin Caro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: statistical table" }, { "msg_contents": "On Tue, Sep 15, 2009 at 02:27:41AM -0700, std pik wrote:\n> Hello all..\n> I'm using PostgreSQL 8.3..\n> How can I get information about the hardware utilization:\n> - CPU usage.\n> - Disk space.\n> - Memory allocation.\n> thank you.\n\nIn general, use the utilities provided by your operating system. There are a\nseries of functions that will tell you the size on disk of various database\nobjects. See \"Database Object Size Functions\" on this page:\nhttp://www.postgresql.org/docs/8.4/interactive/functions-admin.html\n\nFor CPU or memory usage information, use your operating system.\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com", "msg_date": "Thu, 17 Sep 2009 07:43:19 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: statistical table" } ]
[ { "msg_contents": "\nHello,\n\nIn the same context that my previous thread on this mailing list (the\ndatabase holding 500k articles of a french daily newspaper), we now\nneed to handle the users' comments on the articles (1 million for now,\nquickly growing).\n\nIn our context, we'll have three kind of queries :\n\n- queries on articles only ;\n\n- queries on comments only ;\n\n- queries on both articles and comments.\n\nWe tried to use the partitionning feature described at\nhttp://www.postgresql.org/docs/8.4/static/ddl-partitioning.html , with three\ntables :\n\n- libeindex (master table, no data)\n\n- libearticle (articles)\n\n- libecontribution (comments)\n\nThe schema looks like :\n\nCREATE TABLE libeindex (\n\n id integer,\n classname varchar(255),\n createdAt timestamp,\n modifiedAt timestamp,\n...\n PRIMARY KEY (classname, id)\n);\n\n\nCREATE TABLE libecontribution (\n CHECK (classname = 'contribution'), \n PRIMARY KEY (classname, id)\n) INHERITS (libeindex) ;\n\nCREATE TABLE libearticle (\n CHECK (classname = 'article'), \n PRIMARY KEY (classname, id)\n) INHERITS (libeindex) ;\n\nWith many indexes are created on the two subtables, including :\nCREATE INDEX libearticle_createdAt_index ON libearticle (createdAt);\nCREATE INDEX libearticle_class_createdAt_index ON libearticle (classname, createdAt);\n\nThe problem we have is that with the partionned table, PostgreSQL is\nnow unable to use the \"index scan backwards\" query plan on a simple\n\"order by limit\" query.\n\nFor example :\n\nlibepart=> explain analyze SELECT classname, id FROM libeindex WHERE (classname IN ('article')) ORDER BY createdAt DESC LIMIT 50;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=114980.14..114980.27 rows=50 width=20) (actual time=4070.953..4071.076 rows=50 loops=1)\n -> Sort (cost=114980.14..116427.34 rows=578878 width=20) (actual time=4070.949..4070.991 rows=50 loops=1)\n Sort Key: public.libeindex.createdat\n Sort Method: top-N heapsort Memory: 28kB\n -> Result (cost=0.00..95750.23 rows=578878 width=20) (actual time=0.068..3345.727 rows=578877 loops=1)\n -> Append (cost=0.00..95750.23 rows=578878 width=20) (actual time=0.066..2338.575 rows=578877 loops=1)\n -> Index Scan using libeindex_pkey on libeindex (cost=0.00..8.27 rows=1 width=528) (actual time=0.011..0.011 rows=0 loops=1)\n Index Cond: ((classname)::text = 'article'::text)\n -> Seq Scan on libearticle libeindex (cost=0.00..95741.96 rows=578877 width=20) (actual time=0.051..1364.296 rows=578877 loops=1)\n Filter: ((classname)::text = 'article'::text)\n Total runtime: 4071.195 ms\n(11 rows)\n\nlibepart=> explain analyze SELECT classname, id FROM libearticle WHERE (classname IN ('article')) ORDER BY createdAt DESC LIMIT 50;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..9.07 rows=50 width=20) (actual time=0.033..0.200 rows=50 loops=1)\n -> Index Scan Backward using libearticle_createdat_index on libearticle (cost=0.00..105053.89 rows=578877 width=20) (actual time=0.030..0.112 rows=50 loops=1)\n Filter: ((classname)::text = 'article'::text)\n Total runtime: 0.280 ms\n(4 rows)\n\nAs you can see, PostgreSQL doesn't realize that the table \"libeindex\"\nis in fact empty, and that it only needs to query the subtable, on\nwhich it can use the \"Index Scan Backward\" 
query plan.\n\nIs this a known limitation of the partitioning method? If so, it would\nbe worth mentioning in the documentation. If not, is there a\nway to work around the problem?\n\nRegards,\n\n-- \nGaël Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nGérez vos contacts et vos newsletters : www.cockpit-mailing.com\n", "msg_date": "Tue, 15 Sep 2009 14:58:15 +0200", "msg_from": "[email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot)", "msg_from_op": true, "msg_subject": "Problem with partitionning and orderby query plans" } ]
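To make the workaround question concrete: until the planner can push ORDER BY ... LIMIT down into partitions, a common approach is to take the top rows from each child table separately, where the backward index scan does apply, and then merge the small combined set. The sketch below is illustrative only; it assumes the two partitions shown above are the only ones and reuses the thread's table and column names.

-- take the top 50 from each partition, then re-sort the (at most 100) candidates
SELECT classname, id
FROM (
      (SELECT classname, id, createdAt
         FROM libearticle
        WHERE classname = 'article'
        ORDER BY createdAt DESC
        LIMIT 50)
      UNION ALL
      (SELECT classname, id, createdAt
         FROM libecontribution
        WHERE classname = 'article'
        ORDER BY createdAt DESC
        LIMIT 50)
     ) AS candidates
ORDER BY createdAt DESC
LIMIT 50;

Each parenthesised branch can use its own index on createdAt backwards and returns at most 50 rows, so the outer sort is cheap. When the requested classname maps to a single partition, the query can simply target that child table directly, as the second EXPLAIN in the thread already shows.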
[ { "msg_contents": "Is there a rule of thumb for the extra load that will be put on a\nsystem when statement stats are turned on?\n\nAnd if so, where does that extra load go? Disk? CPU? RAM?\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Tue, 15 Sep 2009 14:10:52 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "statement stats extra load?" }, { "msg_contents": "Alan McKay wrote:\n> Is there a rule of thumb for the extra load that will be put on a\n> system when statement stats are turned on?\n> \n> And if so, where does that extra load go? Disk? CPU? RAM?\n\nAs of 8.4.X the load isn't measurable.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 21 Sep 2009 17:19:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: statement stats extra load?" }, { "msg_contents": ">> And if so, where does that extra load go?    Disk?  CPU?  RAM?\n>\n> As of 8.4.X the load isn't measurable.\n\nThanks Bruce. What about 8.3 since that is our current production DB?\n\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Mon, 21 Sep 2009 17:27:48 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: statement stats extra load?" }, { "msg_contents": "Alan McKay wrote:\n> >> And if so, where does that extra load go? ? ?Disk? ?CPU? ?RAM?\n> >\n> > As of 8.4.X the load isn't measurable.\n> \n> Thanks Bruce. What about 8.3 since that is our current production DB?\n\nSame. All statsistics settings that are enabled by default have\nnear-zero overhead. Is there a specific setting you are thinking of?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 21 Sep 2009 17:41:37 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: statement stats extra load?" }, { "msg_contents": "On 21 sep 2009, at 23.41, Bruce Momjian <[email protected]> wrote:\n\n> Alan McKay wrote:\n>>>> And if so, where does that extra load go? ? ?Disk? ?CPU? ?RAM?\n>>>\n>>> As of 8.4.X the load isn't measurable.\n>>\n>> Thanks Bruce. What about 8.3 since that is our current production \n>> DB?\n>\n> Same. All statsistics settings that are enabled by default have\n> near-zero overhead. Is there a specific setting you are thinking of?\n\nThat's not true at all.\n\nIf you have many relations in your cluster that have at some point \nbeen touched, the starts collector can create a *significant* load on \nthe I/o system. I've come across several cases where the only choice \nwas to disable the collector completely, even given all the drawbacks \nfrom that.\n\n8.4 makes this *a lot* better with two new features. One enabled by \ndefault (write stats file on demand) and one you have to enable \nmanually (stats file location). Using both these together can pretty \nmuch get rid of the issue, but there's no way in 8.3.\n\n/Magnus\n\n", "msg_date": "Tue, 22 Sep 2009 08:42:53 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: statement stats extra load?" 
}, { "msg_contents": "On Tue, Sep 22, 2009 at 2:42 AM, Magnus Hagander <[email protected]> wrote:\n> That's not true at all.\n>\n> If you have many relations in your cluster that have at some point been\n> touched, the starts collector can create a *significant* load on the I/o\n> system. I've come across several cases where the only choice was to disable\n> the collector completely, even given all the drawbacks from that.\n\nThanks Magnus, I thought that other response sounded a bit fanciful :-)\n\nSo is there any way to predict the load this will have? Or just try\nit and hope for the best? :-)\n\nRight now on our 8.3 system it is off and we'd like to turn it on\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Tue, 22 Sep 2009 09:19:44 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: statement stats extra load?" }, { "msg_contents": "On Tue, Sep 22, 2009 at 15:19, Alan McKay <[email protected]> wrote:\n> On Tue, Sep 22, 2009 at 2:42 AM, Magnus Hagander <[email protected]> wrote:\n>> That's not true at all.\n>>\n>> If you have many relations in your cluster that have at some point been\n>> touched, the starts collector can create a *significant* load on the I/o\n>> system. I've come across several cases where the only choice was to disable\n>> the collector completely, even given all the drawbacks from that.\n>\n> Thanks Magnus, I thought that other response sounded a bit fanciful :-)\n>\n> So is there any way to predict the load this will have?   Or just try\n> it and hope for the best?  :-)\n\nIIRC, the size of the statsfile will be:\n* Some header data (small)\n* For each database, not much data (IIRC about 10-15 32-bit values, so\nless than 100 bytes)\n* For each table, around 25 32-bit values, so somewhere around 100 bytes\n\nIt's the table stuff that can increase the size, unless you have very\nmany databases with just one or so tables in them. The table stats\nwill also be written for system tables.\n\nThis file will be written twice per second on 8.3 and earlier (on 8.4,\nonly on demand). It will be written as a new file and then renamed\ninto place, so there is also filesystem operations being created -\nwhich unfortunately are on your main data drive (unless, again, you're\non 8.4 and moved it to tmpfs)\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Wed, 23 Sep 2009 09:58:30 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: statement stats extra load?" } ]
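To put rough numbers on the estimate above: a cluster with, say, 50,000 touched tables would have a stats file of roughly 50,000 x 100 bytes, about 5 MB, and on 8.3 that file is rewritten twice per second, on the order of 10 MB/s of writes landing on the data disk. On 8.4 the file is written on demand and its location can be moved off the data disk entirely. One possible setup is sketched below; the mount point, the 32 MB size, and the assumption that the directory is writable by the postgres user are placeholders, not recommendations.

# /etc/fstab: a small RAM-backed filesystem for the statistics temp file
tmpfs  /var/run/pgsql_stats_tmp  tmpfs  size=32M  0 0

# postgresql.conf (8.4 or later)
stats_temp_directory = '/var/run/pgsql_stats_tmp'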
[ { "msg_contents": "Hi All,\n\nI have a large quantity of temporal data, 6 billion rows, which I would\nlike to put into a table so I can exploit SQL datetime queries. Each row\nrepresents a geophysical observation at a particular time and place. The\ndata is effectively read-only - i.e. very infrequent updates will be\nperformed. The rows are very 'narrow' (~24bytes of data per row).\n\nWhen I ingest each data into PostgreSQL a row at a time I discovered\nthat the row over-head is significant (pg 8.3.7). The projected\nresources required to host this table prohibit this simple approach.\n\nIn order to reduce the cost of the row over head, I tried storing a\nwhole minutes worth of data in an array, and now I only require one row\nper minute. Total rows decreased by 60, resources required became\nrealistic.\n\nMy schema is thus:\n\nCREATE TABLE geodata1sec (obstime TIMESTAMP WITHOUT TIME ZONE NOT NULL,\nstatid SMALLINT NOT NULL, geovalue_array REAL[3][60] NOT NULL);\nand after ingesting, I add these indexes:\nALTER TABLE geodata1sec ADD PRIMARY KEY (obstime, statid);\nCREATE INDEX geodata1sec_statid_idx ON geodata1sec (statid);\n\nStoring whole minutes in a row with the data in an array has the desired\neffect of making the table size on disk, and index size in memory,\nmanageable. However, my queries now need to be sensitive that I've made\nthis schema design decision. The following query runs nice and quick but\nobviously doesn't return all the relevant results (because second\nresolution is specified):\n\nEXPLAIN ANALYZE SELECT * FROM geodata1sec WHERE obstime BETWEEN\n'2004-10-21 02:03:04' AND '2004-10-21 02:04:08';\nQUERY\nPLAN \n--------------------------------------------------------------------------\n Index Scan using geodata1sec_pkey on geodata1sec (cost=0.00..38.19\nrows=12 width=762) (actual time=0.071..0.148 rows=13 loops=1)\n Index Cond: ((obstime >= '2004-10-21 02:03:04'::timestamp without\ntime zone) AND (obstime <= '2004-10-21 02:04:08'::timestamp without time\nzone))\nTotal runtime: 0.292 ms\n(3 rows)\n\n\n... So, I constructed a view which would present my data as I originally\nintended. This also means that I don't have to give my applications\ndetailed knowledge of the schema. The view is:\n\n\nCREATE VIEW geodataview AS SELECT obstime + (s.a*5 || '\nseconds')::INTERVAL AS obstime, statid, geovalue_array[s.a+1][1] AS\nx_mag, geovalue_array[s.a+1][2] AS y_mag, geovalue_array[s.a+1][3] AS\nz_mag FROM generate_series(0, 11) AS s(a), geodata1sec;\n\nSo my query returns _all_ the relevant data. However, this query takes a\nlong time. 
If I analyse the query I get: \n\nEXPLAIN ANALYZE SELECT * FROM geodataview WHERE obstime BETWEEN\n'2004-10-21 02:03:04' AND '2004-10-21 02:04:08';\nQUERY PLAN \n--------------------------------------------------------------------------\n Nested Loop (cost=13.50..2314276295.50 rows=4088000000 width=766)\n(actual time=2072612.668..3081010.104 rows=169 loops=1)\n Join Filter: (((geodata1sec.obstime + ((((s.a * 5))::text || '\nseconds'::text))::interval) >= '2004-10-21 02:03:04'::timestamp without\ntime zone) AND ((geodata1sec.obstime + ((((s.a * 5))::text || '\nseconds'::text))::interval) <= '2004-10-21 02:04:08'::timestamp without\ntime zone))\n -> Seq Scan on geodata1sec (cost=0.00..4556282.00 rows=36792000\nwidth=762) (actual time=17.072..414620.213 rows=36791999 loops=1)\n -> Materialize (cost=13.50..23.50 rows=1000 width=4) (actual\ntime=0.002..0.027 rows=12 loops=36791999)\n -> Function Scan on generate_series s (cost=0.00..12.50\nrows=1000 width=4) (actual time=0.075..0.102 rows=12 loops=1)\nTotal runtime: 3081010.613 ms\n(6 rows)\n\n\nThis is clearly not going to perform for any practical applications.\nHowever, it struck me that others might have needed similar\nfunctionality for time data so I thought I would air my experience here.\n\nIs it feasible to modify the query planner to make better decisions when\ndealing with time data behind a view?\n\nAre there any alternatives to vanilla Postgresql for storing this type\nof data? I'm imagining PostGIS but for time based data?\n\n\nYour time and thoughts are appreciated,\nCheers,\nRichard\n-- \nScanned by iCritical.\n", "msg_date": "Thu, 17 Sep 2009 13:55:07 +0100", "msg_from": "Richard Henwood <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing for temporal data behind a view" }, { "msg_contents": "Hi Richard,\n\n> CREATE VIEW geodataview AS SELECT obstime + (s.a*5 || '\n> seconds')::INTERVAL AS obstime, statid, geovalue_array[s.a+1][1] AS\n> x_mag, geovalue_array[s.a+1][2] AS y_mag, geovalue_array[s.a+1][3] AS\n> z_mag FROM generate_series(0, 11) AS s(a), geodata1sec;\n\nTo my (admittedly untrained) eye, it seems that the JOIN that will\nimplicitly happen (generate_series(0,11) and geodata1sec) will be over\nall records in geodata1sec, and the explain analyze of the view you\nposted seems to corroborate that. 
(I suspect that the JOIN also kills\nthe time filter for geodata1sec, which would worsen things.)\n\n> EXPLAIN ANALYZE SELECT * FROM geodataview WHERE obstime BETWEEN\n> '2004-10-21 02:03:04' AND '2004-10-21 02:04:08';\n\n>  Nested Loop  (cost=13.50..2314276295.50 rows=4088000000 width=766)\n> (actual time=2072612.668..3081010.104 rows=169 loops=1)\n>   Join Filter: (((geodata1sec.obstime + ((((s.a * 5))::text || '\n> seconds'::text))::interval) >= '2004-10-21 02:03:04'::timestamp without\n> time zone) AND ((geodata1sec.obstime + ((((s.a * 5))::text || '\n> seconds'::text))::interval) <= '2004-10-21 02:04:08'::timestamp without\n> time zone))\n>   ->  Seq Scan on geodata1sec  (cost=0.00..4556282.00 rows=36792000\n> width=762) (actual time=17.072..414620.213 rows=36791999 loops=1)\n\nThe seqscan should return only 12 rows (as per your original explain\nanalyze output), but actually returns 37 million.\n\n> This is clearly not going to perform for any practical applications.\n> However, it struck me that others might have needed similar\n> functionality for time data so I thought I would air my experience here.\n>\n> Is it feasible to modify the query planner to make better decisions when\n> dealing with time data behind a view?\n\nYou could use table partitioning and split your geodata1sec table into\n(say) one table per hour, which can then hold a lot fewer records to\nJOIN with. (with PG 8.3.7 you need to explicitly enable\nconstraint_exclusion in the config file for this to work).\n\nYou could change the view to be a stored proc instead, but I'm\nguessing you don't want to (or cannot) change the application which\nmakes the query.\n\nYou could also change the view to call a stored procedure that does, in essence,\nfor i in (0..11); do { query geodata1sec for t+i; } and return the\nresulting recordset, which might be faster.\n\nIf you're dealing with mostly invariant-after-insert data, you can use\npartitioning then CLUSTER any tables that won't be touched on an\nappropriate column so the seqscan (if there is one) is faster, and\nvacuum analyze the table once it's clustered.\n\n> Are there any alternatives to vanilla Postgresql for storing this type\n> of data? I'm imagining PostGIS but for time based data?\n\nI recently had to deal with something similar (though not on your\nscale) for network monitoring - the thread is available at\n http://archives.postgresql.org/pgsql-performance/2009-08/msg00275.php\n\n\nCheers,\nHrishi\n", "msg_date": "Thu, 17 Sep 2009 07:54:39 -0700", "msg_from": "\n =?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?=\n\t=?UTF-8?B?4KWHKQ==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing for temporal data behind a view" } ]
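Following the suggestion to push the time filter down before the array is expanded, a query of the following shape keeps an indexable range condition on geodata1sec.obstime (the requested start rounded down to the minute row that contains it) and applies the exact per-sample filter only to the rows that survive. It mirrors the view definition from the thread and is only a sketch; it could also be wrapped in a set-returning function so applications keep a view-like interface.

SELECT g.obstime + (s.a * 5 || ' seconds')::interval AS obstime,
       g.statid,
       g.geovalue_array[s.a + 1][1] AS x_mag,
       g.geovalue_array[s.a + 1][2] AS y_mag,
       g.geovalue_array[s.a + 1][3] AS z_mag
FROM geodata1sec AS g
CROSS JOIN generate_series(0, 11) AS s(a)
WHERE g.obstime >= date_trunc('minute', timestamp '2004-10-21 02:03:04')  -- coarse filter, can use the index
  AND g.obstime <= timestamp '2004-10-21 02:04:08'
  AND g.obstime + (s.a * 5 || ' seconds')::interval
      BETWEEN timestamp '2004-10-21 02:03:04' AND timestamp '2004-10-21 02:04:08';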
[ { "msg_contents": "Hi,\n\nIs there any downsides of using BETWEEN two identical values ?\n\n(Postgres 8.3.6, Debian Linux 2.6.18-6-amd64)\n\n\nThe problem is in this index:\nCREATE INDEX ibds_contratacao_fatura1\n ON bds_contratacao_fatura USING btree (fat_referencia);\n\n\"fat_referencia\" is a field that tells year and month, and the \nconditions are generated by an external application.\nThis field is not PK, and repeats only 29 times:\n# select count(distinct(fat_referencia)) from bds_contratacao_fatura;\n count\n-------\n 29\n\nBelow is the result of explians, and the index conditions:\n\nCondition 1:\n# select fat_referencia from bds_contratacao_fatura where fat_referencia \nBETWEEN 200908 AND 200908;\n Index Scan using ibds_contratacao_fatura1 on bds_contratacao_fatura \n(cost=0.00..5.64 rows=1 width=4) (actual time=0.023..79.952 rows=163689 \nloops=1)\n Index Cond: ((fat_referencia >= 200908) AND (fat_referencia <= 200908))\n Total runtime: 110.470 ms\n\nCondition 2:\n# select fat_referencia from bds_contratacao_fatura where fat_referencia \nBETWEEN 200906 AND 200908;\nIndex Scan using ibds_contratacao_fatura1 on bds_contratacao_fatura \n(cost=0.00..14773.88 rows=414113 width=4) (actual time=8.450..653.882 \nrows=496723 loops=1)\n Index Cond: ((fat_referencia >= 200906) AND (fat_referencia <= 200908))\n Total runtime: 748.314 ms\n\nCondition 3:\n# select fat_referencia from bds_contratacao_fatura where fat_referencia \n= 200908;\nIndex Scan using ibds_contratacao_fatura1 on bds_contratacao_fatura \n(cost=0.00..4745.07 rows=142940 width=4) (actual time=0.022..77.818 \nrows=163689 loops=1)\n Index Cond: (fat_referencia = 200908)\n Total runtime: 108.292 ms\n\n\n\nI expect Postgres would give me the same plan in conditions 1 and 3.\nIn condition 2, the plan seems ok and well estimated.\nThe solution per now is change the application to use \"BETWEEN\" olny \nwhen year and month are not the same.\n\nHow can condition 1 be so badly estimated?\n\n-- \n\n[]´s,\n\nAndré Volpato\n\n\n\n", "msg_date": "Thu, 17 Sep 2009 11:53:43 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Use of BETWEEN with identical values" }, { "msg_contents": "André Volpato escreveu:\n> (...)\n>\n> (Postgres 8.3.6, Debian Linux 2.6.18-6-amd64)\n>\n> (...)\n\n> Condition 1:\n> # select fat_referencia from bds_contratacao_fatura where \n> fat_referencia BETWEEN 200908 AND 200908;\n> Index Scan using ibds_contratacao_fatura1 on bds_contratacao_fatura \n> (cost=0.00..5.64 rows=1 width=4) (actual time=0.023..79.952 \n> rows=163689 loops=1)\n> Index Cond: ((fat_referencia >= 200908) AND (fat_referencia <= 200908))\n> Total runtime: 110.470 ms\n\n> Condition 3:\n> # select fat_referencia from bds_contratacao_fatura where \n> fat_referencia = 200908;\n> Index Scan using ibds_contratacao_fatura1 on bds_contratacao_fatura \n> (cost=0.00..4745.07 rows=142940 width=4) (actual time=0.022..77.818 \n> rows=163689 loops=1)\n> Index Cond: (fat_referencia = 200908)\n> Total runtime: 108.292 ms\n>\n> I expect Postgres would give me the same plan in conditions 1 and 3.\n\nAnd also the core team...\n\nThis behaviour is 8.3 related. 
In 8.4, conditions 1 and 3 results in the \nsame plan.\n\n\n-- \n\n[]´s,\n\nAndré Volpato\n\n\n", "msg_date": "Thu, 17 Sep 2009 18:02:08 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use of BETWEEN with identical values" }, { "msg_contents": "On Thu, Sep 17, 2009 at 5:02 PM, André Volpato\n<[email protected]> wrote:\n> André Volpato escreveu:\n>>\n>> (...)\n>>\n>> (Postgres 8.3.6, Debian Linux 2.6.18-6-amd64)\n>>\n>> (...)\n>\n>> Condition 1:\n>> # select fat_referencia from bds_contratacao_fatura where fat_referencia\n>> BETWEEN 200908 AND 200908;\n>> Index Scan using ibds_contratacao_fatura1 on bds_contratacao_fatura\n>>  (cost=0.00..5.64 rows=1 width=4) (actual time=0.023..79.952 rows=163689\n>> loops=1)\n>>  Index Cond: ((fat_referencia >= 200908) AND (fat_referencia <= 200908))\n>> Total runtime: 110.470 ms\n>\n>> Condition 3:\n>> # select fat_referencia from bds_contratacao_fatura where fat_referencia =\n>> 200908;\n>> Index Scan using ibds_contratacao_fatura1 on bds_contratacao_fatura\n>>  (cost=0.00..4745.07 rows=142940 width=4) (actual time=0.022..77.818\n>> rows=163689 loops=1)\n>>  Index Cond: (fat_referencia = 200908)\n>> Total runtime: 108.292 ms\n>>\n>> I expect Postgres would give me the same plan in conditions 1 and 3.\n>\n> And also the core team...\n>\n> This behaviour is 8.3 related. In 8.4, conditions 1 and 3 results in the\n> same plan.\n\nHmm. I don't see anything in the release notes about it, but it's not\nsurprising that the optimizer would be improved in a newer version.\n\n...Robert\n", "msg_date": "Fri, 18 Sep 2009 10:01:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of BETWEEN with identical values" } ]
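Since the planner fix only arrives with 8.4, the interim workaround described above amounts to having the query generator special-case equal bounds, for example:

-- emitted when the generated bounds differ
SELECT fat_referencia FROM bds_contratacao_fatura
 WHERE fat_referencia BETWEEN 200906 AND 200908;

-- emitted instead of BETWEEN x AND x when both bounds are equal,
-- which the 8.3 planner estimates correctly (condition 3 above)
SELECT fat_referencia FROM bds_contratacao_fatura
 WHERE fat_referencia = 200908;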
[ { "msg_contents": "Hi all,\n\non our PostgreSQL 8.3.1 (CentOS 5.3 64-bit) two different query plans\nfor one of our (weird) queries are generated. One of the query plans\nseems to be good (and is used most of the time). The other one is bad -\nthe query takes about 2 minutes and the database process, which is\nexecuting the query, is cpu bound during this time.\n\nAfter several tries I was able to reproduce the problem when executing\nthe query with EXPLAIN ANALYZE. The bad query plan was generated only\nseconds after the good one was used when executing the query. What's the\nreasond for the different query plans? Statistics are up to date.\n\nGood:\n\nEXPLAIN ANALYZE SELECT DISTINCT t6.objid\n FROM ataggval q1_1,\n atobjval t6,\n atobjval t7,\n atobjval t8,\n atobjval t9,\n cooobject t10\n WHERE q1_1.objid = t6.objid AND\n q1_1.attrid = 285774255985993 AND\n q1_1.aggrid = 0 AND\n t6.aggrid = q1_1.aggval AND\n t7.aggrid = 0 AND\n t7.objid = t6.objid AND\n t8.aggrid = 0 AND\n t8.objid = t6.objid AND\n t9.aggrid = 0 AND\n t9.objid = t6.objid AND\n t10.objid = t6.objid AND\n t6.objval = 285774255985589 AND\n t6.attrid=285774255985991 AND\n t7.objval = 625445988202446985 AND\n t7.attrid=285774255985855 AND\n t8.objval = 625445988286355913 AND\n t8.attrid=285774255985935 AND\n t9.objval = 625445988269570350 AND\n t9.attrid=285774255985938 AND\n t10.objclassid = 285774255985894 ORDER BY t6.objid;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------\n Unique (cost=66.58..66.59 rows=1 width=8) (actual\ntime=1548.207..1548.208 rows=1 loops=1)\n -> Sort (cost=66.58..66.58 rows=1 width=8) (actual\ntime=1548.206..1548.207 rows=1 loops=1)\n Sort Key: t6.objid\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..66.57 rows=1 width=8) (actual\ntime=1044.759..1548.190 rows=1 loops=1)\n Join Filter: (t6.objid = t7.objid)\n -> Nested Loop (cost=0.00..54.52 rows=1 width=40)\n(actual time=21.938..1541.633 rows=350 loops=1)\n Join Filter: (t6.objid = t8.objid)\n -> Nested Loop (cost=0.00..42.47 rows=1 width=32)\n(actual time=21.907..1422.710 rows=364 loops=1)\n Join Filter: (t6.objid = t9.objid)\n -> Nested Loop (cost=0.00..30.42 rows=1\nwidth=24) (actual time=0.151..920.873 rows=775 loops=1)\n -> Nested Loop (cost=0.00..21.97\nrows=1 width=16) (actual time=0.065..915.387 rows=775 loops=1)\n Join Filter: (q1_1.objid =\nt6.objid)\n -> Index Scan using ind_atobjval\non atobjval t6 (cost=0.00..12.04 rows=1 width=12) (actual\ntime=0.031..0.863 rows=775 loops=1)\n Index Cond: ((attrid =\n285774255985991::bigint) AND (objval = 285774255985589::bigint))\n -> Index Scan using ind_ataggval\non ataggval q1_1 (cost=0.00..9.92 rows=1 width=12) (actual\ntime=0.006..0.897 rows=1243 loops=775)\n Index Cond: ((q1_1.attrid =\n285774255985993::bigint) AND (q1_1.aggval = t6.aggrid))\n Filter: (q1_1.aggrid = 0)\n -> Index Scan using cooobjectix on\ncooobject t10 (cost=0.00..8.44 rows=1 width=8) (actual\ntime=0.005..0.006 rows=1 loops=775)\n Index Cond: (t10.objid =\nt6.objid)\n Filter: (t10.objclassid =\n285774255985894::bigint)\n -> Index Scan using ind_atobjval on atobjval\nt9 (cost=0.00..12.04 rows=1 width=8) (actual time=0.007..0.490 rows=694\nloops=775)\n Index Cond: ((t9.attrid =\n285774255985938::bigint) AND (t9.objval = 625445988269570350::bigint))\n Filter: (t9.aggrid = 0)\n -> Index Scan using ind_atobjval on atobjval t8\n(cost=0.00..12.04 rows=1 width=8) (actual 
time=0.007..0.248 rows=350\nloops=364)\n Index Cond: ((t8.attrid =\n285774255985935::bigint) AND (t8.objval = 625445988286355913::bigint))\n Filter: (t8.aggrid = 0)\n -> Index Scan using ind_atobjval on atobjval t7\n(cost=0.00..12.04 rows=1 width=8) (actual time=0.005..0.015 rows=13\nloops=350)\n Index Cond: ((t7.attrid = 285774255985855::bigint)\nAND (t7.objval = 625445988202446985::bigint))\n Filter: (t7.aggrid = 0)\n Total runtime: 1548.339 ms\n(31 rows)\n\n\nBad: \nEXPLAIN ANALYZE SELECT DISTINCT t6.objid\n FROM ataggval q1_1,\n atobjval t6,\n atobjval t7,\n atobjval t8,\n atobjval t9,\n cooobject t10\n WHERE q1_1.objid = t6.objid AND\n q1_1.attrid = 285774255985993 AND\n q1_1.aggrid = 0 AND\n t6.aggrid = q1_1.aggval AND\n t7.aggrid = 0 AND\n t7.objid = t6.objid AND\n t8.aggrid = 0 AND\n t8.objid = t6.objid AND\n t9.aggrid = 0 AND\n t9.objid = t6.objid AND\n t10.objid = t6.objid AND\n t6.objval = 285774255985589 AND\n t6.attrid=285774255985991 AND\n t7.objval = 625445988202446985 AND\n t7.attrid=285774255985855 AND\n t8.objval = 625445988286355913 AND\n t8.attrid=285774255985935 AND\n t9.objval = 625445988269570350 AND\n t9.attrid=285774255985938 AND\n t10.objclassid = 285774255985894 ORDER BY t6.objid;\n\n\n\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------\n Unique (cost=66.58..66.59 rows=1 width=8) (actual\ntime=172984.132..172984.133 rows=1 loops=1)\n -> Sort (cost=66.58..66.59 rows=1 width=8) (actual\ntime=172984.129..172984.129 rows=1 loops=1)\n Sort Key: t6.objid\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..66.57 rows=1 width=8) (actual\ntime=118105.762..172984.109 rows=1 loops=1)\n Join Filter: (t6.objid = t7.objid)\n -> Nested Loop (cost=0.00..54.52 rows=1 width=40)\n(actual time=2362.708..172976.313 rows=350 loops=1)\n Join Filter: (t6.objid = q1_1.objid)\n -> Nested Loop (cost=0.00..44.59 rows=1 width=36)\n(actual time=2362.628..172487.721 rows=350 loops=1)\n Join Filter: (t6.objid = t8.objid)\n -> Nested Loop (cost=0.00..20.49 rows=1\nwidth=20) (actual time=0.054..7.144 rows=775 loops=1)\n -> Index Scan using ind_atobjval on\natobjval t6 (cost=0.00..12.04 rows=1 width=12) (actual\ntime=0.032..0.953 rows=775 loops=1)\n Index Cond: ((attrid =\n285774255985991::bigint) AND (objval = 285774255985589::bigint))\n -> Index Scan using cooobjectix on\ncooobject t10 (cost=0.00..8.44 rows=1 width=8) (actual\ntime=0.006..0.007 rows=1 loops=775)\n Index Cond: (t10.objid =\nt6.objid)\n Filter: (t10.objclassid =\n285774255985894::bigint)\n -> Nested Loop (cost=0.00..24.09 rows=1\nwidth=16) (actual time=0.019..222.445 rows=350 loops=775)\n Join Filter: (t8.objid = t9.objid)\n -> Index Scan using ind_atobjval on\natobjval t8 (cost=0.00..12.04 rows=1 width=8) (actual time=0.009..0.296\nrows=350 loops=775)\n Index Cond: ((attrid =\n285774255985935::bigint) AND (objval = 625445988286355913::bigint))\n Filter: (aggrid = 0)\n -> Index Scan using ind_atobjval on\natobjval t9 (cost=0.00..12.04 rows=1 width=8) (actual time=0.009..0.475\nrows=694 loops=271250)\n Index Cond: ((t9.attrid =\n285774255985938::bigint) AND (t9.objval = 625445988269570350::bigint))\n Filter: (t9.aggrid = 0)\n -> Index Scan using ind_ataggval on ataggval q1_1\n(cost=0.00..9.92 rows=1 width=12) (actual time=0.009..1.114 rows=1248\nloops=350)\n Index Cond: ((q1_1.attrid =\n285774255985993::bigint) AND (q1_1.aggval = t6.aggrid))\n Filter: (q1_1.aggrid = 0)\n -> Index 
Scan using ind_atobjval on atobjval t7\n(cost=0.00..12.04 rows=1 width=8) (actual time=0.007..0.018 rows=13\nloops=350)\n Index Cond: ((t7.attrid = 285774255985855::bigint)\nAND (t7.objval = 625445988202446985::bigint))\n Filter: (t7.aggrid = 0)\n Total runtime: 172984.235 ms\n(31 rows)\n\nThanks\nRobert\n\n\n", "msg_date": "Fri, 18 Sep 2009 09:40:53 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Different query plans for the same query" }, { "msg_contents": "> Hi all,\n>\n> on our PostgreSQL 8.3.1 (CentOS 5.3 64-bit) two different query plans\n> for one of our (weird) queries are generated. One of the query plans\n> seems to be good (and is used most of the time). The other one is bad -\n> the query takes about 2 minutes and the database process, which is\n> executing the query, is cpu bound during this time.\n>\n> After several tries I was able to reproduce the problem when executing\n> the query with EXPLAIN ANALYZE. The bad query plan was generated only\n> seconds after the good one was used when executing the query. What's the\n> reasond for the different query plans? Statistics are up to date.\n>\n> ...\n\nHi,\n\nplease, when posting an explain plan, either save it into a file and\nprovide a URL (attachments are not allowed here), or use\nexplain.depesz.com or something like that. This wrapping makes the plan\nunreadable so it's much more difficult to help you.\n\nI've used the explain.depesz.com (this time):\n\n- good plan: http://explain.depesz.com/s/HX\n- bad plan: http://explain.depesz.com/s/gcr\n\nIt seems the whole problem is caused by the 'Index Scan using ind_atobjval\non atobjval t9' - in the first case it's executed only 775x, but in the\nsecond case it's moved to the nested loop (one level deeper) and suddenly\nit's executed 271250x. And that causes the huge increase in cost.\n\nWhy is this happening? I'm not sure, but I'm not quite sure the statistics\nare up to data and precise enough - some of the steps state 'rows=1'\nestimate, but 'rows=775' in the actual results.\n\nHave you tried to increase target on the tables? That might provide more\naccurate stats, thus better estimates.\n\nregards\nTomas\n\n", "msg_date": "Fri, 18 Sep 2009 11:19:59 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Different query plans for the same query" }, { "msg_contents": "Hi,\n\nsorry about that.\n\nWe use 100 as default_statistics_target for this database. The default should be 10 here - statistics are up to date I executed analyze manually this morning.\n\nAs mentioned before, the \"bad plan\" only happens once or twice a day - so the reproduction of that plan is very difficult. \n\nI now played a little bit with statistics target for those three tables (alter table ...set statistics). It seems that there is a better query plan than the good one when using 10 as statistics target.\n\nbad plan (sometimes with statistcs target 100, seconds after the good plan was chosen) - about 2 minutes: http://explain.depesz.com/s/gcr\ngood plan (most of the time with statistcs target 100) - about one second: http://explain.depesz.com/s/HX\nvery good plan (with statistics target 10) - about 15 ms: http://explain.depesz.com/s/qMc\n\nWhat's the reason for that? I always thought increasing default statistics target should make statistics (and query plans) better.\n\nRegards,\nRobert\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] [mailto:[email protected]] \nGesendet: Freitag, 18. 
September 2009 11:20\nAn: Hell, Robert\nCc: [email protected]\nBetreff: Re: [PERFORM] Different query plans for the same query\n\n> Hi all,\n>\n> on our PostgreSQL 8.3.1 (CentOS 5.3 64-bit) two different query plans\n> for one of our (weird) queries are generated. One of the query plans\n> seems to be good (and is used most of the time). The other one is bad -\n> the query takes about 2 minutes and the database process, which is\n> executing the query, is cpu bound during this time.\n>\n> After several tries I was able to reproduce the problem when executing\n> the query with EXPLAIN ANALYZE. The bad query plan was generated only\n> seconds after the good one was used when executing the query. What's the\n> reasond for the different query plans? Statistics are up to date.\n>\n> ...\n\nHi,\n\nplease, when posting an explain plan, either save it into a file and\nprovide a URL (attachments are not allowed here), or use\nexplain.depesz.com or something like that. This wrapping makes the plan\nunreadable so it's much more difficult to help you.\n\nI've used the explain.depesz.com (this time):\n\n- good plan: http://explain.depesz.com/s/HX\n- bad plan: http://explain.depesz.com/s/gcr\n\nIt seems the whole problem is caused by the 'Index Scan using ind_atobjval\non atobjval t9' - in the first case it's executed only 775x, but in the\nsecond case it's moved to the nested loop (one level deeper) and suddenly\nit's executed 271250x. And that causes the huge increase in cost.\n\nWhy is this happening? I'm not sure, but I'm not quite sure the statistics\nare up to data and precise enough - some of the steps state 'rows=1'\nestimate, but 'rows=775' in the actual results.\n\nHave you tried to increase target on the tables? That might provide more\naccurate stats, thus better estimates.\n\nregards\nTomas\n\n", "msg_date": "Fri, 18 Sep 2009 11:50:08 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different query plans for the same query" }, { "msg_contents": "\"Hell, Robert\" <[email protected]> writes:\n> bad plan (sometimes with statistcs target 100, seconds after the good plan was chosen) - about 2 minutes: http://explain.depesz.com/s/gcr\n> good plan (most of the time with statistcs target 100) - about one second: http://explain.depesz.com/s/HX\n> very good plan (with statistics target 10) - about 15 ms: http://explain.depesz.com/s/qMc\n\n> What's the reason for that?\n\nGarbage in, garbage out :-(. When you've got rowcount estimates that\nare off by a couple orders of magnitude, it's unsurprising that you get\nbad plan choices. In this case it appears that the \"bad\" and \"good\"\nplans have just about the same estimated cost. I'm guessing that the\nunderlying statistics change a bit due to autovacuum activity, causing\nthe plan choice to flip unexpectedly.\n\nThe real fix would be to get the rowcount estimates more in line with\nreality. I think the main problem is that in cases like\n\n -> Index Scan using ind_atobjval on atobjval t6 (cost=0.00..12.04 rows=1 width=12) (actual time=0.032..0.953 rows=775 loops=1)\n Index Cond: ((attrid = 285774255985991::bigint) AND (objval = 285774255985589::bigint))\n\nthe planner is supposing that the two conditions are independent when\nthey are not. 
Is there any way you can refactor the data representation\nto remove the hidden redundancy?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Sep 2009 11:42:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different query plans for the same query " }, { "msg_contents": "2009/9/18 <[email protected]>:\n>> Hi all,\n>>\n>> on our PostgreSQL 8.3.1 (CentOS 5.3 64-bit) two different query plans\n>> for one of our (weird) queries are generated. One of the query plans\n>> seems to be good (and is used most of the time). The other one is bad -\n>> the query takes about 2 minutes and the database process, which is\n>> executing the query, is cpu bound during this time.\n>>\n>> After several tries I was able to reproduce the problem when executing\n>> the query with EXPLAIN ANALYZE. The bad query plan was generated only\n>> seconds after the good one was used when executing the query. What's the\n>> reasond for the different query plans? Statistics are up to date.\n>>\n>> ...\n>\n> Hi,\n>\n> please, when posting an explain plan, either save it into a file and\n> provide a URL (attachments are not allowed here), or use\n> explain.depesz.com or something like that. This wrapping makes the plan\n> unreadable so it's much more difficult to help you.\n\nUh, since when are attachments not allowed here? I completely agree\nthat line-wrapping is BAD, but I don't agree that pastebin is good. I\nwould much rather have the relevant material in the email, or in an\nattachment, than in some other web site that may or may not format it\nreadably, may or may not be easy to cut and paste, and will definitely\nnot become part of our archives.\n\n...Robert\n", "msg_date": "Fri, 18 Sep 2009 13:50:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different query plans for the same query" }, { "msg_contents": "Hi Tom,\n\nit would be really hard for us to change the underlying tables and the executed query. Is there any other way for us to avoid the really bad query (e.g. a hint for the planner)?\n\nRegards,\nRobert Hell\n\n-----Ursprüngliche Nachricht-----\nVon: Tom Lane [mailto:[email protected]] \nGesendet: Freitag, 18. September 2009 17:43\nAn: Hell, Robert\nCc: [email protected]; [email protected]\nBetreff: Re: [PERFORM] Different query plans for the same query \n\n\"Hell, Robert\" <[email protected]> writes:\n> bad plan (sometimes with statistcs target 100, seconds after the good plan was chosen) - about 2 minutes: http://explain.depesz.com/s/gcr\n> good plan (most of the time with statistcs target 100) - about one second: http://explain.depesz.com/s/HX\n> very good plan (with statistics target 10) - about 15 ms: http://explain.depesz.com/s/qMc\n\n> What's the reason for that?\n\nGarbage in, garbage out :-(. When you've got rowcount estimates that\nare off by a couple orders of magnitude, it's unsurprising that you get\nbad plan choices. In this case it appears that the \"bad\" and \"good\"\nplans have just about the same estimated cost. I'm guessing that the\nunderlying statistics change a bit due to autovacuum activity, causing\nthe plan choice to flip unexpectedly.\n\nThe real fix would be to get the rowcount estimates more in line with\nreality. 
I think the main problem is that in cases like\n\n -> Index Scan using ind_atobjval on atobjval t6 (cost=0.00..12.04 rows=1 width=12) (actual time=0.032..0.953 rows=775 loops=1)\n Index Cond: ((attrid = 285774255985991::bigint) AND (objval = 285774255985589::bigint))\n\nthe planner is supposing that the two conditions are independent when\nthey are not. Is there any way you can refactor the data representation\nto remove the hidden redundancy?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Sep 2009 08:41:22 +0200", "msg_from": "\"Hell, Robert\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different query plans for the same query " } ]
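Core PostgreSQL has no planner hints, so without changing the data model the realistic levers on 8.3 are sketched below: giving the planner more detail about the columns whose combined selectivity it misjudges, and, as a last resort, steering just this one query away from the nested-loop plan. The table and column names come from the plans above; the statistics target of 1000 (the 8.3 maximum) is only a starting point, and, as noted above, higher targets cannot fully repair an estimate that assumes two correlated conditions are independent.

-- more granular statistics on the columns used together, then refresh them
ALTER TABLE atobjval ALTER COLUMN attrid SET STATISTICS 1000;
ALTER TABLE atobjval ALTER COLUMN objval SET STATISTICS 1000;
ANALYZE atobjval;

-- scope a planner override to a single query; SET LOCAL reverts at COMMIT/ROLLBACK
BEGIN;
SET LOCAL enable_nestloop = off;
-- ... run the problematic SELECT here ...
COMMIT;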
[ { "msg_contents": "Hi all,\n\nWe're using Postgresql 8.3.7 on Debian. We are seeing a very strange performance situation with our application which I am \nhoping that someone can shed light on.\n\nOur tests find that our application runs quite well on 8.3.7 initially. The test consists of database creation followed by 30 \ncycles of creation and removal of approximately 1,000,000 rows (across all tables) per cycle. However, when database \nmaintenance takes place (which consists of a VACUUM FULL operation, and some table REINDEX operations), subsequent cycle \nperformance is more than 2x worse. What's more, after one VACUUM FULL operation has been done on the database, no subsequent \nVACUUM FULL operations *ever* seem to restore it to proper performance levels.\n\nWe used the same general maintenance procedure with 8.2 and found that it worked as expected, so we were quite surprised to \ndiscover this problem with 8.3.7. Anybody know what's going on?\n\nThanks,\nKarl\n\n-- \nKarl Wright\nSoftware Engineer\n\nMetaCarta, Inc.\n350 Massachusetts Avenue, 4th Floor, Cambridge, MA 02139 USA\n\n(617)-301-5511\n\nwww.metacarta.com <http://www.metacarta.com>\nWhere to find it.\n\nThis message may contain privileged, proprietary, and otherwise private\ninformation. If you are not the intended recipient, please notify the\nsender immediately.\n\n", "msg_date": "Fri, 18 Sep 2009 08:44:05 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Database performance post-VACUUM FULL" }, { "msg_contents": "On Fri, Sep 18, 2009 at 8:44 AM, Karl Wright <[email protected]> wrote:\n> Hi all,\n>\n> We're using Postgresql 8.3.7 on Debian.  We are seeing a very strange\n> performance situation with our application which I am hoping that someone\n> can shed light on.\n>\n> Our tests find that our application runs quite well on 8.3.7 initially.  The\n> test consists of database creation followed by 30 cycles of creation and\n> removal of approximately 1,000,000 rows (across all tables) per cycle.\n>  However, when database maintenance takes place (which consists of a VACUUM\n> FULL operation, and some table REINDEX operations), subsequent cycle\n> performance is more than 2x worse.  What's more, after one VACUUM FULL\n> operation has been done on the database, no subsequent VACUUM FULL\n> operations *ever* seem to restore it to proper performance levels.\n>\n> We used the same general maintenance procedure with 8.2 and found that it\n> worked as expected, so we were quite surprised to discover this problem with\n> 8.3.7.  Anybody know what's going on?\n\nCan you post to the list all the uncommented settings from your\npostgresql.conf, the output of VACUUM VERBOSE, and the output of\nEXPLAIN ANALYZE for some representative queries?\n\n...Robert\n", "msg_date": "Fri, 18 Sep 2009 09:40:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database performance post-VACUUM FULL" }, { "msg_contents": "Karl Wright <[email protected]> wrote:\n \n> when database maintenance takes place (which consists of a VACUUM\n> FULL operation, and some table REINDEX operations)\n \nBesides providing the information requested by Robert, can you explain\nwhy you chose to use VACUUM FULL? The FULL option is only really\nuseful in a small set of unusual use cases for recovery from serious\nproblems. In most cases it will do more harm than good. 
If\nautovacuum isn't covering your need by itself, a VACUUM of the\ndatabase, usually with the ANALYZE option and *possibly* with the\nFREEZE option, is almost always adequate, without resorting to the\npain of VACUUM FULL.\n \nIf you've run VACUUM FULL without a REINDEX of *all* indexes *after*\nthe VACUUM FULL, you've probably seriously bloated your indexes. You\nmay also have shuffled around the rows to positions where you're doing\nmore random access than before. CLUSTER would be one way to fix both\nproblems, although if you've bloated your system tables you might be\nbest off recreating your database with the output from pg_dump.. But\nyou might want to provide the information Robert requested to confirm\nthe nature of the problem before attempting to fix it....\n \n-Kevin\n", "msg_date": "Thu, 01 Oct 2009 08:25:04 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database performance post-VACUUM FULL" } ]
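To make the suggested recovery path concrete, here is the general shape of the commands involved, with placeholder object names rather than anything taken from the poster's schema. CLUSTER rewrites the table and rebuilds all of its indexes compactly, while holding an exclusive lock, which addresses both the index bloat and the row-placement concerns mentioned above.

-- one-off recovery of a bloated table and its indexes (exclusive lock while it runs)
CLUSTER some_big_table USING some_big_table_pkey;
ANALYZE some_big_table;

-- if VACUUM FULL has already been run on its own, rebuilding the indexes is the usual companion step
REINDEX TABLE some_big_table;

-- routine maintenance in place of VACUUM FULL: a plain, non-exclusive vacuum
VACUUM ANALYZE;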
[ { "msg_contents": "Hi everybody.\n\nI'm having an issues with wrong plan for query in PostgreSQL (version \n8.3). EXPLAIN ANALYZE shows that there're a lot of places where \nplanner estimates row count totally wrong, like 1 instead of 12000+.\ndefault_statistics_target variable is set to 100, and I tried to run \nVACUUM ANALYZE many times.\n\nBecause of wrong estimation query planner uses nested loops instead of \nhash joins and it results in very bad performance. Disabling nested \nloops helps, but I want to understand what happens there and try to \navoid it in future.\n\nCould you help me with it? Query and plan are below.\n\nThank you in advance,\n\nMichael Korbakov\n\nSELECT wide_stats.*, revenue * rev_share AS net_revenue, revenue * \n(rev_share - partner_rev_share) AS gross_revenue,\n (CASE clicks WHEN 0 THEN 0 ELSE revenue * rev_share / \nclicks END) as net_rpc, (CASE clicks WHEN 0 THEN 0 ELSE revenue * \n(rev_share - partner_rev_share) / clicks END) AS gross_rpc,\n rev_share * wide_stats.ecpm AS net_ecpm, (rev_share - \npartner_rev_share) * wide_stats.ecpm AS gross_ecpm,\n partner_rev_share * revenue AS partner_revenue\n FROM\n\n (SELECT stats.id, stats.date, stats.domain_id, (CASE WHEN \nstats.partner_id = 1 OR top_subparent = 1 THEN title ELSE real_title \nEND) AS title,\n stats.pageviews, stats.subsequent_searches, \nstats.searches, stats.clicks, stats.revenue, stats.country, \nstats.approved,\n stats.partner_id, shares.rev_share, (CASE top_subparent \nWHEN 1 THEN 0 ELSE partners_shares.rev_share END) AS \npartner_rev_share, stats.ctr, stats.rpc,\n (CASE stats.searches WHEN 0 THEN 0 ELSE 1000 * revenue / \nsearches END) AS ecpm,\n (SELECT name FROM partners WHERE id = top_subparent) AS \nowner,\n subparents.top_subparent\n FROM reports.daily_domain_reports AS stats\n LEFT JOIN materialized_top_subparents AS subparents ON \nstats.partner_id = subparents.partner_id AND subparents.parent_id = 1\n LEFT JOIN reports.monthly_shares_with_parents_materialized AS \nshares ON date_part('year'::text, stats.date) = shares.year AND \ndate_part('month'::text, stats.date) = shares.month AND \nshares.partner_id = 1\n LEFT JOIN reports.monthly_shares_with_parents_materialized AS \npartners_shares ON date_part('year'::text, stats.date) = \npartners_shares.year AND date_part('month'::text, stats.date) = \npartners_shares.month AND partners_shares.partner_id = top_subparent\n WHERE stats.partner_id = 1 OR top_subparent IN (SELECT \npartners.id FROM partners WHERE parent_id = 1)\n\n) AS wide_stats WHERE date >= '2009-08-01' AND date < '2009-09-02';\n\n\nNested Loop (cost=11.80..172.48 rows=1 width=94) (actual \ntime=93.792..14485.092 rows=12745 loops=1)\n -> Nested Loop (cost=11.80..168.94 rows=1 width=56) (actual \ntime=93.739..13342.157 rows=12745 loops=1)\n -> Nested Loop (cost=11.80..168.62 rows=1 width=60) (actual \ntime=93.734..13227.265 rows=12745 loops=1)\n Join Filter: (COALESCE((domain_stats.date <= \ndomain_mappings.end_date), true) AND ((shares.year)::double precision \n= date_part('year'::text, (domain_stats.date)::timestamp without time \nzone)) AND ((shares.month)::double precision = date_part \n('month'::text, (domain_stats.date)::timestamp without time zone)))\n -> Nested Loop (cost=11.80..31.74 rows=1 width=48) \n(actual time=0.258..26.950 rows=6069 loops=1)\n Join Filter: ((domain_mappings.partner_id = 1) OR \n(hashed subplan))\n -> Nested Loop (cost=8.50..27.28 rows=1 \nwidth=32) (actual time=0.114..9.298 rows=567 loops=1)\n -> Hash Join (cost=8.50..25.11 rows=1 \nwidth=28) 
(actual time=0.092..1.864 rows=560 loops=1)\n Hash Cond: \n(((partners_shares.year)::double precision = (shares.year)::double \nprecision) AND ((partners_shares.month)::double precision = \n(shares.month)::double precision))\n -> Seq Scan on \nmonthly_shares_with_parents_materialized partners_shares \n(cost=0.00..9.60 rows=560 width=16) (actual time=0.009..0.336 rows=560 \nloops=1)\n -> Hash (cost=8.39..8.39 rows=7 \nwidth=12) (actual time=0.059..0.059 rows=7 loops=1)\n -> Bitmap Heap Scan on \nmonthly_shares_with_parents_materialized shares (cost=4.30..8.39 \nrows=7 width=12) (actual time=0.033..0.041 rows=7 loops=1)\n Recheck Cond: (partner_id \n= 1)\n -> Bitmap Index Scan on \nmonthly_shares_with_parents_materialized_pkey (cost=0.00..4.30 rows=7 \nwidth=0) (actual time=0.027..0.027 rows=7 loops=1)\n Index Cond: \n(partner_id = 1)\n -> Index Scan using \nmaterialized_top_subparents_pkey on materialized_top_subparents \nsubparents (cost=0.00..2.16 rows=1 width=8) (actual time=0.010..0.011 \nrows=1 loops=560)\n Index Cond: ((subparents.parent_id = \n1) AND (subparents.top_subparent = partners_shares.partner_id))\n -> Index Scan using \nix_domain_mappings_partner_id on domain_mappings (cost=0.00..0.97 \nrows=11 width=16) (actual time=0.004..0.012 rows=11 loops=567)\n Index Cond: (domain_mappings.partner_id = \nsubparents.partner_id)\n SubPlan\n -> Seq Scan on partners (cost=0.00..3.12 \nrows=71 width=4) (actual time=0.005..0.059 rows=71 loops=1)\n Filter: (parent_id = 1)\n -> Index Scan using uix_date_domain_country on \ndomain_stats (cost=0.00..136.65 rows=6 width=36) (actual \ntime=0.653..2.089 rows=15 loops=6069)\n Index Cond: ((domain_stats.date >= \n'2009-08-01'::date) AND (domain_stats.date < '2009-09-02'::date) AND \n(domain_stats.date >= domain_mappings.start_date) AND \n(domain_stats.domain_id = domain_mappings.domain_id))\n -> Index Scan using partners_pkey on partners \n(cost=0.00..0.31 rows=1 width=4) (actual time=0.006..0.007 rows=1 \nloops=12745)\n Index Cond: (public.partners.id = \ndomain_mappings.partner_id)\n -> Index Scan using domains_pkey on domains (cost=0.00..0.29 \nrows=1 width=46) (actual time=0.007..0.008 rows=1 loops=12745)\n Index Cond: (domains.id = domain_stats.domain_id)\n SubPlan\n -> Seq Scan on partners (cost=0.00..3.12 rows=1 width=3) \n(actual time=0.044..0.064 rows=1 loops=12745)\n Filter: (id = $0)\nTotal runtime: 14491.142 ms", "msg_date": "Fri, 18 Sep 2009 17:21:29 +0300", "msg_from": "Michael Korbakov <[email protected]>", "msg_from_op": true, "msg_subject": "Planner question - wrong row count estimation" } ]
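Independent of the estimation problem, the final SubPlan in the posted plan (the sequential scan on partners executed 12745 times, once per output row) comes from the scalar subquery in the SELECT list that fetches the owner name. Rewriting it as an extra join removes those repeated scans. The sketch below shows only the relevant part of the inner query, with the other joins and output columns omitted for brevity; it is not a drop-in replacement for the full report.

SELECT stats.id,
       owners.name AS owner,
       subparents.top_subparent
  FROM reports.daily_domain_reports AS stats
  LEFT JOIN materialized_top_subparents AS subparents
         ON stats.partner_id = subparents.partner_id AND subparents.parent_id = 1
  LEFT JOIN partners AS owners
         ON owners.id = subparents.top_subparent
 WHERE stats.date >= '2009-08-01' AND stats.date < '2009-09-02';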
[ { "msg_contents": "\n\nHello,\n\nI am trying to index a field in my database of size about 16K rows, but i m\ngetting this error.\n\n\" Index row requires 9324 bytes maximum size is 8191 \"\n\nCan anyone please guide me how to remove this error....\n\nAlso, average time to search for a query in a table is taking about 15\nseconds. I have done indexing but the time is not reducing.....\nIs there any way to reduce the time to less than 1 sec ???\nThe type of indexing which I am performing on the field is btree... My field\ncontains large text. Is there any more suitable indexing type ??\n\n-- \nView this message in context: http://www.nabble.com/Index-row-requires-9324-bytes-maximum-size-is-8191-tp25511356p25511356.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 18 Sep 2009 09:06:44 -0700 (PDT)", "msg_from": "solAris23 <[email protected]>", "msg_from_op": true, "msg_subject": "Index row requires 9324 bytes maximum size is 8191" }, { "msg_contents": "solAris23 escreveu:\n> I am trying to index a field in my database of size about 16K rows, but i m\n> getting this error.\n> \nWhy are you want to index such a big field? BTW, it'll be worthless.\n\n> \" Index row requires 9324 bytes maximum size is 8191 \"\n> \nThat is a known limitation; but even if it would be possible I don't think it\nwould be a good idea. Why on Earth would I search using a big field?\n\nWhat kind of content are you trying to index?\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Sat, 19 Sep 2009 23:58:43 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index row requires 9324 bytes maximum size is 8191" }, { "msg_contents": "Hello\n\n2009/9/18 solAris23 <[email protected]>:\n>\n>\n> Hello,\n>\n> I am trying to index a field in my database of size about 16K rows, but i m\n> getting this error.\n>\n> \" Index row requires 9324 bytes maximum size is 8191  \"\n>\n> Can anyone please guide me how to remove this error....\n>\n> Also, average time to search for a query in a table is taking about 15\n> seconds. I have done indexing but the time is not reducing.....\n> Is there any way to reduce the time to less than 1 sec ???\n> The type of indexing which I am performing on the field is btree... My field\n> contains large text. Is there any more suitable indexing type ??\n>\n\nyou can use hashing functions\n\nhttp://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Using_hash_functions_for_ensuring_uniqueness_of_texts\n\nregards\nPavel Stehule\n\n> --\n> View this message in context: http://www.nabble.com/Index-row-requires-9324-bytes-maximum-size-is-8191-tp25511356p25511356.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 20 Sep 2009 07:47:08 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index row requires 9324 bytes maximum size is 8191" }, { "msg_contents": "not only that's slow, but limited as you can see. 
Use something like:\nhttp://gjsql.wordpress.com/2009/04/19/how-to-speed-up-index-on-bytea-text-etc/\ninstead.\n", "msg_date": "Mon, 21 Sep 2009 09:38:38 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index row requires 9324 bytes maximum size is 8191" }, { "msg_contents": "* solAris:\n\n> Also, average time to search for a query in a table is taking about 15\n> seconds. I have done indexing but the time is not reducing.....\n> Is there any way to reduce the time to less than 1 sec ???\n\nHow are your queries structured? Do you just compare values? Do you\nperform range queries? Or something like \"WHERE col LIKE '%string%')?\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Mon, 21 Sep 2009 08:51:22 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index row requires 9324 bytes maximum size is 8191" }, { "msg_contents": "\n\n\nEuler Taveira de Oliveira-2 wrote:\n> \n> solAris23 escreveu:\n>> I am trying to index a field in my database of size about 16K rows, but i\n>> m\n>> getting this error.\n>> \n> Why are you want to index such a big field? BTW, it'll be worthless.\n> \n>> \" Index row requires 9324 bytes maximum size is 8191 \"\n>> \n> That is a known limitation; but even if it would be possible I don't think\n> it\n> would be a good idea. Why on Earth would I search using a big field?\n> \n> What kind of content are you trying to index?\n> \n> Thanks for the feed back... Actually I want to index text field which is\n> substantially big.\n> As I already told in the post.. the searching takes too long.. so I want\n> to index it....\n> \n> I went through gist and gin indexes but they are applicable for tsvector\n> and tsquery i think...\n> \n> For me any option is fine... if i am able to get result within a\n> second.... \n> \n> Thanks.\n> \n> \n> -- \n> Euler Taveira de Oliveira\n> http://www.timbira.com/\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Index-row-requires-9324-bytes-maximum-size-is-8191-tp25511356p25549641.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 24 Sep 2009 02:49:17 -0700 (PDT)", "msg_from": "solAris23 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index row requires 9324 bytes maximum size is 8191" } ]
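The hash-based suggestions can be sketched as follows. A btree over a digest of the text, instead of over the text itself, keeps index entries small enough to stay under the 8 kB row limit, at the price of supporting only exact-equality lookups; the table and column names here are placeholders. For substring or word searches the usual route is full-text search (to_tsvector with a GIN index) rather than a btree on the raw text at all.

-- index a fixed-size digest of the large text column (md5() is built in)
CREATE INDEX docs_body_md5_idx ON docs (md5(body));

-- lookups must use the same expression so the index qualifies; the extra
-- comparison on the real column guards against hash collisions
SELECT *
  FROM docs
 WHERE md5(body) = md5('... the exact text being looked up ...')
   AND body = '... the exact text being looked up ...';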
[ { "msg_contents": "Hi everybody.\n\nI'm having an issues with wrong plan for query in PostgreSQL (version \n8.3). EXPLAIN ANALYZE shows that there're a lot of places where \nplanner estimates row count totally wrong, like 1 instead of 12000+.\ndefault_statistics_target variable is set to 100, and I tried to run \nVACUUM ANALYZE many times.\n\nBecause of wrong estimation query planner uses nested loops instead of \nhash joins and it results in very bad performance. Disabling nested \nloops helps, but I want to understand what happens there and try to \navoid it in future.\n\nCould you help me with it? Query and plan are below.\n\nThank you in advance,\n\nMichael Korbakov\n\nSELECT wide_stats.*, revenue * rev_share AS net_revenue, revenue * \n(rev_share - partner_rev_share) AS gross_revenue,\n (CASE clicks WHEN 0 THEN 0 ELSE revenue * rev_share / \nclicks END) as net_rpc, (CASE clicks WHEN 0 THEN 0 ELSE revenue * \n(rev_share - partner_rev_share) / clicks END) AS gross_rpc,\n rev_share * wide_stats.ecpm AS net_ecpm, (rev_share - \npartner_rev_share) * wide_stats.ecpm AS gross_ecpm,\n partner_rev_share * revenue AS partner_revenue\n FROM\n\n (SELECT stats.id, stats.date, stats.domain_id, (CASE WHEN \nstats.partner_id = 1 OR top_subparent = 1 THEN title ELSE real_title \nEND) AS title,\n stats.pageviews, stats.subsequent_searches, \nstats.searches, stats.clicks, stats.revenue, stats.country, \nstats.approved,\n stats.partner_id, shares.rev_share, (CASE top_subparent \nWHEN 1 THEN 0 ELSE partners_shares.rev_share END) AS \npartner_rev_share, stats.ctr, stats.rpc,\n (CASE stats.searches WHEN 0 THEN 0 ELSE 1000 * revenue / \nsearches END) AS ecpm,\n (SELECT name FROM partners WHERE id = top_subparent) AS \nowner,\n subparents.top_subparent\n FROM reports.daily_domain_reports AS stats\n LEFT JOIN materialized_top_subparents AS subparents ON \nstats.partner_id = subparents.partner_id AND subparents.parent_id = 1\n LEFT JOIN reports.monthly_shares_with_parents_materialized AS \nshares ON date_part('year'::text, stats.date) = shares.year AND \ndate_part('month'::text, stats.date) = shares.month AND \nshares.partner_id = 1\n LEFT JOIN reports.monthly_shares_with_parents_materialized AS \npartners_shares ON date_part('year'::text, stats.date) = \npartners_shares.year AND date_part('month'::text, stats.date) = \npartners_shares.month AND partners_shares.partner_id = top_subparent\n WHERE stats.partner_id = 1 OR top_subparent IN (SELECT partners.id \nFROM partners WHERE parent_id = 1)\n\n) AS wide_stats WHERE date >= '2009-08-01' AND date < '2009-09-02';\n\n\nNested Loop (cost=11.80..172.48 rows=1 width=94) (actual \ntime=93.792..14485.092 rows=12745 loops=1)\n -> Nested Loop (cost=11.80..168.94 rows=1 width=56) (actual \ntime=93.739..13342.157 rows=12745 loops=1)\n -> Nested Loop (cost=11.80..168.62 rows=1 width=60) (actual \ntime=93.734..13227.265 rows=12745 loops=1)\n Join Filter: (COALESCE((domain_stats.date <= \ndomain_mappings.end_date), true) AND ((shares.year)::double precision \n= date_part('year'::text, (domain_stats.date)::timestamp without time \nzone)) AND ((shares.month)::double precision = date_part \n('month'::text, (domain_stats.date)::timestamp without time zone)))\n -> Nested Loop (cost=11.80..31.74 rows=1 width=48) \n(actual time=0.258..26.950 rows=6069 loops=1)\n Join Filter: ((domain_mappings.partner_id = 1) OR \n(hashed subplan))\n -> Nested Loop (cost=8.50..27.28 rows=1 \nwidth=32) (actual time=0.114..9.298 rows=567 loops=1)\n -> Hash Join (cost=8.50..25.11 rows=1 \nwidth=28) 
(actual time=0.092..1.864 rows=560 loops=1)\n Hash Cond: \n(((partners_shares.year)::double precision = (shares.year)::double \nprecision) AND ((partners_shares.month)::double precision = \n(shares.month)::double precision))\n -> Seq Scan on \nmonthly_shares_with_parents_materialized partners_shares \n(cost=0.00..9.60 rows=560 width=16) (actual time=0.009..0.336 rows=560 \nloops=1)\n -> Hash (cost=8.39..8.39 rows=7 \nwidth=12) (actual time=0.059..0.059 rows=7 loops=1)\n -> Bitmap Heap Scan on \nmonthly_shares_with_parents_materialized shares (cost=4.30..8.39 \nrows=7 width=12) (actual time=0.033..0.041 rows=7 loops=1)\n Recheck Cond: (partner_id \n= 1)\n -> Bitmap Index Scan on \nmonthly_shares_with_parents_materialized_pkey (cost=0.00..4.30 rows=7 \nwidth=0) (actual time=0.027..0.027 rows=7 loops=1)\n Index Cond: \n(partner_id = 1)\n -> Index Scan using \nmaterialized_top_subparents_pkey on materialized_top_subparents \nsubparents (cost=0.00..2.16 rows=1 width=8) (actual time=0.010..0.011 \nrows=1 loops=560)\n Index Cond: ((subparents.parent_id = \n1) AND (subparents.top_subparent = partners_shares.partner_id))\n -> Index Scan using ix_domain_mappings_partner_id \non domain_mappings (cost=0.00..0.97 rows=11 width=16) (actual \ntime=0.004..0.012 rows=11 loops=567)\n Index Cond: (domain_mappings.partner_id = \nsubparents.partner_id)\n SubPlan\n -> Seq Scan on partners (cost=0.00..3.12 \nrows=71 width=4) (actual time=0.005..0.059 rows=71 loops=1)\n Filter: (parent_id = 1)\n -> Index Scan using uix_date_domain_country on \ndomain_stats (cost=0.00..136.65 rows=6 width=36) (actual \ntime=0.653..2.089 rows=15 loops=6069)\n Index Cond: ((domain_stats.date >= \n'2009-08-01'::date) AND (domain_stats.date < '2009-09-02'::date) AND \n(domain_stats.date >= domain_mappings.start_date) AND \n(domain_stats.domain_id = domain_mappings.domain_id))\n -> Index Scan using partners_pkey on partners \n(cost=0.00..0.31 rows=1 width=4) (actual time=0.006..0.007 rows=1 \nloops=12745)\n Index Cond: (public.partners.id = \ndomain_mappings.partner_id)\n -> Index Scan using domains_pkey on domains (cost=0.00..0.29 \nrows=1 width=46) (actual time=0.007..0.008 rows=1 loops=12745)\n Index Cond: (domains.id = domain_stats.domain_id)\n SubPlan\n -> Seq Scan on partners (cost=0.00..3.12 rows=1 width=3) (actual \ntime=0.044..0.064 rows=1 loops=12745)\n Filter: (id = $0)\nTotal runtime: 14491.142 ms\n\n", "msg_date": "Sun, 20 Sep 2009 03:08:27 +0300", "msg_from": "Michael Korbakov <[email protected]>", "msg_from_op": true, "msg_subject": "Planner question - wrong row count estimation" }, { "msg_contents": "On 9/19/09 5:08 PM, Michael Korbakov wrote:\n> -> Hash Join (cost=8.50..25.11 rows=1\n> width=28) (actual time=0.092..1.864 rows=560 loops=1)\n> Hash Cond:\n> (((partners_shares.year)::double precision = (shares.year)::double\n> precision) AND ((partners_shares.month)::double precision =\n> (shares.month)::double precision))\n\nThis appears to be where the estimates go wrong; Postgres may be\nassuming random correlation which isn't correct.\n\nMy suggestion would be to try and create matching indexes on\ndate_trunc(daily_domain_reports.date) and month & year of\nmonthly_shares_with_parents_materialized.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Mon, 21 Sep 2009 14:20:33 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner question - wrong row count estimation" } ]
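A hedged sketch of the suggestion above: expression indexes on the date_part() values make ANALYZE collect statistics for those expressions, which is what the year/month join estimate is missing. The statement below reuses the names from the posted query; if daily_domain_reports is actually a view, the index belongs on the underlying table instead, and whether this alone fixes the estimate on this schema is an assumption.

    CREATE INDEX daily_domain_reports_year_month_idx
        ON reports.daily_domain_reports (date_part('year', date), date_part('month', date));
    ANALYZE reports.daily_domain_reports;
    -- then re-run the EXPLAIN ANALYZE above and compare the estimated row counts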
[ { "msg_contents": "Hey folks,\n\nWe are looking to optimize the query I was talking about last week\nwhich is killing our system.\n\nWe have explain and analyze which tell us about the cost of a query\ntime-wise, but what does one use to determine (and trace / predict?)\nmemory consumption?\n\nthanks,\n-Alan\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Mon, 21 Sep 2009 10:47:46 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "query memory consumption" }, { "msg_contents": "On Mon, Sep 21, 2009 at 10:47 AM, Alan McKay <[email protected]> wrote:\n> We are looking to optimize the query I was talking about last week\n> which is killing our system.\n>\n> We have explain and analyze which tell us about the cost of a query\n> time-wise, but what does one use to determine (and trace / predict?)\n> memory consumption?\n\nI'm not sure what to suggest, other than the available operating\nsystem tools, but if you post EXPLAIN ANALYZE output we might be able\nto speculate better.\n\nSetting work_mem too high is a frequent cause of problems of this sort, I think.\n\n...Robert\n", "msg_date": "Mon, 21 Sep 2009 16:08:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query memory consumption" }, { "msg_contents": "On Mon, 21 Sep 2009, Alan McKay wrote:\n> We have explain and analyze which tell us about the cost of a query\n> time-wise, but what does one use to determine (and trace / predict?)\n> memory consumption?\n\nIn Postgres, memory consumption for all operations is generally capped at \nthe value of work_mem. However, a given query can consist of more than one \noperation. Generally, only heavy things like sorts and hashes consume \nwork_mem, so it should be possible to look at the explain to count those, \nmultiply by work_mem, and get the maximum amount of RAM that the query can \nuse.\n\nHowever, sometimes a query will not fit neatly into work_mem. At this \npoint, Postgres will write the data to temporary files on disc. It is \nharder to predict what size those will be. However, EXPLAIN ANALYSE will \nsometimes give you a figure of how big a sort was for example.\n\nMatthew\n\n-- \n Reality is that which, when you stop believing in it, doesn't go away.\n -- Philip K. Dick\n", "msg_date": "Tue, 22 Sep 2009 11:58:27 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query memory consumption" }, { "msg_contents": "On Mon, Sep 21, 2009 at 4:08 PM, Robert Haas <[email protected]> wrote:\n> Setting work_mem too high is a frequent cause of problems of this sort, I think.\n\nToo high? How high is too high?\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Tue, 22 Sep 2009 08:36:51 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query memory consumption" }, { "msg_contents": "On Tue, Sep 22, 2009 at 1:36 PM, Alan McKay <[email protected]> wrote:\n\n> Too high?  
How high is too high?\n\nin a very simple scenario, you have 100 connections opened, and all of\nthem run the query that was the reason you bumped work_mem to 256M.\nAll of the sudden postgresql starts to complain about lack of ram,\nbecause you told it it could use max of\nwork_mem*number_of_connections.\n\nBest practice to avoid that, is to bump the work_mem temporarily\nbefore the query, and than lower it again, lowers the chance of memory\nexhaustion.\n\n\n-- \nGJ\n", "msg_date": "Tue, 22 Sep 2009 13:41:25 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query memory consumption" }, { "msg_contents": "> Best practice to avoid that, is to bump the work_mem temporarily\n> before the query, and than lower it again, lowers the chance of memory\n> exhaustion.\n\nInteresting - I can do that dynamically?\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Tue, 22 Sep 2009 08:46:04 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query memory consumption" }, { "msg_contents": "On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay <[email protected]> wrote:\n>> Best practice to avoid that, is to bump the work_mem temporarily\n>> before the query, and than lower it again, lowers the chance of memory\n>> exhaustion.\n>\n> Interesting - I can do that dynamically?\n\nyou can do set work_mem=128M; select 1; set work_mem=64M;\n\netc, in one query.\n\n\n\n-- \nGJ\n", "msg_date": "Tue, 22 Sep 2009 13:51:03 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query memory consumption" }, { "msg_contents": "2009/9/22 Grzegorz Jaśkiewicz <[email protected]>:\n> On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay <[email protected]> wrote:\n>>> Best practice to avoid that, is to bump the work_mem temporarily\n>>> before the query, and than lower it again, lowers the chance of memory\n>>> exhaustion.\n>>\n>> Interesting - I can do that dynamically?\n>\n> you can do set work_mem=128M; select 1; set work_mem=64M;\n>\n> etc, in one query.\n\nBut if all backends are running this one query at the same time, it\nwon't help because they will all bump up their limits at the same\ntime. If they are all running different queries, and just one of them\nreally gets a big benefit from the extra memory, but the rest just use\nit because they think they have it even though it is only a small\nbenefit, then bumping up just for the query that gets a big\nimprovement could work.\n\nJeff\n", "msg_date": "Fri, 25 Sep 2009 20:06:42 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query memory consumption" }, { "msg_contents": "2009/9/25 Jeff Janes <[email protected]>:\n> 2009/9/22 Grzegorz Jaśkiewicz <[email protected]>:\n>> On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay <[email protected]> wrote:\n>>>> Best practice to avoid that, is to bump the work_mem temporarily\n>>>> before the query, and than lower it again, lowers the chance of memory\n>>>> exhaustion.\n>>>\n>>> Interesting - I can do that dynamically?\n>>\n>> you can do set work_mem=128M; select 1; set work_mem=64M;\n>>\n>> etc, in one query.\n>\n> But if all backends are running this one query at the same time, it\n> won't help because they will all bump up their limits at the same\n> time.  
If they are all running different queries, and just one of them\n> really gets a big benefit from the extra memory, but the rest just use\n> it because they think they have it even though it is only a small\n> benefit, then bumping up just for the query that gets a big\n> improvement could work.\n\nThis is, I think, a possible area for future optimizer work, but the\nright design is far from clear.\n\n...Robert\n", "msg_date": "Sun, 27 Sep 2009 14:40:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query memory consumption" } ]
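A small sketch of the per-query override discussed in this thread, using SET LOCAL inside a transaction so the setting reverts automatically at COMMIT or ROLLBACK rather than relying on a second SET (note that work_mem takes a quoted value with units). The 256MB figure and the ledger query are arbitrary examples, not recommendations.

    BEGIN;
    SET LOCAL work_mem = '256MB';  -- only for the rest of this transaction
    SELECT account_id, sum(amount)
      FROM ledger
     GROUP BY account_id
     ORDER BY sum(amount) DESC;
    COMMIT;  -- work_mem falls back to the postgresql.conf value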
[ { "msg_contents": "I'm looking at running session servers in ram. All the data is\nthrow-away data, so my plan is to have a copy of the empty db on the\nhard drive ready to go, and have a script that just copies it into ram\nand starts the db there. We're currently IO write bound with\nfsync=off using a 15k5 seagate SAS drive, so I'm hoping that moving\nthe db into /dev/shm will help quite a bit here.\n\nDoes anybody any real world experience here or any words of sage\nadvice before I go off and start testing this?\n", "msg_date": "Mon, 21 Sep 2009 17:39:05 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "session servers in ram" }, { "msg_contents": "On Mon, Sep 21, 2009 at 5:39 PM, Scott Marlowe <[email protected]>wrote:\n\n> I'm looking at running session servers in ram. All the data is\n> throw-away data, so my plan is to have a copy of the empty db on the\n> hard drive ready to go, and have a script that just copies it into ram\n> and starts the db there. We're currently IO write bound with\n> fsync=off using a 15k5 seagate SAS drive, so I'm hoping that moving\n> the db into /dev/shm will help quite a bit here.\n>\n> Does anybody any real world experience here or any words of sage\n> advice before I go off and start testing this?\n>\n>\nI assume you intend to this or some variation of it.\n mount -t tmpfs -o size=1G tmpfs /pgfast\n\ntmpfs file systems, including /dev/shm, ***should not*** be counted on as\nbeing RAM only. File systems of this type in Linux, and at least Solaris\nalso, can be swap since the tmpfs type is derived from swap and swap =\nmemory + active swap partitions.\n\nI would think that a ram disk or ramfs may be more what you're after. Add\nramdisk_size=X, where X is the maximum size in kilobytes, to the kernel line\nin your grub.conf file. Unlike tmpfs, the ramdisk_size X parameter cannot\nbe more than the memory of your server. When using a ramdisk the ext2 file\nsystem would be best with all the fun noatime and like mount parameters.\nThis is good for a fast, volatile, fixed-size ramdisk.\n\nOTOH, ramfs can be just as fast but with all the fun system-hanging features\nlike growing until all RAM is consumed.\n mount -t ramfs ramfs /pgfast\n\nPro ... it will always be as fast, or possibly a hair faster, than tmpfs.\n\nCon ... it won't show up in a df output unlike tmpfs or ramdisk. Use the\nmount command with no parameters to look for it and be sure to unmount it\nwhen you're done.\n\nPro/Con ... you can't specify the file system type like with ramdisk.\n\nPro... it will only take away from memory as space is used, i.e. if you have\n500M of memory in use and mount the file system but do nothing else then\nonly 500M of memory is in use. If you then copy a 100M file to it then 600M\nof memory is in use. Delete that 100M file and you're back to 500M of used\nmemory.\n\nPro/Con ... unlike other file systems, it will grow with the need...\nunchecked. It could attempt to consume all available memory pushing all\nother processes out to swap and this is a bad, bad thing.\n\n\nI'm sure there are other pro's & con's to ramfs.\n\nHTH.\n\nGreg\n\nOn Mon, Sep 21, 2009 at 5:39 PM, Scott Marlowe <[email protected]> wrote:\nI'm looking at running session servers in ram.  All the data is\nthrow-away data, so my plan is to have a copy of the empty db on the\nhard drive ready to go, and have a script that just copies it into ram\nand starts the db there.  
We're currently IO write bound with\nfsync=off using a 15k5 seagate SAS drive, so I'm hoping that moving\nthe db into /dev/shm will help quite a bit here.\n\nDoes anybody any real world experience here or any words of sage\nadvice before I go off and start testing this?\nI assume you intend to this or some variation of it.  mount -t tmpfs -o size=1G tmpfs /pgfasttmpfs file systems, including /dev/shm, ***should not*** be counted on as being RAM only.  File systems of this type in Linux, and at least Solaris also, can be swap since the tmpfs type is derived from swap and swap = memory + active swap partitions.\nI would think that a ram disk or ramfs may be more what you're after.  Add ramdisk_size=X, where X is the maximum size in kilobytes, to the kernel line in your grub.conf file.  Unlike tmpfs, the ramdisk_size X parameter cannot be more than the memory of your server.  When using a ramdisk the ext2 file system would be best with all the fun noatime and like mount parameters.  This is good for a fast, volatile, fixed-size ramdisk.\nOTOH, ramfs can be just as fast but with all the fun system-hanging features like growing until all RAM is consumed.  mount -t ramfs ramfs /pgfastPro ... it will always be as fast, or possibly a hair faster, than tmpfs.\nCon ... it won't show up in a df output unlike tmpfs or ramdisk.  Use the mount command with no parameters to look for it and be sure to unmount it when you're done.Pro/Con ... you can't specify the file system type like with ramdisk.\nPro... it will only take away from memory as space is used, i.e. if you have 500M of memory in use and mount the file system but do nothing else then only 500M of memory is in use.  If you then copy a 100M file to it then 600M of memory is in use.  Delete that 100M file and you're back to 500M of used memory.\nPro/Con ... unlike other file systems, it will grow with the need... unchecked.  It could attempt to consume all available memory pushing all other processes out to swap and this is a bad, bad thing.I'm sure there are other pro's & con's to ramfs.\nHTH.Greg", "msg_date": "Tue, 22 Sep 2009 06:48:57 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: session servers in ram" }, { "msg_contents": "Hello\n\nthis is maybe off topic. Do you know memcached? We use it without\npostgresql six or seven months for short-live data with big success.\n\nregards\nPavel Stehule\n\n2009/9/22 Greg Spiegelberg <[email protected]>:\n> On Mon, Sep 21, 2009 at 5:39 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> I'm looking at running session servers in ram.  All the data is\n>> throw-away data, so my plan is to have a copy of the empty db on the\n>> hard drive ready to go, and have a script that just copies it into ram\n>> and starts the db there.  We're currently IO write bound with\n>> fsync=off using a 15k5 seagate SAS drive, so I'm hoping that moving\n>> the db into /dev/shm will help quite a bit here.\n>>\n>> Does anybody any real world experience here or any words of sage\n>> advice before I go off and start testing this?\n>>\n>\n> I assume you intend to this or some variation of it.\n>   mount -t tmpfs -o size=1G tmpfs /pgfast\n>\n> tmpfs file systems, including /dev/shm, ***should not*** be counted on as\n> being RAM only.  File systems of this type in Linux, and at least Solaris\n> also, can be swap since the tmpfs type is derived from swap and swap =\n> memory + active swap partitions.\n>\n> I would think that a ram disk or ramfs may be more what you're after. 
 Add\n> ramdisk_size=X, where X is the maximum size in kilobytes, to the kernel line\n> in your grub.conf file.  Unlike tmpfs, the ramdisk_size X parameter cannot\n> be more than the memory of your server.  When using a ramdisk the ext2 file\n> system would be best with all the fun noatime and like mount parameters.\n> This is good for a fast, volatile, fixed-size ramdisk.\n>\n> OTOH, ramfs can be just as fast but with all the fun system-hanging features\n> like growing until all RAM is consumed.\n>   mount -t ramfs ramfs /pgfast\n>\n> Pro ... it will always be as fast, or possibly a hair faster, than tmpfs.\n>\n> Con ... it won't show up in a df output unlike tmpfs or ramdisk.  Use the\n> mount command with no parameters to look for it and be sure to unmount it\n> when you're done.\n>\n> Pro/Con ... you can't specify the file system type like with ramdisk.\n>\n> Pro... it will only take away from memory as space is used, i.e. if you have\n> 500M of memory in use and mount the file system but do nothing else then\n> only 500M of memory is in use.  If you then copy a 100M file to it then 600M\n> of memory is in use.  Delete that 100M file and you're back to 500M of used\n> memory.\n>\n> Pro/Con ... unlike other file systems, it will grow with the need...\n> unchecked.  It could attempt to consume all available memory pushing all\n> other processes out to swap and this is a bad, bad thing.\n>\n>\n> I'm sure there are other pro's & con's to ramfs.\n>\n> HTH.\n>\n> Greg\n>\n", "msg_date": "Tue, 22 Sep 2009 15:05:43 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: session servers in ram" }, { "msg_contents": "* Scott Marlowe <[email protected]> [090921 19:39]:\n> I'm looking at running session servers in ram. All the data is\n> throw-away data, so my plan is to have a copy of the empty db on the\n> hard drive ready to go, and have a script that just copies it into ram\n> and starts the db there. We're currently IO write bound with\n> fsync=off using a 15k5 seagate SAS drive, so I'm hoping that moving\n> the db into /dev/shm will help quite a bit here.\n> \n> Does anybody any real world experience here or any words of sage\n> advice before I go off and start testing this?\n\n*If* fsync=off is really meaning that there are no sync commands\nhappening on your pg partitions (and nothing else, like syslog, is\ncausing syncs on them), and you're kernel is tuned to allow the maximum\ndirty buffers/life, then I'm not sure that's going to gain you\nanything... 
If your pg processes are blocked writing, with no syncs,\nthen they are blocked because the kernel has no more buffers available\nfor buffering the writes...\n\nMoving your backing store from a disk-based FS to disk-based swap is only\ngoing to shift the route of being forced to hit the disk...\n\nOf course, details matter, and results trump theory, so test it ;-)\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Tue, 22 Sep 2009 09:16:44 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: session servers in ram" }, { "msg_contents": "On Monday 21 September 2009, Scott Marlowe <[email protected]> wrote:\n> I'm looking at running session servers in ram.\n> Does anybody any real world experience here or any words of sage\n> advice before I go off and start testing this?\n\nUse memcached for session data.\n\n-- \n\"No animals were harmed in the recording of this episode. We tried but that \ndamn monkey was just too fast.\"\n", "msg_date": "Tue, 22 Sep 2009 09:04:32 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: session servers in ram" }, { "msg_contents": "Alan Hodgson wrote:\n> On Monday 21 September 2009, Scott Marlowe <[email protected]> wrote:\n>> I'm looking at running session servers in ram.\n> \n> Use memcached for session data.\n\nIMHO postgres is more appropriate for some types of session data.\n\nOne of the apps I work on involves session data that consists of\ngeospatial data which we store and index in postgres/postgis.\n\nScott Marlowe wrote:\n> I'm looking at running session servers in ram.\n> We're currently IO write bound with\n> fsync=off using a 15k5 seagate SAS drive, so I'm hoping that moving\n> the db into /dev/shm will help quite a bit here.\n\n\"a 15k5 seagate SAS drive\"\n\nIs this implying that you have \"a\" == one session server? I\nbet that it'd be cheaper to throw a bunch of cheap boxes\nin there and make a pool of session servers rather than one\nfast one. When a new session is created, your application\ncode can then pick the least loaded session server and put\nthe session-server-number in a cookie.\n\nThis approach works fine for me - but I suspect I have many\nfewer, yet probably much larger sessions going through the\nsystem.\n\n\n\n\n> Does anybody any real world experience here or any words of sage\n> advice before I go off and start testing this?\n> \n\n", "msg_date": "Tue, 22 Sep 2009 11:01:43 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: session servers in ram" }, { "msg_contents": "On Tue, Sep 22, 2009 at 12:01 PM, Ron Mayer\n<[email protected]> wrote:\n> Alan Hodgson wrote:\n>> On Monday 21 September 2009, Scott Marlowe <[email protected]> wrote:\n>>> I'm looking at running session servers in ram.\n>>\n>> Use memcached for session data.\n>\n> IMHO postgres is more appropriate for some types of session data.\n>\n> One of the apps I work on involves session data that consists of\n> geospatial data which we store and index in postgres/postgis.\n>\n> Scott Marlowe wrote:\n>> I'm looking at running session servers in ram.\n>>  We're currently IO write bound with\n>> fsync=off using a 15k5 seagate SAS drive, so I'm hoping that moving\n>> the db into /dev/shm will help quite a bit here.\n>\n> \"a 15k5 seagate SAS drive\"\n>\n> Is this implying that you have \"a\" == one session server?  
I\n> bet that it'd be cheaper to throw a bunch of cheap boxes\n> in there and make a pool of session servers rather than one\n> fast one.   When a new session is created, your application\n> code can then pick the least loaded session server and put\n> the session-server-number in a cookie.\n\nWe already have two using modulus load balancing, and each is handling\nup to 100,000 sessons each, and an average session object of 10k to\n20k. I'm just looking at how to keep from throwing more cheap boxes\nat it, or having to put more drives in them. We're mostly IO bound on\nthese machines, even with 100 checkpoint segments and a 30 minute\ncheckpoint timeout and a low completion target to reduce checkpointing\neven more.\n\nEven with a move to a ramdisk, I'm guessing with our increasing load\nwe're gonna need to double our session servers eventually.\n\nAs for memcached (mentioned in another post), I'm not sure if it's the\nright fit for this or not. We already use it to cache app data and it\nworks well enough, so it's worth testing for this as well I guess.\n\nThanks for all the input from everybody, I'll let you know how it works out.\n", "msg_date": "Tue, 22 Sep 2009 12:22:00 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: session servers in ram" } ]
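A rough shell sketch of the plan described in this thread: keep a template copy of the empty session database on disk, copy it onto a RAM-backed mount and start a throw-away cluster there. The mount point, tmpfs size and port are invented for the example, and (as noted above) tmpfs pages can still be swapped out, so treat this as a sketch rather than a recipe.

    mount -t tmpfs -o size=2G,noatime tmpfs /mnt/pgram
    cp -a /var/lib/pgsql/session_template /mnt/pgram/data
    chown -R postgres:postgres /mnt/pgram/data
    su - postgres -c "pg_ctl -D /mnt/pgram/data -o '-p 5433' start"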
[ { "msg_contents": "Dear all\n\n I am having a problem of high cpu loads in my postgres server during peak\ntime. Following are the\ndetails of my setup (details as per the postgres wiki) .\n\n** PostgreSQL version\n o Run \"select pg_version();\" in psql or PgAdmin III and provide the\nfull, exact output.*\n\n\nclusternode2:~ # rpm -qa | grep postgres\npostgresql-devel-8.1.9-1.2\npostgresql-8.1.9-1.2\npostgresql-docs-8.1.9-1.2\npostgresql-server-8.1.9-1.2\npostgresql-libs-64bit-8.1.9-1.2\npostgresql-libs-8.1.9-1.2\npostgresql-jdbc-8.1-12.2\npostgresql-contrib-8.1.9-1.2\n\n\n* *A description of what you are trying to achieve and what results you\nexpect.*\n\nTo keep the CPU Load below 10 , Now during peak times the load is nearing to\n40\nAt that time , it is not possible to access the data.\n\n ** The EXACT text of the query you ran, if any\n\n\n * The EXACT output of that query if it's short enough to be reasonable to\npost\n o If you think the output is wrong, what you think should've been\nproduced instead\n\n * The EXACT error message you get, if there is one*\n\nAs of now , i am unable to locate the exact query, the load shoots up\nabnormally during\npeak time is the main problem .\n\n\n* * What program you're using to connect to PostgreSQL*\n\n Jakarta Tomcat - Struts with JSP\n\n\n ** What version of the ODBC/JDBC driver you're using, if any*\n\npostgresql-jdbc-8.1-12.2\n\n * *What you were doing when the error happened / how to cause the error.\nDescribe in as much detail as possible, step by step, including command\nlines, SQL output, etc.*\n\nWhen certain tables with more than 3 lakh items are concurrently accessed by\nmore than 300\nusers, the CPU load shoots up .\n\n* * Is there anything remotely unusual in the PostgreSQL server logs?\n o On Windows these are in your data directory. On a default\nPostgreSQL install that'll be in C:\\Program Files\\PostgreSQL\\8.4\\data\\pg_log\n(assuming you're using 8.4)\n*\nThe log file /var/log/postgresql has no data .\n\n * o On Linux this depends a bit on distro, but you'll usually find\nthem in /var/log/postgresql/.\n * Operating system and version\n o Linux users:\n + Linux distro and version\n + Kernel details (run \"uname -a\" on the terminal) *\n\nSLES 10 SP3\nclusternode2:~ # uname -a\nLinux clusternode2 2.6.16.46-0.12-ppc64 #1 SMP Thu May 17 14:00:09 UTC 2007\nppc64 ppc64 ppc64 GNU/Linux\n\n\n *\n * What kind of hardware you have.\n o CPU manufacturer and model, eg \"AMD Athlon X2\" or \"Intel Core 2\nDuo\"\n o Amount and size of RAM installed, eg \"2GB RAM\"\n*\nHigh Availability Cluster with two IBM P Series Server and one DS4700\nStorage\n\nIBM P series P52A with 2-core 2.1 Ghz POWER5+ Processor Card , 36 MB L3\nCache ,16 GB of RAM,\n73.4 GB 10,000 RPM Ultra320 SCSI Drive for Operating System .\n\n\n\n * o Storage details (important for performance and corruption\nquestions)\n + Do you use a RAID controller? If so, what type of\ncontroller? eg \"3Ware Escalade 8500-8\"\n # Does it have a battery backed cache module?\n # Is write-back caching enabled?\n + Do you use software RAID? If so, what software and what\nversion? eg \"Linux software RAID (md) 2.6.18-5-686 SMP mod_unload 686\nREGPARM gcc-4.1\".\n # In the case of Linux software RAID you can get the\ndetails from the \"modinfo md_mod\" command\n + Is your PostgreSQL database on a SAN?\n # Who made it, what kind, etc? Provide what details you\ncan.\n + How many hard disks are connected to the system and what\ntypes are they? You need to say more than just \"6 disks\". 
At least give\nmaker, rotational speed and interface type, eg \"6 15,000rpm Seagate SAS\ndisks\".\n + How are your disks arranged for storage? Are you using\nRAID? If so, what RAID level(s)? What PostgreSQL data is on what disks /\ndisk sets? What file system(s) are in use?\n # eg: \"Two disks in RAID 1, with all PostgreSQL data\nand programs stored on one ext3 file system.\"\n # eg: \"4 disks in RAID 5 holding the pg data directory\non an ext3 file system. 2 disks in RAID 1 holding pg_clog, pg_xlog, the\ntemporary tablespace, and the sort scratch space, also on ext3.\".\n # eg: \"Default Windows install of PostgreSQL\"\n + In case of corruption data reports:\n # Have you had any unexpected power loss lately?\n # Have you run a file system check? (chkdsk / fsck)\n # Are there any error messages in the system logs?\n(unix/linux: \"dmesg\", \"/var/log/syslog\" ; Windows: Event Viewer in Control\nPanel -> Administrative Tools ) *\n\n\nIBM SAN DS4700 Storage with Fibre Channel HDD (73.4 GB * 10)\nTwo Partitions - 73.4 GB * 3 RAID 5 - 134 GB storage partitions (One holding\nJakarata tomcat\napplication server and other holding Postgresql Database) .\nFour Hard disk RAID 5 with ext3 file systems hold the pgdata on SAN .\nHard disk rotational speed is 73 GB 15K IBM 2 GB Fibre channel\n\nNo power loss, filesystem check also fine, No errors on /var/log/syslog\n\n*Following is the output of TOP command during offpeak time.*\n\n\ntop - 18:36:56 up 77 days, 20:33, 1 user, load average: 12.99, 9.22, 10.37\nTasks: 142 total, 12 running, 130 sleeping, 0 stopped, 0 zombie\nCpu(s): 46.1%us, 1.9%sy, 0.0%ni, 6.1%id, 3.0%wa, 0.0%hi, 0.1%si,\n42.9%st\nMem: 16133676k total, 13657396k used, 2476280k free, 450908k buffers\nSwap: 14466492k total, 124k used, 14466368k free, 11590056k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n\n22458 postgres 19 0 2473m 477m 445m R 40 3.0 0:15.49 postmaster\n\n22451 postgres 15 0 2442m 447m 437m S 33 2.8 0:30.44 postmaster\n\n22464 postgres 17 0 2443m 397m 383m R 28 2.5 0:13.78 postmaster\n\n22484 postgres 16 0 2448m 431m 412m S 20 2.7 0:02.73 postmaster\n\n22465 postgres 17 0 2440m 461m 449m R 15 2.9 0:03.52 postmaster\n\n22452 postgres 16 0 2450m 727m 706m R 13 4.6 0:23.46 postmaster\n\n22476 postgres 16 0 2437m 413m 405m S 13 2.6 0:06.11 postmaster\n\n22485 postgres 16 0 2439m 230m 222m R 7 1.5 0:05.72 postmaster\n\n22481 postgres 15 0 2436m 175m 169m S 7 1.1 0:04.44 postmaster\n\n22435 postgres 17 0 2438m 371m 361m R 6 2.4 1:17.92 postmaster\n\n22440 postgres 17 0 2445m 497m 483m R 5 3.2 1:44.50 postmaster\n\n22486 postgres 17 0 2432m 84m 81m R 4 0.5 0:00.76 postmaster\n\n 3 root 34 19 0 0 0 R 0 0.0 1:47.50 ksoftirqd/0\n\n4726 root 15 0 29540 8776 3428 S 0 0.1 140:02.98 X\n\n24950 root 15 0 0 0 0 S 0 0.0 0:30.96 pdflush\n\n 1 root 16 0 812 316 280 S 0 0.0 0:13.29 init\n\n 2 root RT 0 0 0 0 S 0 0.0 0:01.46 migration/0\n\n 4 root RT 0 0 0 0 S 0 0.0 0:00.78 migration/1\n\n 5 root 34 19 0 0 0 S 0 0.0 1:36.79 ksoftirqd/1\n\n 6 root RT 0 0 0 0 S 0 0.0 0:01.46 migration/2\n\n 7 root 34 19 0 0 0 R 0 0.0 1:49.83 ksoftirqd/2\n\n 8 root RT 0 0 0 0 S 0 0.0 0:00.79 migration/3\n\n 9 root 34 19 0 0 0 S 0 0.0 1:38.18 ksoftirqd/3\n\n 10 root 10 -5 0 0 0 S 0 0.0 1:02.11 events/0\n\n 11 root 10 -5 0 0 0 S 0 0.0 1:03.27 events/1\n\n 12 root 10 -5 0 0 0 S 0 0.0 1:01.76 events/2\n\n 13 root 10 -5 0 0 0 S 0 0.0 1:02.29 events/3\n\n 14 root 10 -5 0 0 0 S 0 0.0 0:00.01 khelper\n\n1016 root 10 -5 0 0 0 S 0 0.0 0:00.00 kthread\n\n1054 root 10 -5 0 0 0 S 0 0.0 0:03.08 
kblockd/0\n\n1055 root 10 -5 0 0 0 S 0 0.0 0:02.83 kblockd/1\n\n1056 root 10 -5 0 0 0 S 0 0.0 0:03.19 kblockd/2\n\n\n\n\nThe CPU Load shoots upto 40 during peak time.\n*\nFollowing is my postgresql.conf (without comments) *\n\nhba_file = '/var/lib/pgsql/data/pg_hba.conf'\nlisten_addresses = '*'\nport = 5432\nmax_connections = 1800\nshared_buffers = 300000\nmax_fsm_relations = 1000\neffective_cache_size = 200000\nlog_destination = 'stderr'\nredirect_stderr = on\nlog_rotation_age = 0\nlog_rotation_size = 10240\nsilent_mode = onlog_line_prefix = '%t %d %u '\nautovacuum = on\ndatestyle = 'iso, dmy'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\n\n*User Access*\nTotal Number of Users is 500\nMaximum number of Concurrent users will be 500 during peak time\nOff Peak time the maximum number of concurrent user will be around 150 to\n200.\n\n\nPlease let me know your suggestions to improve the performance.\n\nRegards\n\nShiva Raman\n", "msg_date": "Tue, 22 Sep 2009 19:24:44 +0530", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "On Tue, Sep 22, 2009 at 9:54 AM, Shiva Raman <[email protected]> wrote:\n> Dear all\n>\n>   I am having a problem of high cpu loads in my postgres server during peak\n> time. Following are the\n> details of my setup (details as per the postgres wiki) .\n>\n> * PostgreSQL version\n>          o Run \"select pg_version();\" in psql or PgAdmin III and provide the\n> full, exact output.\n>\n>\n> clusternode2:~ # rpm -qa | grep postgres\n> postgresql-devel-8.1.9-1.2\n> postgresql-8.1.9-1.2\n> postgresql-docs-8.1.9-1.2\n> postgresql-server-8.1.9-1.2\n> postgresql-libs-64bit-8.1.9-1.2\n> postgresql-libs-8.1.9-1.2\n> postgresql-jdbc-8.1-12.2\n> postgresql-contrib-8.1.9-1.2\n>\n>\n> * A description of what you are trying to achieve and what results you\n> expect.\n>\n> To keep the CPU Load below 10 , Now during peak times the load is nearing to\n> 40\n> At that time , it is not possible to access the data.\n>\n>    * The EXACT text of the query you ran, if any\n>\n>\n>    * The EXACT output of that query if it's short enough to be reasonable to\n> post\n>          o If you think the output is wrong, what you think should've been\n> produced instead\n>\n>    * The EXACT error message you get, if there is one\n>\n> As of now , i am unable to locate the exact query, the load shoots up\n> abnormally during\n> peak time is the main problem .\n>\n>\n>    * What program you're using to connect to PostgreSQL\n>\n>         Jakarta Tomcat - Struts with JSP\n>\n>\n>    * What version of the ODBC/JDBC driver you're using, if any\n>\n> postgresql-jdbc-8.1-12.2\n>\n>    * What you were doing when the error happened / how to cause the error.\n> Describe in as much detail as possible, step by step, including command\n> lines, SQL output, etc.\n>\n> When certain tables with more than 3 lakh items are concurrently accessed by\n> more than 300\n> users, the CPU load shoots up .\n>\n>    * Is there anything remotely unusual in the PostgreSQL server logs?\n>          o On Windows these are
in your data directory. On a default\n> PostgreSQL install that'll be in C:\\Program Files\\PostgreSQL\\8.4\\data\\pg_log\n> (assuming you're using 8.4)\n>\n> The log file /var/log/postgresql has no data .\n>\n>          o On Linux this depends a bit on distro, but you'll usually find\n> them in /var/log/postgresql/.\n>    * Operating system and version\n>          o Linux users:\n>                + Linux distro and version\n>                + Kernel details (run \"uname -a\" on the terminal)\n>\n> SLES 10 SP3\n> clusternode2:~ # uname -a\n> Linux clusternode2 2.6.16.46-0.12-ppc64 #1 SMP Thu May 17 14:00:09 UTC 2007\n> ppc64 ppc64 ppc64 GNU/Linux\n>\n>\n>\n>    * What kind of hardware you have.\n>          o CPU manufacturer and model, eg \"AMD Athlon X2\" or \"Intel Core 2\n> Duo\"\n>          o Amount and size of RAM installed, eg \"2GB RAM\"\n>\n> High Availability Cluster with two IBM P Series Server and one DS4700\n> Storage\n>\n> IBM P series P52A with 2-core 2.1 Ghz POWER5+ Processor Card , 36 MB L3\n> Cache ,16 GB of RAM,\n> 73.4 GB 10,000 RPM Ultra320 SCSI Drive for Operating System .\n>\n>\n>\n>          o Storage details (important for performance and corruption\n> questions)\n>                + Do you use a RAID controller? If so, what type of\n> controller? eg \"3Ware Escalade 8500-8\"\n>                      # Does it have a battery backed cache module?\n>                      # Is write-back caching enabled?\n>                + Do you use software RAID? If so, what software and what\n> version? eg \"Linux software RAID (md) 2.6.18-5-686 SMP mod_unload 686\n> REGPARM gcc-4.1\".\n>                      # In the case of Linux software RAID you can get the\n> details from the \"modinfo md_mod\" command\n>                + Is your PostgreSQL database on a SAN?\n>                      # Who made it, what kind, etc? Provide what details you\n> can.\n>                + How many hard disks are connected to the system and what\n> types are they? You need to say more than just \"6 disks\". At least give\n> maker, rotational speed and interface type, eg \"6 15,000rpm Seagate SAS\n> disks\".\n>                + How are your disks arranged for storage? Are you using\n> RAID? If so, what RAID level(s)? What PostgreSQL data is on what disks /\n> disk sets? What file system(s) are in use?\n>                      # eg: \"Two disks in RAID 1, with all PostgreSQL data\n> and programs stored on one ext3 file system.\"\n>                      # eg: \"4 disks in RAID 5 holding the pg data directory\n> on an ext3 file system. 2 disks in RAID 1 holding pg_clog, pg_xlog, the\n> temporary tablespace, and the sort scratch space, also on ext3.\".\n>                      # eg: \"Default Windows install of PostgreSQL\"\n>                + In case of corruption data reports:\n>                      # Have you had any unexpected power loss lately?\n>                      # Have you run a file system check? 
(chkdsk / fsck)\n>                      # Are there any error messages in the system logs?\n> (unix/linux: \"dmesg\", \"/var/log/syslog\" ; Windows: Event Viewer in Control\n> Panel -> Administrative Tools )\n>\n>\n> IBM SAN DS4700 Storage with Fibre Channel HDD (73.4 GB * 10)\n> Two Partitions - 73.4 GB * 3 RAID 5 - 134 GB storage partitions (One holding\n> Jakarata tomcat\n> application server and other holding Postgresql Database) .\n> Four Hard disk RAID 5 with ext3 file systems hold the pgdata on SAN .\n> Hard disk rotational speed is 73 GB 15K IBM 2 GB Fibre channel\n>\n> No power loss, filesystem check also fine, No errors on /var/log/syslog\n>\n> Following is the output of TOP command during offpeak time.\n>\n>\n> top - 18:36:56 up 77 days, 20:33,  1 user,  load average: 12.99, 9.22, 10.37\n> Tasks: 142 total,  12 running, 130 sleeping,   0 stopped,   0 zombie\n> Cpu(s): 46.1%us,  1.9%sy,  0.0%ni,  6.1%id,  3.0%wa,  0.0%hi,  0.1%si,\n> 42.9%st\n> Mem:  16133676k total, 13657396k used,  2476280k free,   450908k buffers\n> Swap: 14466492k total,      124k used, 14466368k free, 11590056k cached\n>\n>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n>\n> 22458 postgres  19   0 2473m 477m 445m R   40  3.0   0:15.49 postmaster\n>\n> 22451 postgres  15   0 2442m 447m 437m S   33  2.8   0:30.44 postmaster\n>\n> 22464 postgres  17   0 2443m 397m 383m R   28  2.5   0:13.78 postmaster\n>\n> 22484 postgres  16   0 2448m 431m 412m S   20  2.7   0:02.73 postmaster\n>\n> 22465 postgres  17   0 2440m 461m 449m R   15  2.9   0:03.52 postmaster\n>\n> 22452 postgres  16   0 2450m 727m 706m R   13  4.6   0:23.46 postmaster\n>\n> 22476 postgres  16   0 2437m 413m 405m S   13  2.6   0:06.11 postmaster\n>\n> 22485 postgres  16   0 2439m 230m 222m R    7  1.5   0:05.72 postmaster\n>\n> 22481 postgres  15   0 2436m 175m 169m S    7  1.1   0:04.44 postmaster\n>\n> 22435 postgres  17   0 2438m 371m 361m R    6  2.4   1:17.92 postmaster\n>\n> 22440 postgres  17   0 2445m 497m 483m R    5  3.2   1:44.50 postmaster\n>\n> 22486 postgres  17   0 2432m  84m  81m R    4  0.5   0:00.76 postmaster\n>\n>    3 root      34  19     0    0    0 R    0  0.0   1:47.50 ksoftirqd/0\n>\n> 4726 root      15   0 29540 8776 3428 S    0  0.1 140:02.98 X\n>\n> 24950 root      15   0     0    0    0 S    0  0.0   0:30.96 pdflush\n>\n>    1 root      16   0   812  316  280 S    0  0.0   0:13.29 init\n>\n>    2 root      RT   0     0    0    0 S    0  0.0   0:01.46 migration/0\n>\n>    4 root      RT   0     0    0    0 S    0  0.0   0:00.78 migration/1\n>\n>    5 root      34  19     0    0    0 S    0  0.0   1:36.79 ksoftirqd/1\n>\n>    6 root      RT   0     0    0    0 S    0  0.0   0:01.46 migration/2\n>\n>    7 root      34  19     0    0    0 R    0  0.0   1:49.83 ksoftirqd/2\n>\n>    8 root      RT   0     0    0    0 S    0  0.0   0:00.79 migration/3\n>\n>    9 root      34  19     0    0    0 S    0  0.0   1:38.18 ksoftirqd/3\n>\n>   10 root      10  -5     0    0    0 S    0  0.0   1:02.11 events/0\n>\n>   11 root      10  -5     0    0    0 S    0  0.0   1:03.27 events/1\n>\n>   12 root      10  -5     0    0    0 S    0  0.0   1:01.76 events/2\n>\n>   13 root      10  -5     0    0    0 S    0  0.0   1:02.29 events/3\n>\n>   14 root      10  -5     0    0    0 S    0  0.0   0:00.01 khelper\n>\n> 1016 root      10  -5     0    0    0 S    0  0.0   0:00.00 kthread\n>\n> 1054 root      10  -5     0    0    0 S    0  0.0   0:03.08 kblockd/0\n>\n> 1055 root      10  -5     0    0    0 S    0  0.0   0:02.83 
kblockd/1\n>\n> 1056 root      10  -5     0    0    0 S    0  0.0   0:03.19 kblockd/2\n>\n>\n>\n>\n> The CPU Load shoots upto 40 during peak time.\n>\n> Following is my postgresql.conf (without comments)\n>\n> hba_file = '/var/lib/pgsql/data/pg_hba.conf'\n> listen_addresses = '*'\n> port = 5432\n> max_connections = 1800\n> shared_buffers = 300000\n> max_fsm_relations = 1000\n> effective_cache_size = 200000\n> log_destination = 'stderr'\n> redirect_stderr = on\n> log_rotation_age = 0\n> log_rotation_size = 10240\n> silent_mode = onlog_line_prefix = '%t %d %u '\n> autovacuum = on\n> datestyle = 'iso, dmy'\n> lc_messages = 'en_US.UTF-8'\n> lc_monetary = 'en_US.UTF-8'\n> lc_numeric = 'en_US.UTF-8'\n> lc_time = 'en_US.UTF-8'\n>\n> User Access\n> Total Number of Users is 500\n> Maximum number of Concurrent users will be 500 during peak time\n> Off Peak time the maximum number of concurrent user will be around 150 to\n> 200.\n>\n>\n> Please let me know your suggestions to improve the performance.\n\nThe very first step is to determine if you are cpu bound or i/o bound.\n You need to monitor top or vmstat during high load period and report\nthe results here. Is the DS4700 direct attached? Sometimes using a\nSAN can throw the iowait numbers off a bit. I bet you are simply\nunderpowered in I/O department.\n\nmerlin\n", "msg_date": "Tue, 22 Sep 2009 10:18:11 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Shiva Raman wrote:\n> Dear all\n> \n> I am having a problem of high cpu loads in my postgres server during \n> peak time. Following are the\n> details of my setup (details as per the postgres wiki) .\n> \n> \n> *Following is the output of TOP command during offpeak time.*\n> \n> \n> top - 18:36:56 up 77 days, 20:33, 1 user, load average: 12.99, 9.22, 10.37\n> Tasks: 142 total, 12 running, 130 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 46.1%us, 1.9%sy, 0.0%ni, 6.1%id, 3.0%wa, 0.0%hi, 0.1%si, \n> 42.9%st\n> Mem: 16133676k total, 13657396k used, 2476280k free, 450908k buffers\n> Swap: 14466492k total, 124k used, 14466368k free, 11590056k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n> \n> 22458 postgres 19 0 2473m 477m 445m R 40 3.0 0:15.49 postmaster \n> \n> 22451 postgres 15 0 2442m 447m 437m S 33 2.8 0:30.44 postmaster \n> \n> 22464 postgres 17 0 2443m 397m 383m R 28 2.5 0:13.78 postmaster \n> \n> 22484 postgres 16 0 2448m 431m 412m S 20 2.7 0:02.73 postmaster \n> \n> 22465 postgres 17 0 2440m 461m 449m R 15 2.9 0:03.52 postmaster \n> \n> 22452 postgres 16 0 2450m 727m 706m R 13 4.6 0:23.46 postmaster \n> \n> 22476 postgres 16 0 2437m 413m 405m S 13 2.6 0:06.11 postmaster \n> \n> 22485 postgres 16 0 2439m 230m 222m R 7 1.5 0:05.72 postmaster \n> \n> 22481 postgres 15 0 2436m 175m 169m S 7 1.1 0:04.44 postmaster \n> \n> 22435 postgres 17 0 2438m 371m 361m R 6 2.4 1:17.92 postmaster \n> \n> 22440 postgres 17 0 2445m 497m 483m R 5 3.2 1:44.50 postmaster \n> \n> 22486 postgres 17 0 2432m 84m 81m R 4 0.5 0:00.76 postmaster \n> \n\n\nFirst off, nice report.\n\nI see you are on a pretty old version of pg. Are you vacuuming regularly?\n\nIf you run a 'ps ax|grep post' do you see anything that says 'idle in \ntransaction'? (I hope that old of version will show it. my processes \nshow up as postgres not postmaster)\n\nThe top looks like you are cpu bound. Have you tried enabling logging \nslow queries? 
(again, I hope your version supports that) It could be \nyou have a query or two that are not using indexes, and slowing \neverything down.\n\nAlso on the top, it has this: 42.9%st. Are you in a vm? or running \nvm's on the box?\n\nIts weird, you have 6.1% idle and 3.0% waiting for disk and yet you have \na load of 13. Load usually means somebody is waiting for something. \nBut you have a little cpu idle time... and you have very low disk \nwaits... you are using very little swap. hum... odd...\n\n-Andy\n", "msg_date": "Tue, 22 Sep 2009 09:19:46 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Andy Colson wrote:\n> Shiva Raman wrote:\n>> Dear all\n>>\n>> I am having a problem of high cpu loads in my postgres server during \n>> peak time. Following are the\n>> details of my setup (details as per the postgres wiki) .\n>>\n>>\n>> *Following is the output of TOP command during offpeak time.*\n>>\n>>\n>> top - 18:36:56 up 77 days, 20:33, 1 user, load average: 12.99, 9.22, \n>> 10.37\n>> Tasks: 142 total, 12 running, 130 sleeping, 0 stopped, 0 zombie\n>> Cpu(s): 46.1%us, 1.9%sy, 0.0%ni, 6.1%id, 3.0%wa, 0.0%hi, 0.1%si, \n>> 42.9%st\n>> Mem: 16133676k total, 13657396k used, 2476280k free, 450908k buffers\n>> Swap: 14466492k total, 124k used, 14466368k free, 11590056k cached\n>>\n> \n> \n> First off, nice report.\n> \n> I see you are on a pretty old version of pg. Are you vacuuming regularly?\n> \n> If you run a 'ps ax|grep post' do you see anything that says 'idle in \n> transaction'? (I hope that old of version will show it. my processes \n> show up as postgres not postmaster)\n> \n> The top looks like you are cpu bound. Have you tried enabling logging \n> slow queries? (again, I hope your version supports that) It could be \n> you have a query or two that are not using indexes, and slowing \n> everything down.\n> \n> Also on the top, it has this: 42.9%st. Are you in a vm? or running \n> vm's on the box?\n> \n> Its weird, you have 6.1% idle and 3.0% waiting for disk and yet you have \n> a load of 13. Load usually means somebody is waiting for something. But \n> you have a little cpu idle time... and you have very low disk waits... \n> you are using very little swap. hum... odd...\n> \n> -Andy\n> \n\nLooks like I missed an important point. You said this was top during \noff peak time. So ignore my high load ramblings.\n\nBut... if this is off peak, and you only have 6% idle cpu... I'd say \nyour cpu bound. (I'm still not sure what the 42.9%st is, so maybe I'm \noff base with the 6% idle too)\n\n-Andy\n", "msg_date": "Tue, 22 Sep 2009 09:35:26 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: Shiva Raman\n> Enviado el: Martes, 22 de Septiembre de 2009 10:55\n> Para: [email protected]\n> Asunto: [PERFORM] High CPU load on Postgres Server during \n> Peak times!!!!\n> \n> Dear all \n> \n> I am having a problem of high cpu loads in my postgres \n> server during peak time. \n\n\nSome quick advice:\n\n> \n> clusternode2:~ # rpm -qa | grep postgres\n> postgresql-devel-8.1.9-1.2\n> postgresql-8.1.9-1.2\n> postgresql-docs-8.1.9-1.2\n> postgresql-server-8.1.9-1.2\n> postgresql-libs-64bit-8.1.9-1.2\n> postgresql-libs-8.1.9-1.2\n> postgresql-jdbc-8.1-12.2\n> postgresql-contrib-8.1.9-1.2\n> \n> \n\n8.1 is quite old. 
Consider upgrading as newer versions are faster.\nCurrent Postgres version is 8.4. \n\n> \n> High Availability Cluster with two IBM P Series Server and \n> one DS4700 Storage\n> \n> IBM P series P52A with 2-core 2.1 Ghz POWER5+ Processor Card \n> , 36 MB L3 Cache ,16 GB of RAM,\n> 73.4 GB 10,000 RPM Ultra320 SCSI Drive for Operating System . \n> \n\nSounds you are underpowered on cpu for 500 concurrent users.\nOf course this really depends on what they are doing.\n\n> \n> IBM SAN DS4700 Storage with Fibre Channel HDD (73.4 GB * 10) \n> Two Partitions - 73.4 GB * 3 RAID 5 - 134 GB storage \n> partitions (One holding Jakarata tomcat\n> application server and other holding Postgresql Database) .\n> Four Hard disk RAID 5 with ext3 file systems hold the pgdata on SAN . \n> Hard disk rotational speed is 73 GB 15K IBM 2 GB Fibre channel \n> \n\nA more suitable partitioning for an OLTP database would be:\n\n2 x 73.4 GB RAID 1 for App Server + Postgresql and pg_xlog\n8 x 73.4 GB RAID 10 for pgdata\n\nRAID 5 is strongly discouraged.\n\n> \n> Following is the output of TOP command during offpeak time. \n> \n> \n> top - 18:36:56 up 77 days, 20:33, 1 user, load average: \n> 12.99, 9.22, 10.37\n> Tasks: 142 total, 12 running, 130 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 46.1%us, 1.9%sy, 0.0%ni, 6.1%id, 3.0%wa, 0.0%hi, \n> 0.1%si, 42.9%st\n> Mem: 16133676k total, 13657396k used, 2476280k free, \n> 450908k buffers\n> Swap: 14466492k total, 124k used, 14466368k free, \n> 11590056k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ \n> COMMAND \n> 22458 postgres 19 0 2473m 477m 445m R 40 3.0 0:15.49 \n> postmaster \n> 22451 postgres 15 0 2442m 447m 437m S 33 2.8 0:30.44 \n> postmaster \n> 22464 postgres 17 0 2443m 397m 383m R 28 2.5 0:13.78 \n> postmaster \n> 22484 postgres 16 0 2448m 431m 412m S 20 2.7 0:02.73 \n> postmaster \n> 22465 postgres 17 0 2440m 461m 449m R 15 2.9 0:03.52 \n> postmaster \n> 22452 postgres 16 0 2450m 727m 706m R 13 4.6 0:23.46 \n> postmaster \n> 22476 postgres 16 0 2437m 413m 405m S 13 2.6 0:06.11 \n> postmaster \n> 22485 postgres 16 0 2439m 230m 222m R 7 1.5 0:05.72 \n> postmaster \n> 22481 postgres 15 0 2436m 175m 169m S 7 1.1 0:04.44 \n> postmaster \n> 22435 postgres 17 0 2438m 371m 361m R 6 2.4 1:17.92 \n> postmaster \n> 22440 postgres 17 0 2445m 497m 483m R 5 3.2 1:44.50 \n> postmaster \n> 22486 postgres 17 0 2432m 84m 81m R 4 0.5 0:00.76 \n> postmaster \n> \n\nAre you running several Postgres clusters on this hardware?\nPlease post Top output showing cmd line arguments (press 'c')\n\n\n> \n> User Access \n> Total Number of Users is 500 \n> Maximum number of Concurrent users will be 500 during peak time\n> Off Peak time the maximum number of concurrent user will be \n> around 150 to 200. \n> \n\nA connection pooler like pgpool or pgbouncer would considerably reduce the\nburden on your system.\n\n\nRegards,\nFernando.\n\n", "msg_date": "Tue, 22 Sep 2009 12:29:52 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Hi\n\n\nThanks a lot for the reply.\n\n\n *I see you are on a pretty old version of pg. Are you vacuuming regularly?*\n\n\n Yes, Vaccuuming is done every day morning at 06 am\n\nIt is running perfectly fine.\n\n\n *\n*\n\n*If you run a 'ps ax|grep post' do you see anything that says 'idle in\ntransaction'? (I hope that old of version will show it. 
my processes show up\nas postgres not postmaster)*\n\n\n Lots of requests shows as 'idle in transaction'.\n\n\n Currently i am restarting the database using a cron job every 30 minutes\nduring offpeak time\n\nand every 15 minutes during the peak time.\n\n\n The top looks like you are cpu bound.\n\n\n\n *Have you tried enabling logging slow queries? (again, I hope your version\nsupports that) It could be you have a query or two that are not using\nindexes, and slowing everything down.*\n\n\n\nExactly right, thanks for the tip.\n\nI indexed few tables frequently accessed which are not indexed. After\nindexing the load has come down to 50 % during Peak time its between 10 and\n20 and during offpeak its between 4 and 8 .\n\nThe PowerPC cpu is having some virtual layer that is shown in the Steal\nvalue.\n\n\n *Its weird, you have 6.1% idle and 3.0% waiting for disk and yet you have a\nload of 13. Load usually means somebody is waiting for something. But you\nhave a little cpu idle time... and you have very low disk waits... you are\nusing very little swap. hum... odd...*\n\n\n\n As per the concurrency of 300 to 400 users, the following parameters are\nchanged in\n\npostgresql conf based on the calculation provided in the postgresql\ndocumentation.\n\n\n\n Max connections = 1800 ( Too much open connections will result in unwanted\nmemory wastage)\n\nShared Buffers = 375 000 ( 375000 * 8 * 1024 /100 = 3072 MB ) # proposed\nvalue is 1/4 the actual memory\n\nEffective Cache Size = 266000 ( 266000 * 8 * 1024 /100 = 2179 MB ) #\nproposed value is 1/3 memory after OS Allocation\n\nwork_mem = 3000 ( 3000 * max connections * 1024 = 3000 * 1800 * 1024 = 5529\nMB ( this is the working memory for postgres) )\n\nmax_fsm_pages = 20000 ( This has to be analyzed and can be increased to\n40000, this can be done after one or two day observation)\n\n\n Postgresql.conf\n\n---------------\n\n\n hba_file = '/var/lib/pgsql/data/pg_hba.conf'\n\nlisten_addresses = '*'\n\nport = 5432\n\nmax_connections = 1800\n\nshared_buffers = 300000\n\nmax_fsm_relations = 1000\n\neffective_cache_size = 200000\n\nlog_destination = 'stderr'\n\nredirect_stderr = on\n\nlog_rotation_age = 0\n\nlog_rotation_size = 10240\n\nsilent_mode = onlog_line_prefix = '%t %d %u '\n\nautovacuum = on\n\ndatestyle = 'iso, dmy'\n\nlc_messages = 'en_US.UTF-8'\n\nlc_monetary = 'en_US.UTF-8'\n\nlc_numeric = 'en_US.UTF-8'\n\nlc_time = 'en_US.UTF-8'\n\n\n Any modifications i have to do in this values ?\n\n\n Regds\n\n\n Shiva Raman .\n\n\nHi Thanks a lot for the reply. \n\n\n\nI see you are on a pretty old version\nof pg. Are you vacuuming regularly?\n\n\nYes, Vaccuuming\nis done every day morning at 06 am \n\nIt is running\nperfectly fine. \n\n\n\n\n\nIf you run a 'ps ax|grep post' do you\nsee anything that says 'idle in transaction'? (I hope that old of\nversion will show it. my processes show up as postgres not\npostmaster)\n\n\nLots of requests\nshows as 'idle in transaction'.\n\n\nCurrently i am\nrestarting the database using a cron job every 30 minutes during\noffpeak time\nand every 15\nminutes during the peak time. \n\n\n\nThe top looks\nlike you are cpu bound.\n\n\n\n\n\nHave you tried enabling logging slow\nqueries? (again, I hope your version supports that) It could be you\nhave a query or two that are not using indexes, and slowing\neverything down.\n\n\nExactly right, thanks for the tip.\n\nI indexed few\ntables frequently accessed which are not indexed. 
After indexing the\nload has come down to 50 % during Peak time its between 10 and 20 and\nduring offpeak its between 4 and 8 . \n\nThe PowerPC cpu\nis having some virtual layer that is shown in the Steal value. \n\n\n\nIts weird, you\nhave 6.1% idle and 3.0% waiting for disk and yet you have a load of\n13. Load usually means somebody is waiting for something. But you\nhave a little cpu idle time... and you have very low disk waits...\nyou are using very little swap. hum... odd...\n\n\n\n\nAs per the concurrency of 300 to 400\nusers, the following parameters are changed in \n\npostgresql conf based on the\ncalculation provided in the postgresql documentation. \n\n\n\n\n\nMax connections = 1800 ( Too much open\nconnections will result in unwanted memory wastage) \n\nShared Buffers = 375 000 ( 375000 * 8 *\n1024 /100 = 3072 MB ) # proposed value is 1/4 the actual memory \n\nEffective Cache Size = 266000 ( 266000\n* 8 * 1024 /100 = 2179 MB ) # proposed value is 1/3 memory after OS\nAllocation \n\nwork_mem = 3000 ( 3000 * max\nconnections * 1024 = 3000 * 1800 * 1024 = 5529 MB ( this is the\nworking memory for postgres) )\nmax_fsm_pages = 20000 ( This has to be\nanalyzed and can be increased to 40000, this can be done after one or\ntwo day observation)\n\n\nPostgresql.conf \n\n---------------\n\n\nhba_file =\n'/var/lib/pgsql/data/pg_hba.conf'\nlisten_addresses = '*'\nport = 5432\nmax_connections = 1800\nshared_buffers = 300000\nmax_fsm_relations = 1000\neffective_cache_size = 200000\nlog_destination = 'stderr'\nredirect_stderr = on\nlog_rotation_age = 0\nlog_rotation_size = 10240\nsilent_mode = onlog_line_prefix = '%t\n%d %u '\nautovacuum = on\ndatestyle = 'iso, dmy'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8' \n\n\n\nAny modifications i have to do in this\nvalues ? \n\n\n\nRegds\n\n\nShiva Raman .", "msg_date": "Thu, 24 Sep 2009 02:25:07 +0800", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Hi\n\nThanks for your mail.\n\n*Some quick advice:*\n\n*\n*\n\n*>*\n\n*> clusternode2:~ # rpm -qa | grep postgres*\n\n*> postgresql-devel-8.1.9-1.2*\n\n*> postgresql-8.1.9-1.2*\n\n*> postgresql-docs-8.1.9-1.2*\n\n*> postgresql-server-8.1.9-1.2*\n\n*> postgresql-libs-64bit-8.1.9-1.2*\n\n*> postgresql-libs-8.1.9-1.2*\n\n*> postgresql-jdbc-8.1-12.2*\n\n*> postgresql-contrib-8.1.9-1.2*\n\n*>*\n\n*>*\n\n\n 8.1 is quite old. 
Consider upgrading as newer versions are faster.\n\nCurrent Postgres version is 8.4.\n\n\n >\n\n*> High Availability Cluster with two IBM P Series Server and*\n\n*> one DS4700 Storage*\n\n*>*\n\n*> IBM P series P52A with 2-core 2.1 Ghz POWER5+ Processor Card*\n\n*> , 36 MB L3 Cache ,16 GB of RAM,*\n\n*> 73.4 GB 10,000 RPM Ultra320 SCSI Drive for Operating System .*\n\n*>*\n\n*\n*\n\n*Sounds you are underpowered on cpu for 500 concurrent users.*\n\n*Of course this really depends on what they are doing.*\n\n*\n*\n\n*>*\n\n*> IBM SAN DS4700 Storage with Fibre Channel HDD (73.4 GB * 10)*\n\n*> Two Partitions - 73.4 GB * 3 RAID 5 - 134 GB storage*\n\n*> partitions (One holding Jakarata tomcat*\n\n*> application server and other holding Postgresql Database) .*\n\n*> Four Hard disk RAID 5 with ext3 file systems hold the pgdata on SAN .*\n\n*> Hard disk rotational speed is 73 GB 15K IBM 2 GB Fibre channel*\n\n*>*\n\n*\n*\n\n*A more suitable partitioning for an OLTP database would be:*\n\n*\n*\n\n*2 x 73.4 GB RAID 1 for App Server + Postgresql and pg_xlog*\n\n*8 x 73.4 GB RAID 10 for pgdata*\n\n*\n*\n\n*RAID 5 is strongly discouraged.*\n\n*- Show quoted text -*\n\n*\n*\n\n*>*\n\n*> Following is the output of TOP command during offpeak time.*\n\n*>*\n\n*>*\n\n*> top - 18:36:56 up 77 days, 20:33, 1 user, load average:*\n\n*> 12.99, 9.22, 10.37*\n\n*> Tasks: 142 total, 12 running, 130 sleeping, 0 stopped, 0 zombie*\n\n*> Cpu(s): 46.1%us, 1.9%sy, 0.0%ni, 6.1%id, 3.0%wa, 0.0%hi,*\n\n*> 0.1%si, 42.9%st*\n\n*> Mem: 16133676k total, 13657396k used, 2476280k free,*\n\n*> 450908k buffers*\n\n*> Swap: 14466492k total, 124k used, 14466368k free,*\n\n*> 11590056k cached*\n\n*>*\n\n*> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+*\n\n*> COMMAND*\n\n*> 22458 postgres 19 0 2473m 477m 445m R 40 3.0 0:15.49*\n\n*> postmaster*\n\n*> 22451 postgres 15 0 2442m 447m 437m S 33 2.8 0:30.44*\n\n*> postmaster*\n\n*> 22464 postgres 17 0 2443m 397m 383m R 28 2.5 0:13.78*\n\n*> postmaster*\n\n*> 22484 postgres 16 0 2448m 431m 412m S 20 2.7 0:02.73*\n\n*> postmaster*\n\n*> 22465 postgres 17 0 2440m 461m 449m R 15 2.9 0:03.52*\n\n*> postmaster*\n\n*> 22452 postgres 16 0 2450m 727m 706m R 13 4.6 0:23.46*\n\n*> postmaster*\n\n*> 22476 postgres 16 0 2437m 413m 405m S 13 2.6 0:06.11*\n\n*> postmaster*\n\n*> 22485 postgres 16 0 2439m 230m 222m R 7 1.5 0:05.72*\n\n*> postmaster*\n\n*> 22481 postgres 15 0 2436m 175m 169m S 7 1.1 0:04.44*\n\n*> postmaster*\n\n*> 22435 postgres 17 0 2438m 371m 361m R 6 2.4 1:17.92*\n\n*> postmaster*\n\n*> 22440 postgres 17 0 2445m 497m 483m R 5 3.2 1:44.50*\n\n*> postmaster*\n\n*> 22486 postgres 17 0 2432m 84m 81m R 4 0.5 0:00.76*\n\n*> postmaster*\n\n*>*\n\n*\n*\n\n*Are you running several Postgres clusters on this hardware?*\n\n*Please post Top output showing cmd line arguments (press 'c')*\n\n\n\n NO Only single Postgres instance\n\n\n >\n\n> User Access\n\n> Total Number of Users is 500\n\n> Maximum number of Concurrent users will be 500 during peak time\n\n> Off Peak time the maximum number of concurrent user will be\n\n> around 150 to 200.\n\n>\n\n*\n*\n\n*A connection pooler like pgpool or pgbouncer would considerably reduce the*\n\n*burden on your system.*\n\n\n\n I am already using connection pooling in tomcat web server, so installing\npgpool\n\nwill help enhancing the performance ?Any changes i have to do in my\napplication to\n\ninclude pgpool?\n\n\n Regds\n\n\nShiva raman\n\nHi Thanks for your mail. 
\n\n\nRegdsShiva raman", "msg_date": "Thu, 24 Sep 2009 02:28:39 +0800", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Shiva Raman wrote:\n> /If you run a 'ps ax|grep post' do you see anything that says 'idle in \n> transaction'? (I hope that old of version will show it. my processes \n> show up as postgres not postmaster)/\n> \n> \n> Lots of requests shows as 'idle in transaction'.\n> \n\nEww. I think that's bad. A connection that has a transaction open will \ncause lots of row versions, which use up ram, and make it slower to step \nthrough the table (even with an index). You really need to fix up your \ncode and make sure you commit transactions. (any statement (select, \ninsert, update) will start a new transaction that you need to explicitly \ncommit).\n\n\n> \n> Currently i am restarting the database using a cron job every 30 minutes \n> during offpeak time\n> \n> and every 15 minutes during the peak time.\n\ndo you get lots of update/deletes? Or are there mostly selects? If its \nmostly update/delete then the 'idle in transactions' is killing you. If \nyou have mostly selects then its probably something else.\n\n\n> work_mem = 3000 ( 3000 * max connections * 1024 = 3000 * 1800 * 1024 = \n> 5529 MB ( this is the working memory for postgres) )\n\nwork_mem is per connection. If you changed this to get a better query \nplan then ok, but dont change it just for the sake of changing it. \nIck... I just went back and checked, you have 16G of ram... this \nprobably isn't a problem. Nevermind.\n\n\n-Andy\n", "msg_date": "Wed, 23 Sep 2009 13:53:14 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": ">>>\n>>> User Access\n>>> Total Number of Users is 500\n>>> Maximum number of Concurrent users will be 500 during peak time\n>>> Off Peak time the maximum number of concurrent user will be\n>>> around 150 to 200.\n>>>\n>>\n>>A connection pooler like pgpool or pgbouncer would considerably reduce the\n>>burden on your system.\n>>\n>\n>I am already using connection pooling in tomcat web server, so installing\npgpool\n>will help enhancing the performance ?Any changes i have to do in my\napplication to \n>include pgpool? \n>\n\nThere shouldn't be need for another pooling solution.\nAnyway, you probably dont want 1800 concurrent connections on your database\nserver, nor even get near that number.\n\nCheck the number of actual connections with: \n select count(*) from pg_stat_activity;\n\nA vmstat run during high loads could provide a hindsight to if the number of\nconnections is straining your server.\n\nIf the number of connections is high (say over 200-300), try reducing the\npool size in Tomcat and see what happens.\nYou possibly could do fine with something between 50 and 100 connections.\n\n\nRegards,\nFernando.\n\n", "msg_date": "Wed, 23 Sep 2009 17:50:08 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" 
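(Before shrinking the Tomcat pool it can help to see where the open connections
actually come from. A rough sketch in plain SQL against the stock pg_stat_activity
view, which should run as-is on 8.1:

  SELECT datname, usename, count(*) AS connections
  FROM pg_stat_activity
  GROUP BY datname, usename
  ORDER BY connections DESC;

If the totals stay far below max_connections even at peak time, the 1800 limit is
only costing memory and a much smaller pool will not hurt.)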
}, { "msg_contents": "Fernando Hevia wrote:\n>>>> User Access\n>>>> Total Number of Users is 500\n>>>> Maximum number of Concurrent users will be 500 during peak time\n>>>> Off Peak time the maximum number of concurrent user will be\n>>>> around 150 to 200.\n>>>>\n>>>> \n>>> A connection pooler like pgpool or pgbouncer would considerably reduce the\n>>> burden on your system.\n>>>\n>>> \n>> I am already using connection pooling in tomcat web server, so installing\n>> \n> pgpool\n> \n>> will help enhancing the performance ?Any changes i have to do in my\n>> \n> application to \n> \n>> include pgpool? \n>>\n>> \n>\n> There shouldn't be need for another pooling solution.\n> Anyway, you probably dont want 1800 concurrent connections on your database\n> server, nor even get near that number.\n>\n> Check the number of actual connections with: \n> select count(*) from pg_stat_activity;\n>\n> A vmstat run during high loads could provide a hindsight to if the number of\n> connections is straining your server.\n>\n> If the number of connections is high (say over 200-300), try reducing the\n> pool size in Tomcat and see what happens.\n> You possibly could do fine with something between 50 and 100 connections.\n>\n> \nI can second this - I have an EXTREMELY busy forum system using pgpool\nand during peak hours it runs very well within around 100 connections in\nuse.\n\n-- Karl", "msg_date": "Wed, 23 Sep 2009 15:52:01 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "On Wed, Sep 23, 2009 at 12:25 PM, Shiva Raman <[email protected]> wrote:\n\nFirst let me say that upgrading to a later version is likely going to\nhelp as much as anything else you're likely to pick up from this\ndiscussion. Not that this discussion isn't worthwhile, it is.\n\n> If you run a 'ps ax|grep post' do you see anything that says 'idle in\n> transaction'? (I hope that old of version will show it. my processes show up\n> as postgres not postmaster)\n>\n> Lots of requests shows as 'idle in transaction'.\n>\n> Currently i am restarting the database using a cron job every 30 minutes\n> during offpeak time\n>\n> and every 15 minutes during the peak time.\n\nWow. It'd be way better if you could fix your application /\nconnection layer to not do that.\n\n> As per the concurrency of 300 to 400 users, the following parameters are\n> changed in\n>\n> postgresql conf based on the calculation provided in the postgresql\n> documentation.\n>\n> Max connections = 1800 ( Too much open connections will result in unwanted\n> memory wastage)\n\nThis is very high. If you only need 400 users, you might want to\nconsider setting this to 500 or so.\n\n> Shared Buffers = 375 000 ( 375000 * 8 * 1024 /100 = 3072 MB ) # proposed\n> value is 1/4 the actual memory\n\nReasonable, but don't just blindly use 1/4 memory. For transactional\nloads smaller is often better. For reporting dbs, larger is often\nbetter. Test it to see what happens with your load and varying\namounts of shared_buffers\n\n> Effective Cache Size = 266000 ( 266000 * 8 * 1024 /100 = 2179 MB ) #\n> proposed value is 1/3 memory after OS Allocation\n\nBetter to add the cache / buffer amount of OS and shared_buffers to\nget it. Which would be much higher. 
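(A rough worked number for this box, as a sketch rather than a tuned recommendation:
with 16 GB of RAM, 3/4 is about 12 GB, and 12 GB divided by the 8 kB page size is
roughly 1,500,000 -- so on 8.1, where effective_cache_size is expressed in 8 kB pages,
something in the region of 1,500,000 rather than the 200,000 in the posted
postgresql.conf, assuming most of that RAM really is available to Postgres and the
OS cache.)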
Generally it's in the 3/4 of\nmemory on most machines.\n\n> work_mem = 3000 ( 3000 * max connections * 1024 = 3000 * 1800 * 1024 = 5529\n> MB ( this is the working memory for postgres) )\n\nThis is the max work_mem per sort or hash aggregate. Note that if all\nof your maximum backends connected and each did 2 sorts and one hash\naggregate at once, you could use max_connections * 3 * work_mem memory\nat once. Machine swaps til it dies.\n\nAssuming this is 3000 8k blocks that 24Meg which is high but not unreasonable.\n\n\n> max_fsm_pages = 20000 ( This has to be analyzed and can be increased to\n> 40000, this can be done after one or two day observation)\n\nTo see what you need here, log into the postgres database as a\nsuperuser and issue the command:\n\nvacuum verbose;\n\nand see what the last 5 or so lines have to say. They'll look like this:\n\nINFO: free space map contains 339187 pages in 18145 relations\nDETAIL: A total of 623920 page slots are in use (including overhead).\n623920 page slots are required to track all free space.\nCurrent limits are: 10000000 page slots, 500000 relations, using 109582 kB.\n", "msg_date": "Wed, 23 Sep 2009 14:55:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Hi\n\nToday the load observed very high load . I am pasting the top.\n\n*TOP *\ntop - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84, 33.13,\n25.84\nTasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie\nCpu(s): 49.2%us, 0.8%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.1%si,\n50.0%st\nMem: 16133676k total, 14870736k used, 1262940k free, 475484k buffers\nSwap: 14466492k total, 124k used, 14466368k free, 11423616k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n 4152 postgres 17 0 2436m 176m 171m R 16 1.1 0:03.09 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4122 postgres 17 0 2431m 20m 17m R 12 0.1 0:06.38 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4007 postgres 16 0 2434m 80m 75m R 11 0.5 0:26.46 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 3994 postgres 16 0 2432m 134m 132m R 10 0.9 0:43.40 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4166 postgres 16 0 2433m 12m 8896 R 9 0.1 0:02.71 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4110 postgres 15 0 2436m 224m 217m S 8 1.4 0:06.83 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4061 postgres 16 0 2446m 491m 473m R 8 3.1 0:17.32 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4113 postgres 16 0 2432m 68m 65m R 8 0.4 0:11.03 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4071 postgres 16 0 2435m 200m 194m R 7 1.3 0:13.69 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4169 postgres 15 0 2436m 122m 117m R 7 0.8 0:00.93 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4178 postgres 16 0 2432m 77m 75m R 7 0.5 0:00.56 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4108 postgres 16 0 2437m 301m 293m R 6 1.9 0:11.94 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4155 postgres 16 0 2438m 252m 244m S 5 1.6 0:02.80 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4190 postgres 15 0 2432m 10m 8432 R 5 0.1 0:00.71 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 3906 postgres 16 0 2433m 124m 119m R 5 0.8 0:57.28 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 3970 postgres 16 0 2442m 314m 304m R 5 2.0 0:16.43 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4130 postgres 17 0 2433m 76m 72m R 5 
0.5 0:03.76 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4179 postgres 16 0 2432m 105m 102m R 5 0.7 0:01.11 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4125 postgres 17 0 2436m 398m 391m R 4 2.5 0:05.62 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4162 postgres 16 0 2432m 125m 122m R 4 0.8 0:01.01 postgres:\npostgres dbEnterpriser_09_10 192.168.10.\n 4185 postgres 1\n\n*OUTPUT OF IOSTAT 1 5 (is SAN becoming a bottleneck,shows 50% CPU usage?) 
*\n\nclusternode2:~ # iostat 1 5\nLinux 2.6.16.46-0.12-ppc64 (clusternode2) 09/24/2009 _ppc64_ (4\nCPU)\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 16.00 0.00 0.68 0.61 10.72 71.99\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 1.08 1.96 22.54 13505448 155494808\nsdb 0.00 0.20 0.45 1410179 3099920\nsdc 0.00 0.05 0.01 357404 78840\nscd0 0.00 0.00 0.00 136 0\nsdd 12.20 77.69 343.49 535925176 2369551848\nsde 0.00 0.00 0.00 1120 0\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 29.46 0.00 0.25 0.00 7.43 62.87\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 0.00 0.00 0.00 0 0\nsdb 0.00 0.00 0.00 0 0\nsdc 0.00 0.00 0.00 0 0\nscd0 0.00 0.00 0.00 0 0\nsdd 0.00 0.00 0.00 0 0\nsde 0.00 0.00 0.00 0 0\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 46.17 0.00 0.99 0.00 38.52 14.32\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 0.00 0.00 0.00 0 0\nsdb 0.00 0.00 0.00 0 0\nsdc 0.00 0.00 0.00 0 0\nscd0 0.00 0.00 0.00 0 0\nsdd 3.96 0.00 118.81 0 120\nsde 0.00 0.00 0.00 0 0\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 48.88 0.00 0.99 0.00 49.88 0.25\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 0.00 0.00 0.00 0 0\nsdb 0.00 0.00 0.00 0 0\nsdc 0.00 0.00 0.00 0 0\nscd0 0.00 0.00 0.00 0 0\nsdd 0.00 0.00 0.00 0 0\nsde 0.00 0.00 0.00 0 0\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 47.86 0.00 2.14 0.00 50.00 0.00\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 0.00 0.00 0.00 0 0\nsdb 0.00 0.00 0.00 0 0\nsdc 0.00 0.00 0.00 0 0\nscd0 0.00 0.00 0.00 0 0\nsdd 0.00 0.00 0.00 0 0\nsde 0.00 0.00 0.00 0 0\n\n\n\n\n\nAndy Colson Wrote : ,\n*Eww. I think that's bad. A connection that has a transaction open will\ncause lots of row versions, which use up ram, and make it slower to step\nthrough the table (even with an index). You really need to fix up your code\nand make sure you commit transactions. (any statement (select, insert,\nupdate) will start a new transaction that you need to explicitly commit).\n\n*With reference to this suggestion by Andy Colson, we checked the\napplication code and found that onlyINSERT, UPDATE has COMMIT and SELECT\nhas no commit, We are using a lot of \"Ajax Suggest\" in the all the forms\naccessed for fetching the data using SELECT statements which are not\nexplicitly commited. We have started updating the code on this.\n\nThanks for this suggestion.\n\n\nAgain thanks to suggestion of Scott Marlowe in reducing the number of\nconnections. This was now reducted to 500 .\n\n\nAs i mentioned in the mail, i am restarting the database every 30 minutes. I\nfound a shell script in the wiki which could the idle in transaction pids.\nThis is the code. The code will kill all old pids in the server.\n\nThis is the script\n\n/usr/bin/test `/usr/bin/pgrep -f 'idle in transaction' | \\\n\n\n /usr/bin/wc -l ` -gt 20 && /usr/bin/pkill -o -f 'idle in transaction'\n\nand this is the link where the script was provided.\n\nhttp://wiki.dspace.org/index.php/Idle_In_Transaction_Problem\n\nI tried it run it as test in the server, but the script is not executing.\nEven i see many of the \"Idle in transaction \" PIDs are showing R (RUnning\nstatus) , but most of them are showing S(Sleep ) status. Please suggest\nanyway i can resolve this idle transaction issue.\n\nRegards\n\nShiva Raman\n\nHi Today the load observed very high load . I am pasting the top. 
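(Rather than killing backends with pkill, the stuck sessions can usually be watched
from plain SQL first. A sketch against pg_stat_activity -- column names as in the
8.1 line, and current_query is only filled in when stats_command_string is on:

  SELECT procpid, datname, usename, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction'
  ORDER BY query_start;

The oldest entries point at the code paths that open a transaction and never commit
or roll back; once those are fixed, the half-hourly restart cron job should no longer
be needed.)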
", "msg_date": "Thu, 24 Sep 2009 18:20:29 +0530", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "For 'idle in transaction' issues, you have to fix your code. I faced this\nissue couple of months back. How good is your exception handling? Are you\nrollingback/comitting your transactions while exceptions are thrown, during\nthe course of db operations?\n\nHonestly I wouldn't go for these scripts which kill processes.\n\n\nOn Thu, Sep 24, 2009 at 6:20 PM, Shiva Raman <[email protected]> wrote:\n\n> Hi\n>\n> Today the load observed very high load . 
I am pasting the top.\n>\n> *TOP *\n> top - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84, 33.13,\n> 25.84\n> Tasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 49.2%us, 0.8%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.1%si,\n> 50.0%st\n> Mem: 16133676k total, 14870736k used, 1262940k free, 475484k buffers\n> Swap: 14466492k total, 124k used, 14466368k free, 11423616k cached\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND\n> 4152 postgres 17 0 2436m 176m 171m R 16 1.1 0:03.09 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4122 postgres 17 0 2431m 20m 17m R 12 0.1 0:06.38 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4007 postgres 16 0 2434m 80m 75m R 11 0.5 0:26.46 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 3994 postgres 16 0 2432m 134m 132m R 10 0.9 0:43.40 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4166 postgres 16 0 2433m 12m 8896 R 9 0.1 0:02.71 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4110 postgres 15 0 2436m 224m 217m S 8 1.4 0:06.83 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4061 postgres 16 0 2446m 491m 473m R 8 3.1 0:17.32 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4113 postgres 16 0 2432m 68m 65m R 8 0.4 0:11.03 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4071 postgres 16 0 2435m 200m 194m R 7 1.3 0:13.69 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4169 postgres 15 0 2436m 122m 117m R 7 0.8 0:00.93 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4178 postgres 16 0 2432m 77m 75m R 7 0.5 0:00.56 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4108 postgres 16 0 2437m 301m 293m R 6 1.9 0:11.94 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4155 postgres 16 0 2438m 252m 244m S 5 1.6 0:02.80 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4190 postgres 15 0 2432m 10m 8432 R 5 0.1 0:00.71 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 3906 postgres 16 0 2433m 124m 119m R 5 0.8 0:57.28 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 3970 postgres 16 0 2442m 314m 304m R 5 2.0 0:16.43 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4130 postgres 17 0 2433m 76m 72m R 5 0.5 0:03.76 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4179 postgres 16 0 2432m 105m 102m R 5 0.7 0:01.11 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4125 postgres 17 0 2436m 398m 391m R 4 2.5 0:05.62 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4162 postgres 16 0 2432m 125m 122m R 4 0.8 0:01.01 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 217m S 8 1.4 0:06.83 postgres: postgres dbEnterpriser_09_10\n> 192.168.10. 
dbEnterpriser_09_10 192.168.10.\n> 4061 postgres 16 0 2446m 491m 473m R 8 3.1 0:17.32 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4113 postgres 16 0 2432m 68m 65m R 8 0.4 0:11.03 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4071 postgres 16 0 2435m 200m 194m R 7 1.3 0:13.69 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4169 postgres 15 0 2436m 122m 117m R 7 0.8 0:00.93 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4178 postgres 16 0 2432m 77m 75m R 7 0.5 0:00.56 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4108 postgres 16 0 2437m 301m 293m R 6 1.9 0:11.94 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4155 postgres 16 0 2438m 252m 244m S 5 1.6 0:02.80 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4190 postgres 15 0 2432m 10m 8432 R 5 0.1 0:00.71 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 3906 postgres 16 0 2433m 124m 119m R 5 0.8 0:57.28 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 3970 postgres 16 0 2442m 314m 304m R 5 2.0 0:16.43 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4130 postgres 17 0 2433m 76m 72m R 5 0.5 0:03.76 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4179 postgres 16 0 2432m 105m 102m R 5 0.7 0:01.11 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4125 postgres 17 0 2436m 398m 391m R 4 2.5 0:05.62 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4162 postgres 16 0 2432m 125m 122m R 4 0.8 0:01.01 postgres:\n> postgres dbEnterpriser_09_10 192.168.10.\n> 4185 postgres 1\n>\n> *OUTPUT OF IOSTAT 1 5 (is SAN becoming a bottleneck,shows 50% CPU usage?)\n> *\n>\n> clusternode2:~ # iostat 1 5\n> Linux 2.6.16.46-0.12-ppc64 (clusternode2) 09/24/2009 _ppc64_ (4\n> CPU)\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 16.00 0.00 0.68 0.61 10.72 71.99\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 1.08 1.96 22.54 13505448 155494808\n> sdb 0.00 0.20 0.45 1410179 3099920\n> sdc 0.00 0.05 0.01 357404 78840\n> scd0 0.00 0.00 0.00 136 0\n> sdd 12.20 77.69 343.49 535925176 2369551848\n> sde 0.00 0.00 0.00 1120 0\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 29.46 0.00 0.25 0.00 7.43 62.87\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> scd0 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n> sde 0.00 0.00 0.00 0 0\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 46.17 0.00 0.99 0.00 38.52 14.32\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> scd0 0.00 0.00 0.00 0 0\n> sdd 3.96 0.00 118.81 0 120\n> sde 0.00 0.00 0.00 0 0\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 48.88 0.00 0.99 0.00 49.88 0.25\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> scd0 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n> sde 0.00 0.00 0.00 0 0\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 47.86 0.00 2.14 0.00 50.00 0.00\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> scd0 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n> sde 0.00 0.00 0.00 0 0\n>\n>\n>\n>\n>\n> Andy Colson Wrote : ,\n> *Eww. I think that's bad. 
A connection that has a transaction open will\n> cause lots of row versions, which use up ram, and make it slower to step\n> through the table (even with an index). You really need to fix up your code\n> and make sure you commit transactions. (any statement (select, insert,\n> update) will start a new transaction that you need to explicitly commit).\n>\n> *\n> With reference to this suggestion by Andy Colson, we checked the\n> application code and found that onlyINSERT, UPDATE has COMMIT and SELECT\n> has no commit, We are using a lot of \"Ajax Suggest\" in the all the forms\n> accessed for fetching the data using SELECT statements which are not\n> explicitly commited. We have started updating the code on this.\n>\n> Thanks for this suggestion.\n>\n>\n> Again thanks to suggestion of Scott Marlowe in reducing the number of\n> connections. This was now reducted to 500 .\n>\n>\n> As i mentioned in the mail, i am restarting the database every 30 minutes.\n> I found a shell script in the wiki which could the idle in transaction pids.\n> This is the code. The code will kill all old pids in the server.\n>\n> This is the script\n>\n> /usr/bin/test `/usr/bin/pgrep -f 'idle in transaction' | \\\n>\n>\n> /usr/bin/wc -l ` -gt 20 && /usr/bin/pkill -o -f 'idle in transaction'\n>\n> and this is the link where the script was provided.\n>\n> http://wiki.dspace.org/index.php/Idle_In_Transaction_Problem\n>\n> I tried it run it as test in the server, but the script is not executing.\n> Even i see many of the \"Idle in transaction \" PIDs are showing R (RUnning\n> status) , but most of them are showing S(Sleep ) status. Please suggest\n> anyway i can resolve this idle transaction issue.\n>\n> Regards\n>\n> Shiva Raman\n>\n>\n>\n\nFor 'idle in transaction' issues, you have to fix your code.  I faced this issue couple of months back.  How good is your exception handling?  Are you rollingback/comitting your transactions while exceptions are thrown, during the course of db operations?\nHonestly I wouldn't go for these scripts which kill processes.On Thu, Sep 24, 2009 at 6:20 PM, Shiva Raman <[email protected]> wrote:\nHi Today the load observed very high load . I am pasting the top. TOP \ntop - 12:45:23 up 79 days, 14:42,  1 user,  load average: 45.84, 33.13, 25.84Tasks: 394 total,  48 running, 346 sleeping,   0 stopped,   0 zombie\nCpu(s): 49.2%us,  0.8%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.1%si, 50.0%stMem:  16133676k total, 14870736k used,  1262940k free,   475484k buffersSwap: 14466492k total,      124k used, 14466368k free, 11423616k cached\n\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                            4152 postgres  17   0 2436m 176m 171m R   16  1.1   0:03.09 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4122 postgres  17   0 2431m  20m  17m R   12  0.1   0:06.38 postgres: postgres dbEnterpriser_09_10 192.168.10. 4007 postgres  16   0 2434m  80m  75m R   11  0.5   0:26.46 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 3994 postgres  16   0 2432m 134m 132m R   10  0.9   0:43.40 postgres: postgres dbEnterpriser_09_10 192.168.10. 4166 postgres  16   0 2433m  12m 8896 R    9  0.1   0:02.71 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4110 postgres  15   0 2436m 224m 217m S    8  1.4   0:06.83 postgres: postgres dbEnterpriser_09_10 192.168.10. 
4061 postgres  16   0 2446m 491m 473m R    8  3.1   0:17.32 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4113 postgres  16   0 2432m  68m  65m R    8  0.4   0:11.03 postgres: postgres dbEnterpriser_09_10 192.168.10. 4071 postgres  16   0 2435m 200m 194m R    7  1.3   0:13.69 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4169 postgres  15   0 2436m 122m 117m R    7  0.8   0:00.93 postgres: postgres dbEnterpriser_09_10 192.168.10. 4178 postgres  16   0 2432m  77m  75m R    7  0.5   0:00.56 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4108 postgres  16   0 2437m 301m 293m R    6  1.9   0:11.94 postgres: postgres dbEnterpriser_09_10 192.168.10. 4155 postgres  16   0 2438m 252m 244m S    5  1.6   0:02.80 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4190 postgres  15   0 2432m  10m 8432 R    5  0.1   0:00.71 postgres: postgres dbEnterpriser_09_10 192.168.10. 3906 postgres  16   0 2433m 124m 119m R    5  0.8   0:57.28 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 3970 postgres  16   0 2442m 314m 304m R    5  2.0   0:16.43 postgres: postgres dbEnterpriser_09_10 192.168.10. 4130 postgres  17   0 2433m  76m  72m R    5  0.5   0:03.76 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4179 postgres  16   0 2432m 105m 102m R    5  0.7   0:01.11 postgres: postgres dbEnterpriser_09_10 192.168.10. 4125 postgres  17   0 2436m 398m 391m R    4  2.5   0:05.62 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4162 postgres  16   0 2432m 125m 122m R    4  0.8   0:01.01 postgres: postgres dbEnterpriser_09_10 192.168.10. 217m S    8  1.4   0:06.83 postgres: postgres dbEnterpriser_09_10 192.168.10. dbEnterpriser_09_10 192.168.10.\n\n 4061 postgres  16   0 2446m 491m 473m R    8  3.1   0:17.32 postgres: postgres dbEnterpriser_09_10 192.168.10. 4113 postgres  16   0 2432m  68m  65m R    8  0.4   0:11.03 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4071 postgres  16   0 2435m 200m 194m R    7  1.3   0:13.69 postgres: postgres dbEnterpriser_09_10 192.168.10. 4169 postgres  15   0 2436m 122m 117m R    7  0.8   0:00.93 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4178 postgres  16   0 2432m  77m  75m R    7  0.5   0:00.56 postgres: postgres dbEnterpriser_09_10 192.168.10. 4108 postgres  16   0 2437m 301m 293m R    6  1.9   0:11.94 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4155 postgres  16   0 2438m 252m 244m S    5  1.6   0:02.80 postgres: postgres dbEnterpriser_09_10 192.168.10. 4190 postgres  15   0 2432m  10m 8432 R    5  0.1   0:00.71 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 3906 postgres  16   0 2433m 124m 119m R    5  0.8   0:57.28 postgres: postgres dbEnterpriser_09_10 192.168.10. 3970 postgres  16   0 2442m 314m 304m R    5  2.0   0:16.43 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4130 postgres  17   0 2433m  76m  72m R    5  0.5   0:03.76 postgres: postgres dbEnterpriser_09_10 192.168.10. 4179 postgres  16   0 2432m 105m 102m R    5  0.7   0:01.11 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4125 postgres  17   0 2436m 398m 391m R    4  2.5   0:05.62 postgres: postgres dbEnterpriser_09_10 192.168.10. 4162 postgres  16   0 2432m 125m 122m R    4  0.8   0:01.01 postgres: postgres dbEnterpriser_09_10 192.168.10.\n\n 4185 postgres  1OUTPUT OF IOSTAT 1 5 (is SAN becoming a bottleneck,shows 50% CPU usage?) 
clusternode2:~ # iostat 1 5Linux 2.6.16.46-0.12-ppc64 (clusternode2)       09/24/2009      _ppc64_ (4 CPU)\navg-cpu:  %user   %nice %system %iowait  %steal   %idle          16.00    0.00    0.68    0.61   10.72   71.99Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtnsda               1.08         1.96        22.54   13505448  155494808\n\nsdb               0.00         0.20         0.45    1410179    3099920sdc               0.00         0.05         0.01     357404      78840scd0              0.00         0.00         0.00        136          0\n\nsdd              12.20        77.69       343.49  535925176 2369551848sde               0.00         0.00         0.00       1120          0avg-cpu:  %user   %nice %system %iowait  %steal   %idle          29.46    0.00    0.25    0.00    7.43   62.87\nDevice:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtnsda               0.00         0.00         0.00          0          0sdb               0.00         0.00         0.00          0          0\n\nsdc               0.00         0.00         0.00          0          0scd0              0.00         0.00         0.00          0          0sdd               0.00         0.00         0.00          0          0\n\nsde               0.00         0.00         0.00          0          0avg-cpu:  %user   %nice %system %iowait  %steal   %idle          46.17    0.00    0.99    0.00   38.52   14.32Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn\n\nsda               0.00         0.00         0.00          0          0sdb               0.00         0.00         0.00          0          0sdc               0.00         0.00         0.00          0          0\n\nscd0              0.00         0.00         0.00          0          0sdd               3.96         0.00       118.81          0        120sde               0.00         0.00         0.00          0          0\navg-cpu:  %user   %nice %system %iowait  %steal   %idle          48.88    0.00    0.99    0.00   49.88    0.25Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtnsda               0.00         0.00         0.00          0          0\n\nsdb               0.00         0.00         0.00          0          0sdc               0.00         0.00         0.00          0          0scd0              0.00         0.00         0.00          0          0\n\nsdd               0.00         0.00         0.00          0          0sde               0.00         0.00         0.00          0          0avg-cpu:  %user   %nice %system %iowait  %steal   %idle          47.86    0.00    2.14    0.00   50.00    0.00\nDevice:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtnsda               0.00         0.00         0.00          0          0sdb               0.00         0.00         0.00          0          0\n\nsdc               0.00         0.00         0.00          0          0scd0              0.00         0.00         0.00          0          0sdd               0.00         0.00         0.00          0          0\n\nsde               0.00         0.00         0.00          0          0Andy Colson Wrote :  , Eww.  I think that's bad.  A connection that has a transaction open\nwill cause lots of row versions, which use up ram, and make it slower\nto step through the table (even with an index).  You really need to fix\nup your code and make sure you commit transactions.  
(any statement\n(select, insert, update) will start a new transaction that you need to\nexplicitly commit).With reference to this suggestion by Andy Colson, we checked the application code and found that onlyINSERT, UPDATE  has COMMIT  and SELECT has no commit, We are using a lot of  \"Ajax Suggest\" in the all the forms accessed for fetching the data using SELECT statements which are not explicitly commited. We have started updating the code on this. \nThanks for this  suggestion. Again thanks to suggestion of Scott Marlowe in reducing the number of connections. This was now reducted to 500 .As i mentioned in the mail, i am restarting the database every 30 minutes. I found a shell script in the wiki which could the idle in transaction pids. This is the code. The code will kill all old pids in the server.\nThis is the script /usr/bin/test `/usr/bin/pgrep -f 'idle in transaction' | \\ /usr/bin/wc -l ` -gt 20 && /usr/bin/pkill -o -f 'idle in transaction'and this is the link where the script was provided.\nhttp://wiki.dspace.org/index.php/Idle_In_Transaction_ProblemI tried it run it as test in the server, but the script is not\nexecuting. Even i see many of the \"Idle in transaction \" PIDs are\nshowing R (RUnning status) , but most of them are showing S(Sleep )\nstatus. Please suggest anyway i can resolve this idle transaction issue. \nRegardsShiva Raman", "msg_date": "Thu, 24 Sep 2009 19:06:01 +0530", "msg_from": "Praveen DS <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Shiva Raman wrote:\n> Hi\n> \n> Today the load observed very high load . I am pasting the top.\n> \n> _*TOP *_\n> top - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84, 33.13, \n> 25.84\n> Tasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 49.2%us, 0.8%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.1%si, \n> 50.0%st\n> Mem: 16133676k total, 14870736k used, 1262940k free, 475484k buffers\n> Swap: 14466492k total, 124k used, 14466368k free, 11423616k cached\n> \n> \n> _*OUTPUT OF IOSTAT 1 5 (is SAN becoming a bottleneck,shows 50% CPU \n> usage?) *_\n> \n> clusternode2:~ # iostat 1 5\n> Linux 2.6.16.46-0.12-ppc64 (clusternode2) 09/24/2009 _ppc64_ \n> (4 CPU)\n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 16.00 0.00 0.68 0.61 10.72 71.99\n> \n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 29.46 0.00 0.25 0.00 7.43 62.87\n> \n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 46.17 0.00 0.99 0.00 38.52 14.32\n> \n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 48.88 0.00 0.99 0.00 49.88 0.25\n> \n> \n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 47.86 0.00 2.14 0.00 50.00 0.00\n> \n\nBoth top and iostat show no wait time for io. However, they both show \nwait time on the vm. You have 50% user and 50% steal, and zero% io.\n\nyou said: \"SAN becoming a bottleneck,shows 50% CPU usage?\"\n\nI'm not sure what you are looking at. SAN is like HD right? I assume \nwaiting on the SAN would show up as %iowait... yes?\n\n\n> \n> Andy Colson Wrote : ,\n> /Eww. I think that's bad. A connection that has a transaction open \n> will cause lots of row versions, which use up ram, and make it slower to \n> step through the table (even with an index). You really need to fix up \n> your code and make sure you commit transactions. 
(any statement \n> (select, insert, update) will start a new transaction that you need to \n> explicitly commit).\n> \n> /With reference to this suggestion by Andy Colson, we checked the \n> application code and found that onlyINSERT, UPDATE has COMMIT and \n> SELECT has no commit, We are using a lot of \"Ajax Suggest\" in the all \n> the forms accessed for fetching the data using SELECT statements which \n> are not explicitly commited. We have started updating the code on this.\n> \n> Thanks for this suggestion.\n> \n> \n> Again thanks to suggestion of Scott Marlowe in reducing the number of \n> connections. This was now reducted to 500 .\n> \n> \n> As i mentioned in the mail, i am restarting the database every 30 \n> minutes. I found a shell script in the wiki which could the idle in \n> transaction pids. This is the code. The code will kill all old pids in \n> the server.\n> \n> This is the script\n> \n> /usr/bin/test `/usr/bin/pgrep -f 'idle in transaction' | \\\n> \n> \n> /usr/bin/wc -l ` -gt 20 && /usr/bin/pkill -o -f 'idle in transaction'\n> \n> and this is the link where the script was provided.\n> \n> http://wiki.dspace.org/index.php/Idle_In_Transaction_Problem\n> \n> I tried it run it as test in the server, but the script is not \n> executing. Even i see many of the \"Idle in transaction \" PIDs are \n> showing R (RUnning status) , but most of them are showing S(Sleep ) \n> status. Please suggest anyway i can resolve this idle transaction issue.\n\nfixing up the code to commit selects will make the \"idle in trans.\" go \naway. I'm with Praveen, fix the code, avoid the scripts.\n\nIs there anything else running on this box? You said previously \"The \nPowerPC cpu is having some virtual layer that is shown in the Steal \nvalue.\". I'm not sure what that means. Are you in a virtual machine? \nOr running other vm's? Based on the top you posted (this one and the \nvery first one) you are loosing half your cpu to the vm. (unless I'm \ntotally reading this wrong... I don't have experience with vm's so \nplease someone jump in here and correct me if I'm wrong)\n\n\n-Andy\n", "msg_date": "Thu, 24 Sep 2009 10:32:21 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Andy Colson wrote:\n> Shiva Raman wrote:\n>> Hi\n>>\n>> Today the load observed very high load . I am pasting the top.\n>>\n>> _*TOP *_\n>> top - 12:45:23 up 79 days, 14:42, 1 user, load average: 45.84,\n>> 33.13, 25.84\n>> Tasks: 394 total, 48 running, 346 sleeping, 0 stopped, 0 zombie\n>> Cpu(s): 49.2%us, 0.8%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, \n>> 0.1%si, 50.0%st\n>> Mem: 16133676k total, 14870736k used, 1262940k free, 475484k buffers\n>> Swap: 14466492k total, 124k used, 14466368k free, 11423616k cached\n>>\n>> /usr/bin/wc -l ` -gt 20 && /usr/bin/pkill -o -f 'idle in transaction'\n>>\n>> and this is the link where the script was provided.\n>>\n>> http://wiki.dspace.org/index.php/Idle_In_Transaction_Problem\n>>\n>> I tried it run it as test in the server, but the script is not\n>> executing. Even i see many of the \"Idle in transaction \" PIDs are\n>> showing R (RUnning status) , but most of them are showing S(Sleep )\n>> status. Please suggest anyway i can resolve this idle transaction issue.\n>\n> fixing up the code to commit selects will make the \"idle in trans.\" go\n> away. I'm with Praveen, fix the code, avoid the scripts.\n>\n> Is there anything else running on this box? 
You said previously \"The\n> PowerPC cpu is having some virtual layer that is shown in the Steal\n> value.\". I'm not sure what that means. Are you in a virtual machine?\n> Or running other vm's? Based on the top you posted (this one and the\n> very first one) you are loosing half your cpu to the vm. (unless I'm\n> totally reading this wrong... I don't have experience with vm's so\n> please someone jump in here and correct me if I'm wrong)\n>\n\"idle in transaction\" processes will DESTROY throughput over time.\n\nDon't kill them - find out how they're happening. They should NOT happen.\n\nIf you take an exception in an application it is essential that the\napplication NOT leave pending transactions open. If your middleware\nbetween application and Postgres doesn't take care of this cleanup on\nexit on its own (or if it would if you left through an \"approved\" path\nbut you're doing something like SEGVing out of a compiled app or calling\nexit() without closing open connections, etc) you need to figure out\nwhere you're getting these exceptions from and fix them.\n\nHacks like killing \"idle in transaction\" processes will eventually bite\nyou by killing a process that is TEMPORARILY idle while waiting for some\nresource but the check \"catches it\" at exactly the wrong time, whacking\na perfectly good change. At best this returns an error to the user; at\nworst, especially in a web-based application, it can result in a\nsilently-lost transaction.\n\n-- Karl", "msg_date": "Thu, 24 Sep 2009 10:48:34 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": ">From: Shiva Raman\n>Subject: Re: [PERFORM] High CPU load on Postgres Server during Peak\ntimes!!!!\n>\n>Andy Colson Wrote : , \n>>Eww. I think that's bad. A connection that has a transaction open will\ncause lots of row versions, \n>>which use up ram, and make it slower to step through the table (even with\nan index). You really need \n>>to fix up your code and make sure you commit transactions. (any statement\n(select, insert, update) will \n>>start a new transaction that you need to explicitly commit).\n>\n>With reference to this suggestion by Andy Colson, we checked the\napplication code and found that only\n>INSERT, UPDATE has COMMIT and SELECT has no commit, We are using a lot of\n\"Ajax Suggest\" in the all \n>the forms accessed for fetching the data using SELECT statements which are\nnot explicitly committed. \n>We have started updating the code on this. \n\nYou need a COMMIT for every BEGIN. If you just run a SELECT statement\nwithout first beginning a transaction, then you should not end up with a\nconnection that is Idle in Transaction. If you are beginning a transaction,\ndoing a select, and then not committing, then yes that is a bug.\n\nDave\n\n\n\t\n\n", "msg_date": "Thu, 24 Sep 2009 11:08:15 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Dave Dutcher wrote:\n>> From: Shiva Raman\n>> Subject: Re: [PERFORM] High CPU load on Postgres Server during Peak\n>> \n> not explicitly committed. \n> \n>> We have started updating the code on this. \n>> \n>\n> You need a COMMIT for every BEGIN. If you just run a SELECT statement\n> without first beginning a transaction, then you should not end up with a\n> connection that is Idle in Transaction. 
If you are beginning a transaction,\n> doing a select, and then not committing, then yes that is a bug.\n>\n> Dave\n>\n> \nDave is correct. A SELECT without a BEGIN in front of it will not begin\na transaction. Atomic SELECTs (that is, those not intended to return\nrows that will then be updated or deleted, etc.) does not need and\nshould NOT have a BEGIN in front of it.\n\nAny block of statements that must act in an atomic fashion must have a\nBEGIN/COMMIT or BEGIN/ROLLBACK block around them to guarantee atomic\nresults across statements; any time you issue a BEGIN you MUST issue\neither a ROLLBACK or COMMIT. Exiting SOUNDS safe (and if the connection\nis truly dropped it is as that will implicitly roll back any uncommitted\ntransaction) BUT in a pooled connection environment it leads to exactly\nwhat you're seeing here.\n\nIt is a serious mistake to leave open transactions active in a session\nas that leaves multiple copies of rows and the support data necessary to\nhandle them either in memory, on disk or both. When the working set of\nall postgresql instances reaches the physical memory limit and the\nsystem starts to page performance will go straight in the toilet.\n\n-- Karl", "msg_date": "Thu, 24 Sep 2009 12:55:50 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Dave Dutcher wrote:\n> You need a COMMIT for every BEGIN. If you just run a SELECT statement\n> without first beginning a transaction, then you should not end up with a\n> connection that is Idle in Transaction. If you are beginning a transaction,\n> doing a select, and then not committing, then yes that is a bug.\n\nThe BEGIN can be hidden, though. For example, if the application is written in Perl,\n\n $dbh = DBI->connect($dsn, $user, $pass, {AutoCommit => 0});\n\nwill automatically start a transaction the first time you do anything. Under the covers, the Perl DBI issues the BEGIN for you, and you have to do an explicit\n\n $dbh->commit();\n\nto commit it.\n\nCraig\n\n\n", "msg_date": "Thu, 24 Sep 2009 11:45:28 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Hi Gerhard\n\n Thanks for the mail\n\nOn Thu, Sep 24, 2009 at 7:19 PM, Gerhard Wiesinger <[email protected]>wrote:\n\n> Hello Shiva,\n>\n> What I see from top (0.0%wa) you don't have any I/O problem but a major CPU\n> problem. But this is contrast to iostat where up to 50% of iowait is there\n> (sometimes).\n>\n> I think you have 2 problems:\n> 1.) Client applications which don't close the connection. If the\n> applications wants persistent connections (for performance reasons), then\n> idle postgresql processes are ok. A better approach would be some kind of\n> connection pool. What programming language do you use on the web tier?\n>\n\nI am using connection pooling on Tomcat Web Server . Total of 500\nconnections are configured to be handled in the connection pool.\n\n\n\n> 2.) Find out queries which produce the high CPU load. (e.g. pg_top). I\n> guess there are some very suboptimal queries there. (I guess some indexes\n> are missing).\n>\nYou could e.g. set\n> log_min_duration_statement = 50 # 50ms, all slower queries are logged\n>\n> I enabled the min duration statement and i found that allmost ninety\npercent of queries are logged which has duration more thatn 50. 
Most of the\nqueries ranges between 50 and 500.\nCertain Select queuries duration are between 1000 and 2500. And for report\nqueries with more than 3 lakh and 1 lakh rows , the queries takes more than\n6000 ms.\n\n\nAnd: Idle connection don't take any I/O and CPU, just memory resources (and\n> very small network resources).\n>\n> And IHMO killing database processes isn't a solution to your problem.\n> Database server should nearly never be restarted.\n>\n> Ciao,\n> Gerhard\n>\n\n\nRegards\n\nShiva Raman\n\n>\n>\n>\n\nHi Gerhard  Thanks for the mail On Thu, Sep 24, 2009 at 7:19 PM, Gerhard Wiesinger <[email protected]> wrote:\n\n\nHello Shiva,\n\nWhat I see from top (0.0%wa) you don't have any I/O problem but a major\nCPU problem. But this is contrast to iostat where up to 50% of iowait\nis there (sometimes).\n\nI think you have 2 problems:\n1.) Client applications which don't close the connection. If the\napplications wants persistent connections (for performance reasons),\nthen idle postgresql processes are ok. A better approach would be some\nkind of connection pool. What programming language do you use on the\nweb tier?I am using connection pooling on Tomcat Web Server . Total of 500 connections are configured to be handled in the connection pool.  \n\n2.) Find out queries which produce the high CPU load. (e.g. pg_top). I\nguess there are some very suboptimal queries there. (I guess some\nindexes are missing). \n\nYou could e.g. set\nlog_min_duration_statement = 50 # 50ms, all slower queries are\nlogged\nI enabled the min duration statement and i found that allmost ninety percent of queries are logged which has duration more thatn 50. Most of the queries ranges between 50 and 500.Certain Select queuries duration are between 1000 and 2500. And for  report queries with more than 3 lakh and 1 lakh rows , the queries takes more than 6000 ms. \n\nAnd: Idle connection don't take any I/O and CPU, just memory resources\n(and very small network resources).\n\nAnd IHMO killing database processes isn't a solution to your problem.\nDatabase server should nearly never be restarted.\n\nCiao,\nGerhardRegardsShiva Raman", "msg_date": "Fri, 25 Sep 2009 12:47:43 +0530", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Hi Gerhard\n I also found the pg_log has 73 G of data .\n\nclusternode2:/var/lib/pgsql/data # du -sh pg_log/\n73G pg_log/\n\nIs it necessary to keep this Log files? Can i backup the logs and delete it\nfrom the original directory ? Is this logs files necessary in case any data\nrecovery to be done ?\nI am database dumps every day .\n pg_xlog and pg_clog has nearly less than 25 Mb of data only.\n\n\nRegds\n\nShiva Raman\n\nHi Gerhard  I also found the pg_log has 73 G of data . clusternode2:/var/lib/pgsql/data # du -sh pg_log/73G     pg_log/Is it necessary to keep this Log files? Can i backup the logs and delete it from the original directory ? Is this logs files necessary in case any data recovery to be done ? \nI am database dumps every day . pg_xlog and pg_clog has nearly less than 25 Mb of data only. RegdsShiva Raman", "msg_date": "Fri, 25 Sep 2009 13:36:46 +0530", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" 
}, { "msg_contents": "On Fri, Sep 25, 2009 at 9:06 AM, Shiva Raman <[email protected]> wrote:\n\n> Hi Gerhard\n> I also found the pg_log has 73 G of data .\n>\n> clusternode2:/var/lib/pgsql/data # du -sh pg_log/\n> 73G pg_log/\n>\n> Is it necessary to keep this Log files? Can i backup the logs and delete it\n> from the original directory ? Is this logs files necessary in case any data\n> recovery to be done ?\n> I am database dumps every day .\n>\nyou're probably logging too much. Change level of logging (log_statement to\nddl for instance), and do 'pg_ctl reload'\n\n\n\n-- \nGJ\n\nOn Fri, Sep 25, 2009 at 9:06 AM, Shiva Raman <[email protected]> wrote:\nHi Gerhard  I also found the pg_log has 73 G of data . clusternode2:/var/lib/pgsql/data # du -sh pg_log/73G     pg_log/Is it necessary to keep this Log files? Can i backup the logs and delete it from the original directory ? Is this logs files necessary in case any data recovery to be done ? \n\nI am database dumps every day .you're probably logging too much. Change level of logging (log_statement to ddl for instance), and do 'pg_ctl reload' \n-- GJ", "msg_date": "Fri, 25 Sep 2009 09:16:25 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "As suggested, i had changed the log_statement='ddl' and now it is logging\nonly\nthe ddl statements . thanks for the tip.\n Can i delete the old log files in pg_log after backing up as zip archive ?\nis it neccesary to keep those log files ?\n\nRegards\n\nShiva Raman\n\n>\n> 2009/9/25 Grzegorz Jaśkiewicz <[email protected]>\n>\n> On Fri, Sep 25, 2009 at 9:06 AM, Shiva Raman <[email protected]>wrote:\n>\n>> Hi Gerhard\n>> I also found the pg_log has 73 G of data .\n>>\n>> clusternode2:/var/lib/pgsql/data # du -sh pg_log/\n>> 73G pg_log/\n>>\n>> Is it necessary to keep this Log files? Can i backup the logs and delete\n>> it from the original directory ? Is this logs files necessary in case any\n>> data recovery to be done ?\n>> I am database dumps every day .\n>>\n> you're probably logging too much. Change level of logging (log_statement to\n> ddl for instance), and do 'pg_ctl reload'\n>\n>\n>\n> --\n> GJ\n>\n\nAs suggested, i had changed the log_statement='ddl' and now it is logging onlythe ddl statements . thanks for the tip.  Can i delete the old log files in pg_log after backing up as zip archive ? is it neccesary to keep those log files ? \nRegardsShiva Raman 2009/9/25 Grzegorz Jaśkiewicz <[email protected]>\nOn Fri, Sep 25, 2009 at 9:06 AM, Shiva Raman <[email protected]> wrote:\n\nHi Gerhard  I also found the pg_log has 73 G of data . clusternode2:/var/lib/pgsql/data # du -sh pg_log/73G     pg_log/Is it necessary to keep this Log files? Can i backup the logs and delete it from the original directory ? Is this logs files necessary in case any data recovery to be done ? \n\n\nI am database dumps every day .you're probably logging too much. Change level of logging (log_statement to ddl for instance), and do 'pg_ctl reload' \n\n-- GJ", "msg_date": "Fri, 25 Sep 2009 14:25:23 +0530", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "2009/9/25 Shiva Raman <[email protected]>\n\n> As suggested, i had changed the log_statement='ddl' and now it is logging\n> only\n> the ddl statements . 
thanks for the tip.\n> Can i delete the old log files in pg_log after backing up as zip archive ?\n> is it neccesary to keep those log files ?\n>\n\nthey're yours, you can do whatever you wish with em.\npg_logs are just textual log files.\n\npg_xlogs on the other hand, you should never touch (unless using logs\nstorage/shipment for backups/replication).\n\n\n\n\n-- \nGJ\n\n2009/9/25 Shiva Raman <[email protected]>\nAs suggested, i had changed the log_statement='ddl' and now it is logging onlythe ddl statements . thanks for the tip.  Can i delete the old log files in pg_log after backing up as zip archive ? is it neccesary to keep those log files ? \nthey're yours, you can do whatever you wish with em. pg_logs are just textual log files. pg_xlogs on the other hand, you should never touch (unless using logs storage/shipment for backups/replication).\n -- GJ", "msg_date": "Fri, 25 Sep 2009 10:03:05 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Hello Craig,\n\nAre you sure this is correct?\n\nThe test program (see below) with autocommit=0 counts up when an insert is \ndone in \nanother session and there is no commit done.\n\nI think with each new select a new implicit transaction is done when no \nexplicit \"BEGIN\" has been established.\n\nCan one confirm this behavior?\n\nThnx.\n\nCiao,\nGerhard\n\n# Disable autocommit!\nmy $dbh = DBI->connect($con, $dbuser, $dbpass, {RaiseError => 1, \nAutoCommit=>0}) || die \"Unable to access Database '$dbname' on host \n'$dbhost' as user '$dbuser'. Error returned was: \". $DBI::errstr .\"\";\n\nmy $sth = $dbh->prepare('SELECT COUNT(*) FROM employee;');\n\nfor (;;)\n{\n $sth->execute();\n my ($count) = $sth->fetchrow();\n print \"count=$count\\n\";\n $sth->finish();\n# $dbh->commit;\n sleep(3);\n}\n\n$dbh->disconnect;\n\n--\nhttp://www.wiesinger.com/\n\n\nOn Thu, 24 Sep 2009, Craig James wrote:\n\n> Dave Dutcher wrote:\n>> You need a COMMIT for every BEGIN. If you just run a SELECT statement\n>> without first beginning a transaction, then you should not end up with a\n>> connection that is Idle in Transaction. If you are beginning a \n>> transaction,\n>> doing a select, and then not committing, then yes that is a bug.\n>\n> The BEGIN can be hidden, though. For example, if the application is written \n> in Perl,\n>\n> $dbh = DBI->connect($dsn, $user, $pass, {AutoCommit => 0});\n>\n> will automatically start a transaction the first time you do anything. Under \n> the covers, the Perl DBI issues the BEGIN for you, and you have to do an \n> explicit\n>\n> $dbh->commit();\n>\n> to commit it.\n>\n> Craig\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 25 Sep 2009 15:00:18 +0200 (CEST)", "msg_from": "Gerhard Wiesinger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak\n times!!!!" }, { "msg_contents": "Gerhard Wiesinger wrote:\n> Hello Craig,\n> \n> Are you sure this is correct?\n> \n> The test program (see below) with autocommit=0 counts up when an insert \n> is done in another session and there is no commit done.\n> \n> I think with each new select a new implicit transaction is done when no \n> explicit \"BEGIN\" has been established.\n\nSorry, I should have been more specific. 
A transaction starts when you do something that will alter data in the database, such as insert, update, alter table, create sequence, and so forth. The Perl DBI won't start a transaction for a select.\n\nBut my basic point is still valid: Some languages like Perl can implicitely start a transaction, so if programmers aren't familiar with this behavior, they can accidentally create long-running transactions.\n\nCraig\n\n \n> Can one confirm this behavior?\n> \n> Thnx.\n> \n> Ciao,\n> Gerhard\n> \n> # Disable autocommit!\n> my $dbh = DBI->connect($con, $dbuser, $dbpass, {RaiseError => 1, \n> AutoCommit=>0}) || die \"Unable to access Database '$dbname' on host \n> '$dbhost' as user '$dbuser'. Error returned was: \". $DBI::errstr .\"\";\n> \n> my $sth = $dbh->prepare('SELECT COUNT(*) FROM employee;');\n> \n> for (;;)\n> {\n> $sth->execute();\n> my ($count) = $sth->fetchrow();\n> print \"count=$count\\n\";\n> $sth->finish();\n> # $dbh->commit;\n> sleep(3);\n> }\n> \n> $dbh->disconnect;\n> \n> -- \n> http://www.wiesinger.com/\n> \n> \n> On Thu, 24 Sep 2009, Craig James wrote:\n> \n>> Dave Dutcher wrote:\n>>> You need a COMMIT for every BEGIN. If you just run a SELECT statement\n>>> without first beginning a transaction, then you should not end up with a\n>>> connection that is Idle in Transaction. If you are beginning a \n>>> transaction,\n>>> doing a select, and then not committing, then yes that is a bug.\n>>\n>> The BEGIN can be hidden, though. For example, if the application is \n>> written in Perl,\n>>\n>> $dbh = DBI->connect($dsn, $user, $pass, {AutoCommit => 0});\n>>\n>> will automatically start a transaction the first time you do \n>> anything. Under the covers, the Perl DBI issues the BEGIN for you, \n>> and you have to do an explicit\n>>\n>> $dbh->commit();\n>>\n>> to commit it.\n>>\n>> Craig\n>>\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list \n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n> \n\n", "msg_date": "Fri, 25 Sep 2009 08:22:16 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "Dear all\n with reference to the discussions and valuable suggestions i got from the\nlist, the code has been reviewed and updated with explicit commit . There is\na good improvement in performance .I am also planning to upgrade the\ndatabase from 8.1 to 8.3 /8.4 .\n My current OS is SLES 10 SP3 default ships with postgresql 8.1 .\n The latest version of SLES 11 ships with postgresql 8.3 version.\nI will be upgrading the Postgersql on my SLES 10 SP3 for PPC only.\nI have not seen any prebuilt RPMS of Postgresql 8.3 or 8.4 version for SLES\n10 PPC architecture .\nWhen I tried to build the PPC RPM from Source in the PowerPC, it shows lot\nof dependancies.\n I have decided to install from source - Postgres 8.3 / Postgresql-8.4.\ntar.gz .\n\nIs there any major changes/updates in my 8.1 database i have to take care\nwhile upgrading to postgresql 8.3/ 8.4 ? 
Is 8.3 or 8.4 the right version\nto upgrade from 8.1 ?\n\nPlease let me know your suggestions.\n\nRegards\n\nShiva Raman .\n\n\nOn Fri, Sep 25, 2009 at 8:52 PM, Craig James <[email protected]>wrote:\n\n> Gerhard Wiesinger wrote:\n>\n>> Hello Craig,\n>>\n>> Are you sure this is correct?\n>>\n>> The test program (see below) with autocommit=0 counts up when an insert is\n>> done in another session and there is no commit done.\n>>\n>> I think with each new select a new implicit transaction is done when no\n>> explicit \"BEGIN\" has been established.\n>>\n>\n> Sorry, I should have been more specific. A transaction starts when you do\n> something that will alter data in the database, such as insert, update,\n> alter table, create sequence, and so forth. The Perl DBI won't start a\n> transaction for a select.\n>\n> But my basic point is still valid: Some languages like Perl can implicitely\n> start a transaction, so if programmers aren't familiar with this behavior,\n> they can accidentally create long-running transactions.\n>\n> Craig\n>\n>\n>\n> Can one confirm this behavior?\n>>\n>> Thnx.\n>>\n>> Ciao,\n>> Gerhard\n>>\n>> # Disable autocommit!\n>> my $dbh = DBI->connect($con, $dbuser, $dbpass, {RaiseError => 1,\n>> AutoCommit=>0}) || die \"Unable to access Database '$dbname' on host\n>> '$dbhost' as user '$dbuser'. Error returned was: \". $DBI::errstr .\"\";\n>>\n>> my $sth = $dbh->prepare('SELECT COUNT(*) FROM employee;');\n>>\n>> for (;;)\n>> {\n>> $sth->execute();\n>> my ($count) = $sth->fetchrow();\n>> print \"count=$count\\n\";\n>> $sth->finish();\n>> # $dbh->commit;\n>> sleep(3);\n>> }\n>>\n>> $dbh->disconnect;\n>>\n>> --\n>> http://www.wiesinger.com/\n>>\n>>\n>> On Thu, 24 Sep 2009, Craig James wrote:\n>>\n>> Dave Dutcher wrote:\n>>>\n>>>> You need a COMMIT for every BEGIN. If you just run a SELECT statement\n>>>> without first beginning a transaction, then you should not end up with a\n>>>> connection that is Idle in Transaction. If you are beginning a\n>>>> transaction,\n>>>> doing a select, and then not committing, then yes that is a bug.\n>>>>\n>>>\n>>> The BEGIN can be hidden, though. For example, if the application is\n>>> written in Perl,\n>>>\n>>> $dbh = DBI->connect($dsn, $user, $pass, {AutoCommit => 0});\n>>>\n>>> will automatically start a transaction the first time you do anything.\n>>> Under the covers, the Perl DBI issues the BEGIN for you, and you have to do\n>>> an explicit\n>>>\n>>> $dbh->commit();\n>>>\n>>> to commit it.\n>>>\n>>> Craig\n>>>\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (\n>>> [email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>>\n>>\n>\n\nDear all   with reference to the discussions and valuable suggestions i got from the list, the code has been reviewed and updated with explicit commit . There is a good improvement in  performance .I am also planning to upgrade the database from 8.1 to 8.3 /8.4 . \n   My current OS is SLES 10 SP3 default ships with postgresql 8.1 .   The latest version of SLES 11 ships with postgresql 8.3 version. I will be upgrading the Postgersql on my SLES 10 SP3 for PPC only. I have not seen any prebuilt RPMS of Postgresql 8.3 or 8.4 version for SLES 10  PPC architecture . \nWhen I tried to build the PPC RPM from Source in the PowerPC, it shows lot of dependancies.    I have decided to install from source - Postgres 8.3 / Postgresql-8.4. 
tar.gz .Is there any major changes/updates in my 8.1 database  i have to take care while  upgrading to postgresql 8.3/ 8.4 ?  Is 8.3 or 8.4 the right version to upgrade from 8.1 ? \nPlease let me know your suggestions. RegardsShiva Raman . On Fri, Sep 25, 2009 at 8:52 PM, Craig James <[email protected]> wrote:\nGerhard Wiesinger wrote:\n\nHello Craig,\n\nAre you sure this is correct?\n\nThe test program (see below) with autocommit=0 counts up when an insert is done in another session and there is no commit done.\n\nI think with each new select a new implicit transaction is done when no explicit \"BEGIN\" has been established.\n\n\nSorry, I should have been more specific.  A transaction starts when you do something that will alter data in the database, such as insert, update, alter table, create sequence, and so forth.  The Perl DBI won't start a transaction for a select.\n\nBut my basic point is still valid: Some languages like Perl can implicitely start a transaction, so if programmers aren't familiar with this behavior, they can accidentally create long-running transactions.\n\nCraig\n\n\n\nCan one confirm this behavior?\n\nThnx.\n\nCiao,\nGerhard\n\n# Disable autocommit!\nmy $dbh = DBI->connect($con, $dbuser, $dbpass, {RaiseError => 1, AutoCommit=>0}) || die \"Unable to access Database '$dbname' on host '$dbhost' as user '$dbuser'. Error returned was: \". $DBI::errstr .\"\";\n\nmy $sth = $dbh->prepare('SELECT COUNT(*) FROM employee;');\n\nfor (;;)\n{\n  $sth->execute();\n  my ($count) = $sth->fetchrow();\n  print \"count=$count\\n\";\n  $sth->finish();\n#  $dbh->commit;\n  sleep(3);\n}\n\n$dbh->disconnect;\n\n-- \nhttp://www.wiesinger.com/\n\n\nOn Thu, 24 Sep 2009, Craig James wrote:\n\n\nDave Dutcher wrote:\n\nYou need a COMMIT for every BEGIN.  If you just run a SELECT statement\nwithout first beginning a transaction, then you should not end up with a\nconnection that is Idle in Transaction.  If you are beginning a transaction,\ndoing a select, and then not committing, then yes that is a bug.\n\n\nThe BEGIN can be hidden, though.  For example, if the application is written in Perl,\n\n$dbh = DBI->connect($dsn, $user, $pass, {AutoCommit => 0});\n\nwill automatically start a transaction the first time you do anything.  Under the covers, the Perl DBI issues the BEGIN for you, and you have to do an explicit\n\n$dbh->commit();\n\nto commit it.\n\nCraig\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 9 Oct 2009 12:41:26 +0530", "msg_from": "Shiva Raman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" }, { "msg_contents": "On Fri, Oct 9, 2009 at 3:11 AM, Shiva Raman <[email protected]> wrote:\n> Dear all\n>   with reference to the discussions and valuable suggestions i got from the\n> list, the code has been reviewed and updated with explicit commit . 
There is\n> a good improvement in  performance .I am also planning to upgrade the\n> database from 8.1 to 8.3 /8.4 .\n>    My current OS is SLES 10 SP3 default ships with postgresql 8.1 .\n>   The latest version of SLES 11 ships with postgresql 8.3 version.\n> I will be upgrading the Postgersql on my SLES 10 SP3 for PPC only.\n> I have not seen any prebuilt RPMS of Postgresql 8.3 or 8.4 version for SLES\n> 10  PPC architecture .\n> When I tried to build the PPC RPM from Source in the PowerPC, it shows lot\n> of dependancies.\n>    I have decided to install from source - Postgres 8.3 / Postgresql-8.4.\n> tar.gz .\n>\n> Is there any major changes/updates in my 8.1 database  i have to take care\n> while  upgrading to postgresql 8.3/ 8.4 ?  Is 8.3 or 8.4 the right version\n> to upgrade from 8.1 ?\n>\n> Please let me know your suggestions.\n\n\nThe 'big picture' issues:\n*) Test your postgresql.conf first. Some settings have changed or have\nbeen removed (like fsm).\n*) Many implicit casts to text were removed. Essentially the server is\nless tolerant of sql that many would consider buggy\n*) autovacuum is now on by default\n\nand, most importantly:\n*) sit back and enjoy the speed :-)\n\nregarding 8.3/8.4, it's a tough call. 8.4 has a better chance of\nbeing supported by in place upgrade in the future, so i'd start there.\n\nmerlin\n", "msg_date": "Fri, 9 Oct 2009 09:27:23 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU load on Postgres Server during Peak times!!!!" } ]
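A short illustration of the monitoring approach discussed in the thread above (find and fix the offending sessions rather than killing them blindly). This is only a sketch: on the 8.1-8.4 releases discussed here the relevant columns are procpid and current_query (renamed pid/state in later releases), current_query is only populated when statement tracking is enabled, and the 20-minute threshold is an arbitrary example value:

    SELECT procpid, usename, datname, query_start
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
      AND query_start < now() - interval '20 minutes'
    ORDER BY query_start;

Each row this returns points at application code (or a pooled connection) that opened a transaction and never issued COMMIT or ROLLBACK.
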
[ { "msg_contents": "Hey Everyone,\n So, I have a nice postgreSQL server (8.4) up and running our \ndatabase. I even managed to get master->slave going without trouble \nusing the excellent skytools.. however, I want to maximize speed and the \nhot updates where possible, so, I am wanting to prune unused indexes \nfrom the database.\n\n is it as simple as taking the output from ; select indexrelname from \npg_stat_user_indexes where idx_scan = 0 and idx_tup_read = 0 and \nidx_tup_fetch = 0 ;\n\n And .. dropping ?\n\n\n The reason I ask is, well, the count on that gives me 750 indexes \nwhere-as the count on all user_indexes is 1100. About 2/3rds of them are \nobsolete ? I did do an ETL from mySQL -> postgreSQL but.. that's still a \nridiculous amount of (potentially) unused indexes.\n\n Regards\n Stef\n", "msg_date": "Tue, 22 Sep 2009 10:05:30 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": true, "msg_subject": "Hunting Unused Indexes .. is it this simple ?" }, { "msg_contents": "On Tue, Sep 22, 2009 at 7:35 PM, Stef Telford <[email protected]> wrote:\n\n> Hey Everyone,\n> So, I have a nice postgreSQL server (8.4) up and running our database. I\n> even managed to get master->slave going without trouble using the excellent\n> skytools.. however, I want to maximize speed and the hot updates where\n> possible, so, I am wanting to prune unused indexes from the database.\n>\n> is it as simple as taking the output from ; select indexrelname from\n> pg_stat_user_indexes where idx_scan = 0 and idx_tup_read = 0 and\n> idx_tup_fetch = 0 ;\n>\n> And .. dropping ?\n>\n>\n> The reason I ask is, well, the count on that gives me 750 indexes\n> where-as the count on all user_indexes is 1100. About 2/3rds of them are\n> obsolete ? I did do an ETL from mySQL -> postgreSQL but.. that's still a\n> ridiculous amount of (potentially) unused indexes.\n>\n>\nYes, those numbers can be used reliably to identify unused indexes.\n\nBest regards,\n-- \nCall it Postgres\n\nEnterpriseDB http://www.enterprisedb.com\n\ngurjeet[.singh]@EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com\nTwitter: singh_gurjeet\nSkype: singh_gurjeet\n\nMail sent from my BlackLaptop device\n\nOn Tue, Sep 22, 2009 at 7:35 PM, Stef Telford <[email protected]> wrote:\n\nHey Everyone,\n   So, I have a nice postgreSQL server (8.4) up and running our database. I even managed to get master->slave going without trouble using the excellent skytools.. however, I want to maximize speed and the hot updates where possible, so, I am wanting to prune unused indexes from the database.\n\n   is it as simple as taking the output from ; select indexrelname from pg_stat_user_indexes where idx_scan = 0 and idx_tup_read = 0 and idx_tup_fetch = 0 ;\n\n   And  .. dropping ?\n\n\n   The reason I ask is, well, the count on that gives me 750 indexes where-as the count on all user_indexes is 1100. About 2/3rds of them are obsolete ? I did do an ETL from mySQL -> postgreSQL but.. that's still a ridiculous amount of (potentially) unused indexes.\nYes, those numbers can be used reliably to identify unused indexes.Best regards,-- Call it PostgresEnterpriseDB      http://www.enterprisedb.com\ngurjeet[.singh]@EnterpriseDB.comsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.comTwitter: singh_gurjeetSkype: singh_gurjeetMail sent from my BlackLaptop device", "msg_date": "Tue, 22 Sep 2009 20:05:55 +0530", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hunting Unused Indexes .. 
is it this simple ?" }, { "msg_contents": "Stef Telford wrote:\n> Hey Everyone,\n> So, I have a nice postgreSQL server (8.4) up and running our \n> database. I even managed to get master->slave going without trouble \n> using the excellent skytools.. however, I want to maximize speed and the \n> hot updates where possible, so, I am wanting to prune unused indexes \n> from the database.\n> \n> is it as simple as taking the output from ; select indexrelname from \n> pg_stat_user_indexes where idx_scan = 0 and idx_tup_read = 0 and \n> idx_tup_fetch = 0 ;\n> \n> And .. dropping ?\n> \n> \n> The reason I ask is, well, the count on that gives me 750 indexes \n> where-as the count on all user_indexes is 1100. About 2/3rds of them are \n> obsolete ? I did do an ETL from mySQL -> postgreSQL but.. that's still a \n> ridiculous amount of (potentially) unused indexes.\n> \n> Regards\n> Stef\n> \n\nDid you google that? I recall seeing some posts like that on planet \npostgres.\n\nYea, here it is:\n\nhttp://radek.cc/2009/09/05/psqlrc-tricks-indexes/\n\ngoogle turns up several for \"postgres unused indexes\". I havent read \nany of the others, not sure how good they are.\n\n-Andy\n", "msg_date": "Tue, 22 Sep 2009 09:37:57 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hunting Unused Indexes .. is it this simple ?" }, { "msg_contents": "Stef,\n\n>> is it as simple as taking the output from ; select indexrelname\n>> from pg_stat_user_indexes where idx_scan = 0 and idx_tup_read = 0 and\n>> idx_tup_fetch = 0 ;\n>>\n>> And .. dropping ?\n\nAlmost that simple. The caveat is that indexes which are only used for\nthe enforcement of unique constraints (or other constraints) don't\ncount, but you don't want to drop them because they're required for the\nconstraints to work.\n\nAlso, if you have a large index with very low (but non-zero) scans, you\nprobably want to drop that as well.\n\nFull query for that is here:\nhttp://it.toolbox.com/blogs/database-soup/finding-useless-indexes-28796\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Tue, 22 Sep 2009 16:52:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hunting Unused Indexes .. is it this simple ?" }, { "msg_contents": "I have a very busy system that takes about 9 million inserts per day and each record gets updated at least once after the insert (all for the one same table), there are other tables that get hit but not as severely. As suspected I am having a problem with table bloat. Any advice on how to be more aggressive with autovacuum? I am using 8.4.1. 
My machine has 4 Intel Xeon 3000 MHz Processors with 8 GB of Ram.\n\nCurrently I am using only defaults for autovac.\n\nshared_buffers = 768MB # min 128kB\nwork_mem = 1MB # min 64kB\nmaintenance_work_mem = 384MB\n\n\n#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#------------------------------------------------------------------------------\n#autovacuum = on\n\n#log_autovacuum_min_duration = -1\n\n\n\n#autovacuum_max_workers = 3\n#autovacuum_naptime = 1min\n#autovacuum_vacuum_threshold = 50\n\n#autovacuum_analyze_threshold = 50\n\n#autovacuum_vacuum_scale_factor = 0.2\n#autovacuum_analyze_scale_factor = 0.1\n#autovacuum_freeze_max_age = 200000000\n\n#autovacuum_vacuum_cost_delay = 20ms\n\n\n#autovacuum_vacuum_cost_limit = -1\n\n\n", "msg_date": "Sun, 28 Feb 2010 21:09:04 -0600", "msg_from": "\"Plugge, Joe R.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Autovacuum Tuning advice" }, { "msg_contents": "On Sun, Feb 28, 2010 at 8:09 PM, Plugge, Joe R. <[email protected]> wrote:\n> I have a very busy system that takes about 9 million inserts per day and each record gets updated at least once after the insert (all for the one same table), there are other tables that get hit but not as severely.  As suspected I am having a problem with table bloat.  Any advice on how to be more aggressive with autovacuum?  I am using 8.4.1.  My machine has 4 Intel Xeon  3000 MHz Processors with 8 GB of Ram.\n\nWhat kind of drive system do you have? That's far more important than\nCPU and RAM.\n\nLet's look at a two pronged attack. 1: What can you maybe do to\nreduce the number of updates for each row. if you do something like:\n\nupdate row set field1='xyz' where id=4;\nupdate row set field2='www' where id=4;\n\nAnd you can combine those updates, that's a big savings.\n\nCan you benefit from HOT updates by removing some indexes? Updating\nindexed fields can cost a fair bit more than updating indexed ones IF\nyou have a < 100% fill factor and therefore free room in each page for\na few extra rows.\n\n2: Vacuum tuning.\n\n>\n> Currently I am using only defaults for autovac.\n\nThis one:\n\n> #autovacuum_vacuum_cost_delay = 20ms\n\nis very high for a busy system with a powerful io subsystem. I run my\nproduction servers with 1ms to 4ms so they can keep up.\n\nLastly there are some settings you can make per table for autovac you\ncan look into (i.e. set cost_delay to 0 for this table), or you can\nturn off autovac for this one table and then run a regular vac with no\ncost_delay on it every minute or two.\n", "msg_date": "Sun, 28 Feb 2010 23:58:28 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum Tuning advice" }, { "msg_contents": "Joe wrote:\n\n\n> I have a very busy system that takes about 9 million inserts per day and each record gets\n> updated at least once after the insert (all for the one same table), there are other tables that\n> get hit but not as severely. As suspected I am having a problem with table bloat. Any advice\n> on how to be more aggressive with autovacuum? I am using 8.4.1. 
My machine has 4 Intel\n> Xeon 3000 MHz Processors with 8 GB of Ram.\n> \n> Currently I am using only defaults for autovac.\n> \n> shared_buffers = 768MB # min 128kB\n> work_mem = 1MB # min 64kB\n> maintenance_work_mem = 384MB\n\n<snip of default config settings>\n\n\nOperating system ?\n\nAny messages in logs ?\n\nGreg W.\n\n\n \n", "msg_date": "Sun, 28 Feb 2010 23:08:02 -0800 (PST)", "msg_from": "Greg Williamson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum Tuning advice" }, { "msg_contents": "Sorry, additional info:\n\nOS is Red Hat Enterprise Linux ES release 4 (Nahant Update 5)\n\nDISK - IBM DS4700 Array - 31 drives and 1 hot spare - RAID10 - 32MB stripe\n\nSysctl.conf\nkernel.shmmax=6442450944\nkernel.shmall=1887436\nkernel.msgmni=1024\nkernel.msgmnb=65536\nkernel.msgmax=65536\nkernel.sem=250 256000 32 1024\n\nProblem Child table: This table is partitioned so that after the data has rolled past 30 days, I can just drop the table.\n\n\n Table \"public.log_events_y2010m02\"\n Column | Type | Modifiers\n---------------+--------------------------------+-----------\n callseq | character varying(32) | not null\n eventid | character varying(40) | not null\n msgseq | character varying(32) | not null\n eventdate | timestamp(0) without time zone | not null\n hollyid | character varying(20) |\n ownerid | character varying(60) |\n spownerid | character varying(60) |\n applicationid | character varying(60) |\n clid | character varying(40) |\n dnis | character varying(40) |\n param | character varying(2000) |\n docid | character varying(40) |\nIndexes:\n \"log_events_y2010m02_pk\" PRIMARY KEY, btree (callseq, msgseq)\n \"loev_eventid_idx_y2010m02\" btree (eventid)\n \"loev_ownerid_cidx_y2010m02\" btree (ownerid, spownerid)\nCheck constraints:\n \"log_events_y2010m02_eventdate_check\" CHECK (eventdate >= '2010-02-01'::date AND eventdate < '2010-03-01'::date)\nInherits: log_events\n\n\nParent Table:\n\n Table \"public.log_events\"\n Column | Type | Modifiers\n---------------+--------------------------------+-----------\n callseq | character varying(32) | not null\n eventid | character varying(40) | not null\n msgseq | character varying(32) | not null\n eventdate | timestamp(0) without time zone | not null\n hollyid | character varying(20) |\n ownerid | character varying(60) |\n spownerid | character varying(60) |\n applicationid | character varying(60) |\n clid | character varying(40) |\n dnis | character varying(40) |\n param | character varying(2000) |\n docid | character varying(40) |\nTriggers:\n insert_log_events_trigger BEFORE INSERT ON log_events FOR EACH ROW EXECUTE PROCEDURE insert_log_events()\n\n\nschemaname | tablename | size_pretty | total_size_pretty\n------------+--------------------------------+-------------+-------------------\n public | log_events_y2010m02 | 356 GB | 610 GB\n\n\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Monday, March 01, 2010 12:58 AM\nTo: Plugge, Joe R.\nCc: [email protected]\nSubject: Re: [PERFORM] Autovacuum Tuning advice\n\nOn Sun, Feb 28, 2010 at 8:09 PM, Plugge, Joe R. <[email protected]> wrote:\n> I have a very busy system that takes about 9 million inserts per day and each record gets updated at least once after the insert (all for the one same table), there are other tables that get hit but not as severely.  As suspected I am having a problem with table bloat.  Any advice on how to be more aggressive with autovacuum?  I am using 8.4.1.  
My machine has 4 Intel Xeon  3000 MHz Processors with 8 GB of Ram.\n\nWhat kind of drive system do you have? That's far more important than\nCPU and RAM.\n\nLet's look at a two pronged attack. 1: What can you maybe do to\nreduce the number of updates for each row. if you do something like:\n\nupdate row set field1='xyz' where id=4;\nupdate row set field2='www' where id=4;\n\nAnd you can combine those updates, that's a big savings.\n\nCan you benefit from HOT updates by removing some indexes? Updating\nindexed fields can cost a fair bit more than updating indexed ones IF\nyou have a < 100% fill factor and therefore free room in each page for\na few extra rows.\n\n2: Vacuum tuning.\n\n>\n> Currently I am using only defaults for autovac.\n\nThis one:\n\n> #autovacuum_vacuum_cost_delay = 20ms\n\nis very high for a busy system with a powerful io subsystem. I run my\nproduction servers with 1ms to 4ms so they can keep up.\n\nLastly there are some settings you can make per table for autovac you\ncan look into (i.e. set cost_delay to 0 for this table), or you can\nturn off autovac for this one table and then run a regular vac with no\ncost_delay on it every minute or two.\n", "msg_date": "Mon, 1 Mar 2010 06:39:27 -0600", "msg_from": "\"Plugge, Joe R.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum Tuning advice" }, { "msg_contents": "storing all fields as varchar surely doesn't make:\n- indicies small,\n- the thing fly,\n- tables small.\n\n...\n\nstoring all fields as varchar surely doesn't make:- indicies small,- the thing fly,- tables small....", "msg_date": "Mon, 1 Mar 2010 12:50:58 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum Tuning advice" }, { "msg_contents": "Sorry, this is a “black box” application, I am bound by what they give me as far as table layout, but I fully understand the rationale. I believe this application spent its beginnings with Oracle, which explains the blanket use of VARCHAR.\r\n\r\nFrom: Grzegorz Jaśkiewicz [mailto:[email protected]]\r\nSent: Monday, March 01, 2010 6:51 AM\r\nTo: Plugge, Joe R.\r\nCc: Scott Marlowe; [email protected]\r\nSubject: Re: [PERFORM] Autovacuum Tuning advice\r\n\r\nstoring all fields as varchar surely doesn't make:\r\n- indicies small,\r\n- the thing fly,\r\n- tables small.\r\n\r\n...\r\n\n\n\n\n\n\n\n\n\n\nSorry, this is a “black box” application, I am bound by what they\r\ngive me as far as table layout, but I fully understand the rationale.  I\r\nbelieve this application spent its beginnings with Oracle, which explains the\r\nblanket use of VARCHAR.\n \n\nFrom: Grzegorz Jaśkiewicz\r\n[mailto:[email protected]] \nSent: Monday, March 01, 2010 6:51 AM\nTo: Plugge, Joe R.\nCc: Scott Marlowe; [email protected]\nSubject: Re: [PERFORM] Autovacuum Tuning advice\n\n \nstoring all fields as varchar\r\nsurely doesn't make:\r\n- indicies small,\r\n- the thing fly,\r\n- tables small.\n\r\n...", "msg_date": "Mon, 1 Mar 2010 06:57:58 -0600", "msg_from": "\"Plugge, Joe R.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum Tuning advice" } ]
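A minimal sketch of the per-table autovacuum tuning Scott mentions, using the busy partition from this thread as the example target. The syntax below is the 8.4-style storage-parameter form (earlier releases used the pg_autovacuum catalog instead), and the values are illustrative starting points only, not settings verified against this workload:

    -- vacuum this heavily updated partition aggressively, without cost-based throttling
    ALTER TABLE log_events_y2010m02 SET (
        autovacuum_vacuum_cost_delay = 0,
        autovacuum_vacuum_scale_factor = 0.05
    );

The global alternative Scott describes is simply lowering autovacuum_vacuum_cost_delay in postgresql.conf (he suggests the 1ms-4ms range) so autovacuum can keep up on a fast I/O subsystem.
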
[ { "msg_contents": "Hello postgres wizards,\n\nWe recently upgraded from 8.1.5 to 8.4\nWe have a query (slow_query.sql) which took about 9s on 8.1.5\nOn 8.4, the same query takes 17.7 minutes.\n\nThe code which generated this query is written to support the\ncalculation of arbitrary arithmetic expressions across \"variables\" and\n\"data\" within our application. The example query is a sum of three\n\"variables\", but please note that because the code supports arbitrary\narithmetic, we do not use an aggregate function like sum()\n\nWe have collected as much information as we could and zipped it up here:\n\nhttp://pgsql.privatepaste.com/download/a3SdI8j2km\n\nThank you very much in advance for any suggestions you may have,\nJared Beck\n\n-- \n------------------\nJared Beck\nWeb Developer\nSinglebrook Technology\n(607) 330-1493\[email protected]\n", "msg_date": "Wed, 23 Sep 2009 12:38:20 -0400", "msg_from": "Jared Beck <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query after upgrade to 8.4" }, { "msg_contents": "Hello postgres wizards,\n\nWe recently upgraded from 8.1.5 to 8.4\nWe have a query (slow_query.sql) which took about 9s on 8.1.5\nOn 8.4, the same query takes 17.7 minutes.\n\nThe code which generated this query is written to support the\ncalculation of arbitrary arithmetic expressions across \"variables\" and\n\"data\" within our application.  The example query is a sum of three\n\"variables\", but please note that because the code supports arbitrary\narithmetic, we do not use an aggregate function like sum()\n\nWe have collected as much information as we could and zipped it up here:\n\nhttp://pgsql.privatepaste.com/download/a3SdI8j2km\n\nThank you very much in advance for any suggestions you may have,\nJared Beck\n\n--\n------------------\nJared Beck\nWeb Developer\nSinglebrook Technology\n(607) 330-1493\[email protected]\n", "msg_date": "Wed, 23 Sep 2009 16:53:15 -0400", "msg_from": "Jared Beck <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query after upgrade to 8.4" }, { "msg_contents": "Jared --\n\nForgive the top-posting -- a challenged reader.\n\nI see this in the 8.4 analyze:\n Merge Cond: (cli.clientid = dv118488y0.clientid)\n Join Filter: ((dv118488y0.variableid = v118488y0.variableid) AND (dv118488y0.cycleid = c1.cycleid) AND (dv118488y0.unitid = u.unitid))\n -> Nested Loop Left Join (cost=33.20..9756.43 rows=731 width=38) (actual time=0.922..1215.702 rows=85459 loops=1)\n Join Filter: (dv118482y0.clientid = cli.clientid)\n -> Nested Loop (cost=33.20..697.60 rows=731 width=36) (actual time=0.843..124.942 rows=85459 loops=1)\n\nAnd am wondering about the divergent estimates vs real numbers - and you say you analyze regularly ? Do both 8.1 and 8.4 instances have the same autovac settings ? Maybe one is reacting better to daily traffic ? 
Might be some new part of the planner which is being wonky, I suppose, but I don't understand enough about it to say.\n\nMight also be some automatic casts that were eliminated between 8.1 and 8.4 -- I don't see any offhand but you should check all such values (string to int i particular).\n\nHTH,\n\nGreg W.\n\n\n\n\n----- Original Message ----\nFrom: Jared Beck <[email protected]>\nTo: [email protected]\nCc: Leon Miller-Out <[email protected]>\nSent: Wednesday, September 23, 2009 12:53:15 PM\nSubject: [PERFORM] Slow query after upgrade to 8.4\n\nHello postgres wizards,\n\nWe recently upgraded from 8.1.5 to 8.4\nWe have a query (slow_query.sql) which took about 9s on 8.1.5\nOn 8.4, the same query takes 17.7 minutes.\n\nThe code which generated this query is written to support the\ncalculation of arbitrary arithmetic expressions across \"variables\" and\n\"data\" within our application. The example query is a sum of three\n\"variables\", but please note that because the code supports arbitrary\narithmetic, we do not use an aggregate function like sum()\n\nWe have collected as much information as we could and zipped it up here:\n\nhttp://pgsql.privatepaste.com/download/a3SdI8j2km\n\nThank you very much in advance for any suggestions you may have,\nJared Beck\n\n--\n------------------\nJared Beck\nWeb Developer\nSinglebrook Technology\n(607) 330-1493\[email protected]\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \n", "msg_date": "Wed, 23 Sep 2009 16:22:45 -0700 (PDT)", "msg_from": "Greg Williamson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query after upgrade to 8.4" }, { "msg_contents": "Jared Beck <[email protected]> writes:\n> Hello postgres wizards,\n> We recently upgraded from 8.1.5 to 8.4\n> We have a query (slow_query.sql) which took about 9s on 8.1.5\n> On 8.4, the same query takes 17.7 minutes.\n\nOne thing that is hobbling the performane on 8.4 is that you have\nwork_mem set to only 1MB (you had it considerably higher on 8.1).\nThis is causing that sort step to spill to disk, which isn't helping\nits rescan performance one bit.\n\nOther things you might try include increasing join_collapse_limit\nto 12 or so, and reducing random_page_cost. The fact that the 8.1\nplan didn't completely suck indicates that your database must be\nmostly in cache, so the default random_page_cost is probably too high\nto model its behavior well.\n\nAnother thing to look into is whether you can't get it to make a\nbetter estimate for this:\n\n -> Index Scan using index_tbldata_variableid on tbldata dv118488y0 (cost=0.00..5914.49 rows=8 width=22) (actual time=1.555..209.856 rows=16193 loops=1)\n Index Cond: (variableid = 118488)\n Filter: (castbooltoint((((value)::text ~ '^-?[0-9]*([0-9]+.|.[0-9]+)?[0-9]*([Ee][-+]d*)?$'::text) AND ((value)::text <> '-'::text))) = 1)\n\nBeing off by a factor of 2000 on a first-level rowcount estimate is\nalmost inevitably a ticket to a bad join plan. I doubt that the\ncondition on variableid is being that badly estimated; the problem is\nthe filter condition. 
Whatever possessed you to take a perfectly good\nboolean condition and wrap it in \"castbooltoint(condition) = 1\"?\nI'm not sure how good the estimate would be anyway for the LIKE\ncondition, but that bit of obscurantism isn't helping.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Sep 2009 22:35:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query after upgrade to 8.4 " }, { "msg_contents": "> Hello postgres wizards,\n>\n> We recently upgraded from 8.1.5 to 8.4\n> We have a query (slow_query.sql) which took about 9s on 8.1.5\n> On 8.4, the same query takes 17.7 minutes.\n>\n> The code which generated this query is written to support the\n> calculation of arbitrary arithmetic expressions across \"variables\" and\n> \"data\" within our application. The example query is a sum of three\n> \"variables\", but please note that because the code supports arbitrary\n> arithmetic, we do not use an aggregate function like sum()\n>\n> We have collected as much information as we could and zipped it up here:\n>\n> http://pgsql.privatepaste.com/download/a3SdI8j2km\n>\n> Thank you very much in advance for any suggestions you may have,\n> Jared Beck\n\nTom Lane already replied, so I'm posting just parsed explain plans - I've\ncreated that before noticing the reply, and I think it might be useful.\n\ngood (8.1): http://explain.depesz.com/s/1dT\nbad (8.4): http://explain.depesz.com/s/seT\n\nAs you can see, the real problem is the 'index scan / sort'.\n\nregards\nTomas\n\n", "msg_date": "Thu, 24 Sep 2009 11:56:12 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Slow query after upgrade to 8.4" }, { "msg_contents": "On Wed, Sep 23, 2009 at 10:35 PM, Tom Lane <[email protected]> wrote:\n>\n> One thing that is hobbling the performane on 8.4 is that you have\n> work_mem set to only 1MB\n>\n> Other things you might try include increasing join_collapse_limit\n> to 12 or so, and reducing random_page_cost.\n>\n> Another thing to look into is whether you can't get it to make a\n> better estimate for this:\n>\n>                     ->  Index Scan using index_tbldata_variableid on tbldata dv118488y0  (cost=0.00..5914.49 rows=8 width=22) (actual time=1.555..209.856 rows=16193 loops=1)\n>                           Index Cond: (variableid = 118488)\n>                           Filter: (castbooltoint((((value)::text ~ '^-?[0-9]*([0-9]+.|.[0-9]+)?[0-9]*([Ee][-+]d*)?$'::text) AND ((value)::text <> '-'::text))) = 1)\n>\n> Being off by a factor of 2000 on a first-level rowcount estimate is\n> almost inevitably a ticket to a bad join plan.  I doubt that the\n> condition on variableid is being that badly estimated; the problem is\n> the filter condition.  Whatever possessed you to take a perfectly good\n> boolean condition and wrap it in \"castbooltoint(condition) = 1\"?\n> I'm not sure how good the estimate would be anyway for the LIKE\n> condition, but that bit of obscurantism isn't helping.\n>\n>                        regards, tom lane\n>\n\nAfter following all of Tom's suggestions, the query is now executing\nin about one minute instead of seventeen minutes. Thanks, Tom.\n\nIn case you were curious, after removing the confusing call to\ncastbooltoint() the row estimate increased from the vastly incorrect 8\nrows to the moderately incorrect 1000 rows (compared to the actual\n16193 rows)\n\nShould we try to improve statistics collection for that column\n(variableid) by using ALTER TABLE ... ALTER COLUMN ... 
SET STATISTICS?\n In other words, if the row estimate were perfect would we be likely\nto get a better plan? Or is that impossible to speculate on?\n\nThanks again. Already you've been a big help. We love postgres and\nare very happy with our upgrade to 8.4 so far!\n-Jared\n\n-- \n------------------\nJared Beck\nWeb Developer\nSinglebrook Technology\n(607) 330-1493\[email protected]\n", "msg_date": "Thu, 24 Sep 2009 08:22:52 -0400", "msg_from": "Jared Beck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query after upgrade to 8.4" }, { "msg_contents": "On Thu, Sep 24, 2009 at 8:22 AM, Jared Beck <[email protected]> wrote:\n> Should we try to improve statistics collection for that column\n> (variableid) by using ALTER TABLE ... ALTER COLUMN ... SET STATISTICS?\n\nIt's worth a try, but I'm not sure it's going to help much. The LIKE\ncondition is hard for the planner to estimate at present.\n\n>  In other words, if the row estimate were perfect would we be likely\n> to get a better plan?  Or is that impossible to speculate on?\n\nGood row estimates are the key to happiness, but I don't know whether\nit will actually change the plan in this instance.\n\n> Thanks again.  Already you've been a big help.  We love postgres and\n> are very happy with our upgrade to 8.4 so far!\n\nGlad to hear it.\n\n...Robert\n", "msg_date": "Sun, 27 Sep 2009 14:45:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query after upgrade to 8.4" } ]
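To make the suggestions in this thread concrete, here is a minimal SQL sketch. The table and column names (tbldata, variableid, value) come from the quoted plans; the setting values are illustrative guesses, not the poster's final configuration.

-- Settings suggested in the thread (values are examples only):
SET work_mem = '64MB';             -- was 1MB; keeps the large sort from spilling to disk
SET join_collapse_limit = 12;      -- default is 8; lets the planner reorder this many-table join
SET random_page_cost = 2.0;        -- default is 4.0; the database is mostly cached

-- The filter rewritten as a plain boolean condition instead of
-- castbooltoint(...) = 1, so the planner can estimate it directly:
SELECT count(*)
FROM tbldata dv
WHERE dv.variableid = 118488
  AND dv.value::text ~ '^-?[0-9]*([0-9]+.|.[0-9]+)?[0-9]*([Ee][-+]d*)?$'
  AND dv.value::text <> '-';

-- Raising the statistics target for the badly estimated column, then refreshing stats:
ALTER TABLE tbldata ALTER COLUMN variableid SET STATISTICS 500;
ANALYZE tbldata;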
[ { "msg_contents": "Hello-\n\nI've discovered that lookups on one column in one instance of my \ndatabase performs badly.\n\nThe table has columns 'email' and 'key', both of type 'character \nvarying(255)', and both with btree indices. The table has ~ 500k \nrows, and no rows of either column are blank or null, and all values \nare different.\n\n\\d users (abbreviated)\n Table \"public.users\"\n Column | Type \n| Modifiers\n----------------------+----------------------------- \n+----------------------------------------------------\n id | integer | not null \ndefault nextval('users_id_seq'::regclass)\n password | character varying(40) | not null\n email | character varying(255) | not null\n key | character varying(255) |\n...\nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (id)\n \"index_users_on_email\" UNIQUE, btree (email)\n \"users_key_index\" btree (key)\n \"xxx\" btree (email)\n\nOn the main production database, a select looking at the email column \nwinds up scanning the whole table:\n\nEXPLAIN ANALYZE SELECT * FROM users WHERE (users.email = 'example.com');\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Seq Scan on users (cost=0.00..21097.90 rows=1 width=793) (actual \ntime=186.692..186.692 rows=0 loops=1)\n Filter: ((email)::text = 'example.com'::text)\n Total runtime: 186.735 ms\n(3 rows)\n\n... where on that same database selecting on the 'key' column uses the \nindex as expected:\n\nEXPLAIN ANALYZE SELECT * FROM users WHERE (users.key = 'example.com');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using users_key_index on users (cost=0.00..6.38 rows=1 \nwidth=793) (actual time=0.021..0.021 rows=0 loops=1)\n Index Cond: ((key)::text = 'example.com'::text)\n Total runtime: 0.061 ms\n(3 rows)\n\nWe're running postgresql 8.3 on solaris with 8G of RAM on a sun X4100 \nconnected to a battery-backed sun disk shelf.\n\nselect version(); reports: PostgreSQL 8.3.3 64-bit on i386-pc- \nsolaris2.11, compiled by /opt/SUNWspro.40/SS11/bin/cc -Xa\n\nWe have test databases which are restored (pg_dump/pg_restore) backups \nof this data, and on these the select on 'email' uses the index as \nexpected.\n\nDropping and re-adding that 'index_users_on_email' had no effect.\n\nSpelunking through our logs we seem to have had this problem as far \nback as I can practically go, so I can't look at any changes that \nmight be suspicious.\n\nWe did try adding a new column (cleverly named email2) and copying the \ndata (update users set email2=email) and adding the appropriate index \nand the query performed quickly. So we can fix the immediate problem, \nbut I'd feel more comfortable understanding it.\n\nDo folks on this list have suggestions for how to further diagnose this?\n\nThanks in advance,\n-Bill Kirtley\n", "msg_date": "Wed, 23 Sep 2009 18:28:46 -0400", "msg_from": "Bill Kirtley <[email protected]>", "msg_from_op": true, "msg_subject": "Use of sequence rather than index scan for one text column on one\n\tinstance of a database" }, { "msg_contents": "Bill Kirtley <[email protected]> writes:\n> On the main production database, a select looking at the email column \n> winds up scanning the whole table:\n> ... where on that same database selecting on the 'key' column uses the \n> index as expected:\n\nThat's just bizarre. 
I assume that setting enable_seqscan = off\ndoesn't persuade it to use the index either?\n\n> Dropping and re-adding that 'index_users_on_email' had no effect.\n\nHow did you do that exactly? A regular CREATE INDEX, or did you\nuse CREATE INDEX CONCURRENTLY? If the latter, please show the output\nfrom\nselect xmin,* from pg_index where indexrelid = 'index_users_on_email'::regclass;\n\nI notice you have two indexes on email:\n\n> Indexes:\n> \"users_pkey\" PRIMARY KEY, btree (id)\n> \"index_users_on_email\" UNIQUE, btree (email)\n> \"users_key_index\" btree (key)\n> \"xxx\" btree (email)\n\nI can't think why that would be a problem, but does getting rid of\nthe \"xxx\" one make a difference?\n\n> We have test databases which are restored (pg_dump/pg_restore) backups \n> of this data, and on these the select on 'email' uses the index as \n> expected.\n\nAre the test machines using the exact same Postgres executables?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Sep 2009 22:53:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of sequence rather than index scan for one text column on one\n\tinstance of a database" }, { "msg_contents": "Bill Kirtley <[email protected]> writes:\n> For what it's worth:\n\n> select xmin,* from pg_index where indexrelid = \n> 'users_key_index'::regclass;\n> xmin | indexrelid | indrelid | indnatts | indisunique | indisprimary \n> | indisclustered | indisvalid | indcheckxmin | indisready | indkey | \n> indclass | indoption | indexprs | indpred\n> ------+------------+----------+----------+-------------+-------------- \n> +----------------+------------+--------------+------------+-------- \n> +----------+-----------+----------+---------\n> 1006 | 15076176 | 17516 | 1 | f | f \n> | f | t | f | t | 10 \n> | 10042 | 0 | |\n\nUh ... 'users_key_index'? What's that? What would actually be the most\nuseful is to compare the pg_index entries for the working and\nnon-working indexes (the ones on email and key).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Sep 2009 11:34:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of sequence rather than index scan for one text column on one\n\tinstance of a database" }, { "msg_contents": "Bill Kirtley <[email protected]> writes:\n> select xmin,* from pg_index where indexrelid = \n> 'index_users_on_email'::regclass;\n> xmin | indexrelid | indrelid | indnatts | indisunique | \n> indisprimary | indisclustered | indisvalid | indcheckxmin | indisready \n> | indkey | indclass | indoption | indexprs | indpred\n> ----------+------------+----------+----------+------------- \n> +--------------+----------------+------------+-------------- \n> +------------+--------+----------+-----------+----------+---------\n> 12651453 | 24483560 | 17516 | 1 | t | \n> f | f | t | t | t \n> | 6 | 10042 | 0 | |\n> (1 row)\n\nOkay, the basic cause of the issue is now clear: the index has\nindcheckxmin true, which means it's not usable until local\nTransactionXmin exceeds the tuple's xmin (12651453 here). This\nis all a pretty unsurprising consequence of the HOT optimizations\nadded in 8.3. The question is why that state persisted long\nenough to be a problem. Perhaps you have long-running background\ntransactions? TransactionXmin is normally the oldest XID that was\nrunning when your own transaction started, so basically the index\nisn't usable until all transactions that were running while it\nwas built complete. 
I had been thinking that this only happened\nfor concurrent index builds, but actually regular builds can be\nsubject to it as well.\n\nWe've seen some complaints about this behavior before. I wonder if\nthere's a way to work a bit harder to avoid the indcheckxmin labeling\n--- right now the code is pretty conservative about setting that bit\nif there's any chance at all of an invalid HOT chain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Sep 2009 12:26:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of sequence rather than index scan for one text column on one\n\tinstance of a database" }, { "msg_contents": "Hi Tom-\n\nThanks for the response, but I'm not sure what to do with it.\n\nAre you suggesting we might have some transaction (or residue) that's \nhanging around and causing this problem?\n\nWe do have transactions that run on the order of a couple minutes at \ntimes. In the past, under heavy db load they have piled up on top of \neach other, but as far as I can tell they've finished.\n\nIs there something more I can look at to try and diagnose this? As I \nmentioned we do have a workaround of copying the column and building \nan index on the new column ... is it time to take that step? And if \nso, should we be monitoring for this sort of thing on an ongoing basis?\n\nAre there different options we can pass to CREATE INDEX to build and \nindex that would be usable? Should we be stopping our application \nwhile applying indices (we don't always) to ensure the db is quiescent \nat the time?\n\nRegards,\n-Bill Kirtley\n\nOn Sep 24, 2009, at 12:26 PM, Tom Lane wrote:\n\n> Bill Kirtley <[email protected]> writes:\n>> select xmin,* from pg_index where indexrelid =\n>> 'index_users_on_email'::regclass;\n>> xmin | indexrelid | indrelid | indnatts | indisunique |\n>> indisprimary | indisclustered | indisvalid | indcheckxmin | \n>> indisready\n>> | indkey | indclass | indoption | indexprs | indpred\n>> ----------+------------+----------+----------+-------------\n>> +--------------+----------------+------------+--------------\n>> +------------+--------+----------+-----------+----------+---------\n>> 12651453 | 24483560 | 17516 | 1 | t |\n>> f | f | t | t | t\n>> | 6 | 10042 | 0 | |\n>> (1 row)\n>\n> Okay, the basic cause of the issue is now clear: the index has\n> indcheckxmin true, which means it's not usable until local\n> TransactionXmin exceeds the tuple's xmin (12651453 here). This\n> is all a pretty unsurprising consequence of the HOT optimizations\n> added in 8.3. The question is why that state persisted long\n> enough to be a problem. Perhaps you have long-running background\n> transactions? TransactionXmin is normally the oldest XID that was\n> running when your own transaction started, so basically the index\n> isn't usable until all transactions that were running while it\n> was built complete. I had been thinking that this only happened\n> for concurrent index builds, but actually regular builds can be\n> subject to it as well.\n>\n> We've seen some complaints about this behavior before. 
I wonder if\n> there's a way to work a bit harder to avoid the indcheckxmin labeling\n> --- right now the code is pretty conservative about setting that bit\n> if there's any chance at all of an invalid HOT chain.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Mon, 28 Sep 2009 11:54:54 -0400", "msg_from": "Bill Kirtley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use of sequence rather than index scan for one text column on one\n\tinstance of a database" } ]
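A minimal sketch of how one might check for and clear the condition described in this thread. The index names come from the thread; the REINDEX step is an assumption about how to give the index a fresh xmin, and since it takes an exclusive lock it should be run in a quiet window, ideally with no long-running transactions open.

-- Compare the pg_index entries for the working and non-working indexes:
SELECT xmin, indexrelid::regclass AS index_name, indisvalid, indcheckxmin
FROM pg_index
WHERE indexrelid IN ('index_users_on_email'::regclass,
                     'users_key_index'::regclass);

-- Look for old transactions that keep TransactionXmin behind the index's xmin
-- (column names as of 8.3):
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
ORDER BY xact_start
LIMIT 5;

-- Once those transactions have finished, rebuilding the index while the system
-- is quiescent gives it a new xmin (takes an exclusive lock on the table):
REINDEX INDEX index_users_on_email;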
[ { "msg_contents": "Hi.\n\nI have a transaction running at the database for around 20 hours .. still\nisn't done. But during the last hours it has come to the point where it\nreally hurts performance of \"other queries\".\n\nGiven pg_stat_activity output there seems to be no locks interfering but\nthe overall cpu-usage of all queries continue to rise. iowait numbers are\nalso very low.\n\nWhat can I do to make the system handle other queries better?\n\nPG: 8.2\n\n-- \nJesper\n\n", "msg_date": "Thu, 24 Sep 2009 10:27:46 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Speed while runnning large transactions. " }, { "msg_contents": "> I have a transaction running at the database for around 20 hours .. still\n> isn't done. But during the last hours it has come to the point where it\n> really hurts performance of \"other queries\".\n>\n> Given pg_stat_activity output there seems to be no locks interfering but\n> the overall cpu-usage of all queries continue to rise. iowait numbers are\n> also very low.\n>\n> What can I do to make the system handle other queries better?\n\nCan you post the query? Do you 'vacuum analyze' on a regular basis?\nYou can also post your conf-file and post the last five lines from a\n'vacuum analyze verbose'.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Thu, 24 Sep 2009 10:41:44 +0200", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions." }, { "msg_contents": "On Thu, Sep 24, 2009 at 9:27 AM, <[email protected]> wrote:\n\n> Hi.\n>\n> I have a transaction running at the database for around 20 hours .. still\n> isn't done. But during the last hours it has come to the point where it\n> really hurts performance of \"other queries\".\n>\n> Given pg_stat_activity output there seems to be no locks interfering but\n> the overall cpu-usage of all queries continue to rise. iowait numbers are\n> also very low.\n>\n> What can I do to make the system handle other queries better?\n>\n> show us explain from the query(s).\nuse select * from pg_stat_activity to find out the state query is in, and\nperhaps which one of the queries it really is.\n\n\n-- \nGJ\n\nOn Thu, Sep 24, 2009 at 9:27 AM, <[email protected]> wrote:\nHi.\n\nI have a transaction running at the database for around 20 hours .. still\nisn't done. But during the last hours it has come to the point where it\nreally hurts performance of \"other queries\".\n\nGiven pg_stat_activity output there seems to be no locks interfering but\nthe overall cpu-usage of all queries continue to rise. iowait numbers are\nalso very low.\n\nWhat can I do to make the system handle other queries better?\nshow us explain from the query(s).use select * from pg_stat_activity to find out the state query is in, and perhaps which one of the queries it really is. -- GJ", "msg_date": "Thu, 24 Sep 2009 09:44:09 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions." }, { "msg_contents": "On Thu, Sep 24, 2009 at 2:27 AM, <[email protected]> wrote:\n> Hi.\n>\n> I have a transaction running at the database for around 20 hours .. still\n> isn't done. 
But during the last hours it has come to the point where it\n> really hurts performance of \"other queries\".\n\nWhat is your transaction doing during this time?\n\n> Given pg_stat_activity output there seems to be no locks interfering but\n> the overall cpu-usage of all queries continue to rise. iowait numbers are\n> also very low.\n\nWhat does\nselect count(*) from pg_stat_activity where waiting;\nsay?\n\n> What can I do to make the system handle other queries better?\n\nReally kinda depends on what your transaction is doing.\n", "msg_date": "Thu, 24 Sep 2009 03:07:24 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions." }, { "msg_contents": "> On Thu, Sep 24, 2009 at 2:27 AM, <[email protected]> wrote:\n>> Hi.\n>>\n>> I have a transaction running at the database for around 20 hours ..\n>> still\n>> isn't done. But during the last hours it has come to the point where it\n>> really hurts performance of \"other queries\".\n>\n> What is your transaction doing during this time?\n\nIt is a massive DB-update affecting probably 1.000.000 records with a lot\nof roundtrips to the update-application during that.\n\n>> Given pg_stat_activity output there seems to be no locks interfering but\n>> the overall cpu-usage of all queries continue to rise. iowait numbers\n>> are\n>> also very low.\n>\n> What does\n> select count(*) from pg_stat_activity where waiting;\n> say?\n\nThere is no \"particular query\". No indication of locks it just seems that\nhaving the transaction open (with a lot of changes hold in it) has an\nimpact on the general performance. Even without touching the same records.\n\n>> What can I do to make the system handle other queries better?\n>\n> Really kinda depends on what your transaction is doing.\n\ninsert's, updates, delete..\n\n-- \nJesper\n\n\n", "msg_date": "Thu, 24 Sep 2009 13:35:48 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Speed while runnning large transactions." }, { "msg_contents": "> On Thu, Sep 24, 2009 at 9:27 AM, <[email protected]> wrote:\n>\n>> Hi.\n>>\n>> I have a transaction running at the database for around 20 hours ..\n>> still\n>> isn't done. But during the last hours it has come to the point where it\n>> really hurts performance of \"other queries\".\n>>\n>> Given pg_stat_activity output there seems to be no locks interfering but\n>> the overall cpu-usage of all queries continue to rise. iowait numbers\n>> are\n>> also very low.\n>>\n>> What can I do to make the system handle other queries better?\n>>\n>> show us explain from the query(s).\n> use select * from pg_stat_activity to find out the state query is in, and\n> perhaps which one of the queries it really is.\n\nI'm actively monitoring pg_stat_activity for potential problems but the\nthread is spending most of the time in the application side. The\ntransaction is holding a large set of inserts/update and delete for the\nDB.\n\n-- \nJesper\n\n\n", "msg_date": "Thu, 24 Sep 2009 13:39:30 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Speed while runnning large transactions." }, { "msg_contents": "2009/9/24 <[email protected]>:\n>> On Thu, Sep 24, 2009 at 9:27 AM, <[email protected]> wrote:\n>>\n>>> Hi.\n>>>\n>>> I have a transaction running at the database for around 20 hours ..\n>>> still\n>>> isn't done. 
But during the last hours it has come to the point where it\n>>> really hurts performance of \"other queries\".\n>>>\n>>> Given pg_stat_activity output there seems to be no locks interfering but\n>>> the overall cpu-usage of all queries continue to rise. iowait numbers\n>>> are\n>>> also very low.\n>>>\n>>> What can I do to make the system handle other queries better?\n>>>\n>>> show us explain from the query(s).\n>> use select * from pg_stat_activity to find out the state query is in, and\n>> perhaps which one of the queries it really is.\n>\n> I'm actively monitoring pg_stat_activity for potential problems but the\n> thread is spending most of the time in the application side. The\n> transaction is holding a large set of inserts/update and delete for the\n> DB.\n\nI don't think there's much you can do about this. Your other\ntransactions are probably slowing down due to accumulation of dead row\nversions that VACUUM can't collect because they are still visible to\nyour long-running transaction.\n\nYou might need to think about replicating some of your data to a\nreporting server.\n\n...Robert\n", "msg_date": "Tue, 29 Sep 2009 17:28:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions." }, { "msg_contents": "On Thu, 24 Sep 2009, [email protected] wrote:\n\n> I have a transaction running at the database for around 20 hours .. still\n> isn't done. But during the last hours it has come to the point where it\n> really hurts performance of \"other queries\".\n\nOpen transactions grab an internal resource named a snapshot that lets \nthem keep a consistent view of the database while running. If the \ntransaction runs for a long time, that snapshot gets further and further \nbehind, and it takes increasingly long to do some operations as a result. \nOne common problem is that VACUUM can't do its normal cleanup for things \nthat happened since the long running transaction began.\n\nI'm not aware of any good way to monitor or quanitify how bad snapshot \nrelated debris is accumulating, that's actually something I'd like to add \nmore visibility to one day. About all you can do is note the old \ntransaction in pg_stat_activity and presume it's potential impact \nincreases the longer the transaction is open.\n\nThere are only two good solutions here:\n\n1) Rearchitect the app with the understanding that this problem exists and \nthere's no easy way around it, breaking commits into smaller pieces.\n\n2) Test if an upgrade to PG 8.4 improves your situation. There is some \nnew code in that version (labeled in the release notes as \"Track \ntransaction snapshots more carefully\") that has improved problems in this \narea quite a bit for me. There's a bit more detail about the change at \nhttp://archives.postgresql.org/pgsql-committers/2008-05/msg00220.php , all \nof the other descriptions I found of it require a lot of internals \nknowledge to read.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 1 Oct 2009 06:09:56 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions. " }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> 2) Test if an upgrade to PG 8.4 improves your situation. There is some \n> new code in that version (labeled in the release notes as \"Track \n> transaction snapshots more carefully\") that has improved problems in this \n> area quite a bit for me. 
There's a bit more detail about the change at \n> http://archives.postgresql.org/pgsql-committers/2008-05/msg00220.php , all \n> of the other descriptions I found of it require a lot of internals \n> knowledge to read.\n\nIt's not really that complex. Pre-8.4, VACUUM would always assume that\nevery transaction still needed to be able to access now-dead rows that\nwere live as of the transaction's start. So rows deleted since the\nstart of your oldest transaction couldn't be recycled.\n\nAs of 8.4, the typical case is that an open transaction blocks deletion\nof rows that were deleted since the transaction's current *statement*\nstarted. So this makes a huge difference if you have long-running\ntransactions that consist of a series of not-so-long statements.\nIt also means that transactions that sit \"idle in transaction\" are\nnot a hazard for VACUUM anymore --- an idle transaction doesn't\nblock deletion of anything.\n\nThe hopefully-not-typical cases where we don't do this are:\n\n1. A transaction executing in SERIALIZABLE mode still has the old\nbehavior, because it uses its first snapshot throughout the transaction.\n\n2. DECLARE CURSOR captures a snapshot, so it will block VACUUM as long\nas the cursor is open. (Or at least it's supposed to ... given\ndiscussion yesterday I fear this may be broken in 8.4 :-()\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Oct 2009 11:06:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions. " }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n \n> As of 8.4, the typical case is that an open transaction blocks\n> deletion of rows that were deleted since the transaction's current\n> *statement* started. So this makes a huge difference if you have\n> long-running transactions that consist of a series of not-so-long\n> statements. It also means that transactions that sit \"idle in\n> transaction\" are not a hazard for VACUUM anymore --- an idle\n> transaction doesn't block deletion of anything.\n \nSurely the original version of a row updated or deleted by the\nlong-running transaction must be left until the long-running\ntransaction completes; otherwise, how does ROLLBACK work? (The OP did\nmention that there were a large number of updates and deletes being\nperformed by the long-running transaction....)\n \n-Kevin\n", "msg_date": "Fri, 02 Oct 2009 15:30:14 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions." }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote: \n>> As of 8.4, the typical case is that an open transaction blocks\n>> deletion of rows that were deleted since the transaction's current\n>> *statement* started.\n\n[ BTW, of course that should have read \"blocks removal of\" ... ]\n \n> Surely the original version of a row updated or deleted by the\n> long-running transaction must be left until the long-running\n> transaction completes; otherwise, how does ROLLBACK work?\n\nRight. What I was talking about was the impact of a long-running\ntransaction on the removal of rows outdated by *other* transactions.\nThe people who hollered loudest about this seemed to often have\nlong-running read-only transactions in parallel with lots of short\nread-write transactions. 
That's the pattern that 8.4 can help with\nanyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Oct 2009 18:01:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions. " }, { "msg_contents": "On Fri, 2 Oct 2009, Tom Lane wrote:\n\n> The people who hollered loudest about this seemed to often have \n> long-running read-only transactions in parallel with lots of short \n> read-write transactions.\n\nWhich makes sense if you think about it. Long-running read-only reports \nare quite common in DBA land. I'm sure most people can think of an \nexample in businesses they work with that you can't refactor away into \nsmaller chunks, everybody seems to have their own variation on the big \novernight report. Long-running read-write transactions are much less \ncommon, and a bit more likely to break into logical chunks if you \narchitect the design right, using techniques like staging areas for bulk \noperations and write barriers for when they can be applied.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 5 Oct 2009 14:58:43 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed while runnning large transactions. " } ]
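To make the monitoring suggestions in this thread concrete, a small sketch follows. The first query is the one quoted above; the others assume 8.3 or later, since xact_start and n_dead_tup do not exist in the poster's 8.2 (there, query_start and backend_start are the closest equivalents).

-- Sessions currently blocked on locks:
SELECT count(*) FROM pg_stat_activity WHERE waiting;

-- Oldest open transactions, which are what holds back VACUUM (8.3 and later):
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;

-- Dead rows VACUUM cannot yet reclaim while those transactions stay open (8.3 and later):
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;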
[ { "msg_contents": "Hi ,\n\nI have one table my_test table . with on index created on one column .\n\n\nI have turned off the sequential scans .\n\nNow when ever i do refresh on this table or press F5 , It increase the\nsequential scans count and\nSequential tuple read count .\n\nPls help me to understand what exactly is happening ? Is it scanning the\nTable sequentially once i press refresh ?\n\nThanks,\nkeshav\n\nHi , I have one table  my_test table . with on index created on one column . I have turned off the sequential scans . Now when ever i do refresh on this table or press F5 , It increase the sequential scans count and \nSequential tuple read count . Pls help me to understand what exactly is happening ?  Is it scanning the Table sequentially once i press refresh  ?Thanks, keshav", "msg_date": "Thu, 24 Sep 2009 23:02:54 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding Sequential Scans count increase each time we press refresh\n\t." }, { "msg_contents": "Hi ,\n\nI have one table my_test table . with on index created on one column .\n\n\nI have turned off the sequential scans .\n\nNow when ever i do refresh on this table or press F5 , It increase the\nsequential scans count and\nSequential tuple read count .\n\nPls help me to understand what exactly is happening ? Is it scanning the\nTable sequentially once i press refresh ?\n\nThanks,\nkeshav\n\n\n\n\n-- \nThanks,\nKeshav Upadhyaya\n\nHi , I have one table  my_test table . with on index created on one column . I have turned off the sequential scans . Now when ever i do refresh on this table or press F5 , It increase the sequential scans count and \n\nSequential tuple read count . Pls help me to understand what exactly is happening ?  Is it scanning the Table sequentially once i press refresh  ?Thanks, keshav\n-- Thanks,Keshav Upadhyaya", "msg_date": "Thu, 24 Sep 2009 23:11:15 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding Sequential Scans count increase each time we press refresh\n\t." }, { "msg_contents": "Keshav,\n\n> I have one table my_test table . with on index created on one column .\n> \n> \n> I have turned off the sequential scans .\n> \n> Now when ever i do refresh on this table or press F5 , It increase the\n> sequential scans count and\n> Sequential tuple read count .\n\nWhat's a \"refresh\"?\n\nYou can't \"turn off\" sequential scans. You can only make the planner\nless likely to choose them. But if there's no way to get the data you\nneed other than a seqscan, it's still going to do one.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 24 Sep 2009 17:25:06 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Sequential Scans count increase each time\n\twe press refresh ." }, { "msg_contents": "On Thu, Sep 24, 2009 at 1:41 PM, keshav upadhyaya <[email protected]> wrote:\n> I have one table  my_test table . with on index created on one column .\n>\n>\n> I have turned off the sequential scans .\n>\n> Now when ever i do refresh on this table or press F5 , It increase the\n> sequential scans count and\n> Sequential tuple read count .\n>\n> Pls help me to understand what exactly is happening ?  
Is it scanning the\n> Table sequentially once i press refresh  ?\n\nAssuming by \"turned off the sequential scans\", you mean that you've set the\nconfig parameter enable_seqscan=off , note that the documentation says\n\"It's not possible to suppress sequential scans entirely, but turning\nthis variable\noff discourages the planner from using one if there are other methods\navailable.\"\n\nhttp://www.postgresql.org/docs/current/static/runtime-config-query.html\n\nIt sounds like you're accessing your Postgres database through something like\nphpPgAdmin or a similar web interface, and you're running a query like:\n SELECT * FROM mytable\n\nA query like this is going to use a sequential scan, regardless of the setting\nof enable_seqscan.\n\nJosh\n", "msg_date": "Fri, 25 Sep 2009 17:22:41 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Sequential Scans count increase each time we\n\tpress refresh ." }, { "msg_contents": "On Thu, Sep 24, 2009 at 8:25 PM, Josh Berkus <[email protected]> wrote:\n> You can't \"turn off\" sequential scans.  You can only make the planner\n> less likely to choose them.  But if there's no way to get the data you\n> need other than a seqscan, it's still going to do one.\n\nAnd that's not a bad thing. For a very small table, it's often the\nfastest method.\n\n...Robert\n", "msg_date": "Sun, 27 Sep 2009 14:38:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Sequential Scans count increase each time we\n\tpress refresh ." }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Sep 24, 2009 at 8:25 PM, Josh Berkus <[email protected]> wrote:\n>> You can't \"turn off\" sequential scans. �You can only make the planner\n>> less likely to choose them. �But if there's no way to get the data you\n>> need other than a seqscan, it's still going to do one.\n\n> And that's not a bad thing. For a very small table, it's often the\n> fastest method.\n\nProbably more to the point: if the query involves fetching the whole\ntable, it's *always* the fastest method. (Except maybe if you want\nthe results sorted, and often it's the fastest way even so.) Indexes\nare not a panacea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 27 Sep 2009 15:17:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Sequential Scans count increase each time we press\n\trefresh ." } ]
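A short illustration of the behaviour described above, using the poster's my_test table; the indexed column name (indexed_col) is made up for the example.

-- enable_seqscan only discourages sequential scans, it cannot forbid them:
SET enable_seqscan = off;

-- A whole-table select has no alternative to a sequential scan:
EXPLAIN SELECT * FROM my_test;

-- A selective predicate on the indexed column can use the index instead:
EXPLAIN SELECT * FROM my_test WHERE indexed_col = 42;

-- The counters that grow on every refresh come from the statistics collector:
SELECT seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
WHERE relname = 'my_test';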
[ { "msg_contents": "Is there any practical limit to the number of shared buffers PG 8.3.7 \ncan handle before more becomes counter-productive? I remember the \nbuffer management algorithm used to get unhappy with too many buffers \nand past a certain point performance dropped with extra memory \npitched at Postgres.\n\nMy production DB's around 200G, and the box hosting it has 192G of \nmemory on it, running a 64 bit AIX build of 8.3.7. I'm currently \nrunning with 15G of shared buffers, and while performance is pretty \ngood, things still hit the disk more than I'd like. I can easily bump \nthe shared buffer setting up to 40G or so without affecting anything \nelse that matters.\n\nThe box runs other things as well as the database, so the OS buffer \ncache tends to get effectively flushed -- permanently pinning more of \nthe database in memory would be an overall win for DB performance, \nassuming bad things don't happen because of buffer management. \n(Unfortunately I've only got a limited window to bounce the server, \nso I can't do too much in the way of experimentation with buffer \nsizing)\n-- \n\t\t\t\tDan\n\n--------------------------------------it's like this-------------------\nDan Sugalski even samurai\[email protected] have teddy bears and even\n teddy bears get drunk\n", "msg_date": "Thu, 24 Sep 2009 23:21:55 -0400", "msg_from": "Dan Sugalski <[email protected]>", "msg_from_op": true, "msg_subject": "PG 8.3 and large shared buffer settings" }, { "msg_contents": "Dan Sugalski <[email protected]> writes:\n> Is there any practical limit to the number of shared buffers PG 8.3.7 \n> can handle before more becomes counter-productive?\n\nProbably, but I've not heard any definitive measurements showing an\nupper limit. The traditional wisdom of limiting it to 1G or so dates\nfrom before the last rounds of revisions to the bufmgr logic.\n\n> My production DB's around 200G, and the box hosting it has 192G of \n> memory on it, running a 64 bit AIX build of 8.3.7.\n\nYowza. You might be able to do measurements that no one has done\nbefore. Let us know what you find out.\n\nBTW, does AIX have any provision for locking shared memory into RAM?\nOne of the gotchas for large shared memory has always been the risk\nthat the kernel would decide to swap some of it out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 25 Sep 2009 00:36:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings " }, { "msg_contents": "At 12:36 AM -0400 9/25/09, Tom Lane wrote:\n>Dan Sugalski <[email protected]> writes:\n>> Is there any practical limit to the number of shared buffers PG 8.3.7\n>> can handle before more becomes counter-productive?\n>\n>Probably, but I've not heard any definitive measurements showing an\n>upper limit. The traditional wisdom of limiting it to 1G or so dates\n>from before the last rounds of revisions to the bufmgr logic.\n\nExcellent.\n\n> > My production DB's around 200G, and the box hosting it has 192G of\n>> memory on it, running a 64 bit AIX build of 8.3.7.\n>\n>Yowza. You might be able to do measurements that no one has done\n>before. Let us know what you find out.\n\n:) It's a machine of non-trivial size, to be sure. I'll give the \nbuffer setting a good bump and see how it goes. 
I may be able to take \none of the slony replicas off-line the next holiday and run some \nperformance tests, but that won't be for a while.\n\n>BTW, does AIX have any provision for locking shared memory into RAM?\n>One of the gotchas for large shared memory has always been the risk\n>that the kernel would decide to swap some of it out.\n\nI'll have to go check, but I think it does. This box hasn't actually \nhit swap since it started -- a good chunk of that RAM is used as \nsemi-permanent disk cache but unfortunately the regular day-to-day \nuse of this box (they won't let me have it as a dedicated DB-only \nmachine. Go figure :) doing other stuff the cache tends to turn over \npretty quickly.\n-- \n\t\t\t\tDan\n\n--------------------------------------it's like this-------------------\nDan Sugalski even samurai\[email protected] have teddy bears and even\n teddy bears get drunk\n", "msg_date": "Fri, 25 Sep 2009 06:05:38 -0400", "msg_from": "Dan Sugalski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "* Dan Sugalski <[email protected]> [090925 06:06]:\n\n> I'll have to go check, but I think it does. This box hasn't actually hit \n> swap since it started -- a good chunk of that RAM is used as \n> semi-permanent disk cache but unfortunately the regular day-to-day use of \n> this box (they won't let me have it as a dedicated DB-only machine. Go \n> figure :) doing other stuff the cache tends to turn over pretty quickly.\n\nAll the more reason to find a way to use it all as shared buffers and\nlock it into ram...\n\nOh, sorry, you expect the DB to play nice with everything else?\n\n;-)\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Fri, 25 Sep 2009 09:33:09 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "That won't work well anyway because the postgres shared_buffers dos not cache things that are sequentially scanned (it uses a ring buffer for each scan). So, for any data that is only accessed by sequential scan, you're relying on the OS and the disks. If you access a table via index scan though, all its pages will go through shared_buffers.\n\nSize shared_buffers to no more than the 'hot' space of index and randomly accessed data.\n\n________________________________________\nFrom: [email protected] [[email protected]] On Behalf Of Aidan Van Dyk [[email protected]]\nSent: Friday, September 25, 2009 6:33 AM\nTo: Dan Sugalski\nCc: Tom Lane; [email protected]\nSubject: Re: [PERFORM] PG 8.3 and large shared buffer settings\n\n* Dan Sugalski <[email protected]> [090925 06:06]:\n\n> I'll have to go check, but I think it does. This box hasn't actually hit\n> swap since it started -- a good chunk of that RAM is used as\n> semi-permanent disk cache but unfortunately the regular day-to-day use of\n> this box (they won't let me have it as a dedicated DB-only machine. 
Go\n> figure :) doing other stuff the cache tends to turn over pretty quickly.\n\nAll the more reason to find a way to use it all as shared buffers and\nlock it into ram...\n\nOh, sorry, you expect the DB to play nice with everything else?\n\n;-)\n\na.\n\n--\nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Fri, 25 Sep 2009 08:53:00 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "* Scott Carey <[email protected]> [090925 11:57]:\n> That won't work well anyway because the postgres shared_buffers dos not cache things that are sequentially scanned (it uses a ring buffer for each scan). So, for any data that is only accessed by sequential scan, you're relying on the OS and the disks. If you access a table via index scan though, all its pages will go through shared_buffers.\n\nIn older version too, or only since synchronized scans got in?\n\na.\n\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Fri, 25 Sep 2009 12:15:42 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Fri, Sep 25, 2009 at 8:53 AM, Scott Carey <[email protected]> wrote:\n> That won't work well anyway because the postgres shared_buffers dos not cache\n> things that are sequentially scanned (it uses a ring buffer for each scan).  So, for\n> any data that is only accessed by sequential scan, you're relying on the OS and\n> the disks.  If you access a table via index scan though, all its pages will go through\n> shared_buffers.\n\nDoes it doe this even if the block was already in shared_buffers?\nThat seems like a serious no-no to me to read the same block into\ndifferent buffers. I thought that the sequential scan would have to\nbreak stride when it encountered a block already in buffer. But I\nhaven't looked at the code, maybe I am over analogizing to other\nsoftware I'm familiar with.\n\nJeff\n", "msg_date": "Fri, 25 Sep 2009 19:53:56 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Fri, 25 Sep 2009, Jeff Janes wrote:\n\n> Does it do this even if the block was already in shared_buffers?\n\nUsually not. The buffer ring algorithm is used to manage pages that are \nread in specifically to satisfy a sequential scan (there's a slightly \ndifferent ring method used for VACUUM too). If the buffer you need is \nalready available and not \"pinned\" (locked by someone else), it's not read \nfrom disk again. 
Instead, its usage count is incremently only if it's at \nzero (this doesn't count as a use unless it's about to be evicted as \nunused), and it's returned without being added to the ring.\n\nThere's a section about this (\"Buffer Ring Replacement Strategy\") in the \nsource code: \nhttp://git.postgresql.org/gitweb?p=postgresql.git;a=blob_plain;f=src/backend/storage/buffer/README;hb=HEAD\n\nThe commit that added the feature is at \nhttp://git.postgresql.org/gitweb?p=postgresql.git;a=commit;h=ebf3d5b66360823edbdf5ac4f9a119506fccd4c0\n\nThe basic flow of this code is that backends ask for buffers using \nBufferAlloc, which then calls StrategyGetBuffer (where the ring list is \nmanaged) only if it doesn't first find the page in the buffer cache. You \nget what you'd hope for here: a sequential scan will use blocks when \nthey're already available in the cache, while reading in less popular \nblocks that weren't cached into the temporary ring area. There's always \nthe OS cache backing the PostrgreSQL one to handle cases where the working \nset you're using is just a bit larger than shared_buffers. The ring read \nrequests may very well be satisfied by that too if there was a recent \nsequential scan the OS is still caching.\n\nYou can read a high-level summary of the algorithm used for ring \nmanagement (with an intro to buffer management in general) in my \"Inside \nthe PostgreSQL Buffer Cache\" presentation at \nhttp://www.westnet.com/~gsmith/content/postgresql/ on P10 \"Optimizations \nfor problem areas\". That doesn't specifically cover the \"what if it's in \nthe cache already?\" case though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 26 Sep 2009 10:59:00 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Thu, 24 Sep 2009, Dan Sugalski wrote:\n\n> Is there any practical limit to the number of shared buffers PG 8.3.7 can \n> handle before more becomes counter-productive?\n\nThere are actually two distinct questions here you should consider, \nbecause the popular wisdom here and what makes sense for your case might \nbe different.\n\nThe biggest shared_buffers tests I've seen come from Sun, where Jignesh \nthere saw around 10GB was the largest amount of RAM you could give to the \ndatabase before it stopped improving performance. As you guessed, there \nis a certain amount of overhead to managing the buffers involved, and as \nthe size grows the chance you'll run into locking issues or similar \nresource contention grows too.\n\nAnother problem spot are checkpoints. If you dirty a very large buffer \ncache, that whole thing will have to get dumped to disk eventually, and on \nsome workloads people have found they have to reduce shared_buffers \nspecifically to keep this from being too painful.\n\nThat's not answering your question though; what it answers is \"how large \ncan shared_buffers get before it's counterproductive compared with giving \nthe memory to OS to manage?\"\n\nThe basic design of PostgreSQL presumes that the OS buffer cache exists as \na second-chance source for cached buffers. The OS cache tends to be \noptimized to handle large numbers of buffers well, but without very much \nmemory about what's been used recently to optimize allocations and \nevictions. 
The symmetry there is one reason behind why shared_buffers \nshouldn't be most of the RAM on your system; splitting things up so that \nPG has a cut and the OS has at least as large of its own space lets the \ntwo cache management schemes complement each other.\n\n> The box runs other things as well as the database, so the OS buffer cache \n> tends to get effectively flushed -- permanently pinning more of the database \n> in memory would be an overall win for DB performance, assuming bad things \n> don't happen because of buffer management.\n\nThis means that the question you want an answer to is \"if the OS cache \nisn't really available, where does giving memory to shared_buffers becomes \nless efficient than not caching things at all?\" My guess is that this \nnumber is much larger than 10GB, but I don't think anyone has done any \ntests to try to quantify exactly where it is. Typically when people are \ntalking about systems as large as yours, they're dedicated database \nservers at that point, so the OS cache gets considered at the same time. \nIf it's effectively out of the picture, the spot where caching still helps \neven when it's somewhat inefficient due to buffer contention isn't well \nexplored.\n\nIt would depend on the app too. If you're heavily balanced toward reads \nthat don't need locks, you can certainly support a larger shared_buffers \nthan someone who is writing a lot (just the checkpoint impact alone makes \nthis true, and there's other sources for locking contention).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 26 Sep 2009 11:19:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Sat, 26 Sep 2009, Greg Smith wrote:\n\n> On Fri, 25 Sep 2009, Jeff Janes wrote:\n>\n>> Does it do this even if the block was already in shared_buffers?\n>\n> Usually not. The buffer ring algorithm is used to manage pages that are read \n> in specifically to satisfy a sequential scan (there's a slightly different \n> ring method used for VACUUM too). If the buffer you need is already \n> available and not \"pinned\" (locked by someone else), it's not read from disk \n> again. Instead, its usage count is incremently only if it's at zero (this \n> doesn't count as a use unless it's about to be evicted as unused), and it's \n> returned without being added to the ring.\n>\n\nHello Greg,\n\nWhat happens when a postmaster dies (e.g. core dump, kill -9, etc.). How \nis reference counting cleaned up and the lock removed?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n", "msg_date": "Sat, 26 Sep 2009 18:57:35 +0200 (CEST)", "msg_from": "Gerhard Wiesinger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "\n>> Is there any practical limit to the number of shared buffers PG 8.3.7 \n>> can handle before more becomes counter-productive?\n\nIt is more efficient to have the page in shared buffers, rather than doing \na context switch to the OS, copying the entire page from the OS's cache \ninto shared buffers, and coming back to postgres. Shared buffers use less \nCPU. 
However, this is totally negligible versus the disk wait time of an \nuncached IO.\n\nThe same page may be cached once in shared_buffers, and once in the OS \ncache, so if your shared buffers is half your RAM, and the other half is \ndisk cache, perhaps it won't be optimal: is stuff is cached twice, you can \ncache half as much stuff.\n\nIf your entire database can fit in shared buffers, good for you though. \nBut then a checkpoint comes, and postgres will write all dirty buffers to \ndisk in the order it finds them in Shared Buffers, which might be totally \ndifferent from the on-disk order. If you have enough OS cache left to \nabsorb these writes, the OS will reorder them. If not, lots of random \nwrites are going to occur. On a RAID5 this can be a lot of fun.\n", "msg_date": "Sat, 26 Sep 2009 19:24:16 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Sat, Sep 26, 2009 at 9:57 AM, Gerhard Wiesinger <[email protected]> wrote:\n> On Sat, 26 Sep 2009, Greg Smith wrote:\n>\n>> On Fri, 25 Sep 2009, Jeff Janes wrote:\n>>\n>>> Does it do this even if the block was already in shared_buffers?\n>>\n>> Usually not. The buffer ring algorithm is used to manage pages that are\n>> read in specifically to satisfy a sequential scan (there's a slightly\n>> different ring method used for VACUUM too). If the buffer you need is\n>> already available and not \"pinned\" (locked by someone else), it's not read\n>> from disk again. Instead, its usage count is incremently only if it's at\n>> zero (this doesn't count as a use unless it's about to be evicted as\n>> unused), and it's returned without being added to the ring.\n>>\n>\n> Hello Greg,\n>\n> What happens when a postmaster dies (e.g. core dump, kill -9, etc.). How is\n> reference counting cleaned up and the lock removed?\n\nIf a backend dies in disgrace, the master detects this and the whole\ncluster is taken down and brought back up.\n\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back\nthe current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\n\n(The DETAIL is technically accurate, but somewhat misleading. If the\ncrash to another backend happens while your backend is waiting on the\ncommit record WAL fsync to return, then while the postmaster may have\ncommanded your session to rollback, it is too late to actually do so\nand when the server comes back up and finishes recovery, you will\nprobably find that your transaction has indeed committed, assuming you\nhave some way to accurately deduce this)\n\n\nJeff\n", "msg_date": "Sat, 26 Sep 2009 11:59:43 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Sat, Sep 26, 2009 at 8:19 AM, Greg Smith <[email protected]> wrote:\n>\n> Another problem spot are checkpoints. 
If you dirty a very large buffer\n> cache, that whole thing will have to get dumped to disk eventually, and on\n> some workloads people have found they have to reduce shared_buffers\n> specifically to keep this from being too painful.\n\nHi Greg,\n\nIs this the case even if checkpoint_completion_target is set close to 1.0?\n\nIf you dirty buffers fast enough to dirty most of a huge\nshared_buffers area between checkpoints, then it seems like lowering\nthe shared_buffers wouldn't reduce the amount of I/O needed, it would\njust shift the I/O from checkpoints to the backends themselves.\n\nIt looks like checkpoint_completion_target was introduced in 8.3.0\n\nCheers,\n\nJeff\n", "msg_date": "Sat, 26 Sep 2009 12:16:52 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Sat, 26 Sep 2009, Jeff Janes wrote:\n\n> On Sat, Sep 26, 2009 at 8:19 AM, Greg Smith <[email protected]> wrote:\n>>\n>> Another problem spot are checkpoints. If you dirty a very large buffer\n>> cache, that whole thing will have to get dumped to disk eventually, and on\n>> some workloads people have found they have to reduce shared_buffers\n>> specifically to keep this from being too painful.\n>\n> Is this the case even if checkpoint_completion_target is set close to 1.0?\n\nSure. checkpoint_completion_target aims to utilize more of the space \nbetween each checkpoint by spreading them out over more of that space, but \nit alone doesn't change the fact that checkpoints are only so long. By \ndefault, you're going to get one every five minutes, and on active systems \nthey can come every few seconds if you're not aggressive with increasing \ncheckpoint_segments.\n\nSome quick math gives an idea of the scale of the problem. A single cheap \ndisk can write random I/O (which checkpoints writes often are) at 1-2MB/s; \nlet's call it 100MB/minute. That means that in 5 minutes, a single disk \nsystem might be hard pressed to write even 500MB of data out. But you can \neasily dirty 500MB in seconds nowadays. Now imagine shared_buffers is \n40GB and you've dirtied a bunch of it; how long will that take to clear \neven on a fast RAID system? It won't be quick, and the whole system will \ngrind to a halt at the end of the checkpoint as all the buffered writes \nqueued up are forced out.\n\n> If you dirty buffers fast enough to dirty most of a huge shared_buffers \n> area between checkpoints, then it seems like lowering the shared_buffers \n> wouldn't reduce the amount of I/O needed, it would just shift the I/O \n> from checkpoints to the backends themselves.\n\nWhat's even worse is that backends can be writing data and filling the OS \nbuffer cache in between checkpoints too, but all of that is forced to \ncomplete before the checkpoint can finish too. You can easily start the \ncheckpoint process with the whole OS cache filled with backend writes that \nwill slow checkpoint ones if you're not careful.\n\nBecause disks are slow, you need to get things that are written to disk as \nsoon as feasible, so the OS has more time to work on them, reorder for \nefficient writing, etc.\n\nUltimately, the sooner you get I/O to the OS cache to write, the better, \n*unless* you're going to write that same block over again before it must \ngo to disk. 
Normally you want buffers that aren't accessed often to get \nwritten out to disk early rather than linger until checkpoint time, \nthere's nothing wrong with a backend doing a write if that block wasn't \ngoing to be used again soon. The ideal setup from a latency perspective is \nthat you size shared_buffers just large enough to hold the things you \nwrite to regularly, but not so big that it caches every write.\n\n> It looks like checkpoint_completion_target was introduced in 8.3.0\n\nCorrect. Before then, you had no hope for reducing checkpoint overhead \nbut to use very small settings for shared_buffers, particularly if you \ncranked the old background writer up so that it wrote lots of redundant \ninformation too (that's was the main result of \"tuning\" it on versions \nbefore 8.3 as well).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 26 Sep 2009 17:19:11 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On 9/26/09 8:19 AM, Greg Smith wrote:\n> This means that the question you want an answer to is \"if the OS cache\n> isn't really available, where does giving memory to shared_buffers\n> becomes less efficient than not caching things at all?\" My guess is\n> that this number is much larger than 10GB, but I don't think anyone has\n> done any tests to try to quantify exactly where it is. Typically when\n> people are talking about systems as large as yours, they're dedicated\n> database servers at that point, so the OS cache gets considered at the\n> same time. If it's effectively out of the picture, the spot where\n> caching still helps even when it's somewhat inefficient due to buffer\n> contention isn't well explored.\n\nIt also depends on the filesystem. In testing at Sun and on this list,\npeople have found that very large s_b (60% of RAM) plus directIO was\nactually a win on Solaris UFS, partly because UFS isn't very agressive\nor smart about readahead and caching. On Linux/Ext3, however, it was\nnever a win.\n\nI don't know what AIX's filesystems are like.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Mon, 28 Sep 2009 10:36:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" }, { "msg_contents": "On Mon, 2009-09-28 at 10:36 -0700, Josh Berkus wrote:\n> On 9/26/09 8:19 AM, Greg Smith wrote:\n> > This means that the question you want an answer to is \"if the OS cache\n> > isn't really available, where does giving memory to shared_buffers\n> > becomes less efficient than not caching things at all?\" My guess is\n> > that this number is much larger than 10GB, but I don't think anyone has\n> > done any tests to try to quantify exactly where it is. Typically when\n> > people are talking about systems as large as yours, they're dedicated\n> > database servers at that point, so the OS cache gets considered at the\n> > same time. If it's effectively out of the picture, the spot where\n> > caching still helps even when it's somewhat inefficient due to buffer\n> > contention isn't well explored.\n> \n> It also depends on the filesystem. In testing at Sun and on this list,\n> people have found that very large s_b (60% of RAM) plus directIO was\n> actually a win on Solaris UFS, partly because UFS isn't very agressive\n> or smart about readahead and caching. 
On Linux/Ext3, however, it was\n> never a win.\n\nAgain, it depends. \n\nOn my recent testing of a simple seqscan on 1-int table, that are\nentirely in cache (either syscache or shared buffers), the shared\nbuffers only scan was 6% to 10% percent faster than when the relation\nwas entirely in system cache and each page had to be switched in via\nsyscall / context switch. \n\nThis was on Linux/Ext3 but I suspect this to be mostly independent of\nfile system.\n\nAlso, in ancient times, when I used Slony, and an early version of Slony\nat that, which did not drop and recreate indexes around initial copy,\nthe copy time could be 2 to 3 _times_ slower for large tables with lots\nof indexes when indexes were in system cache vs. when they were in\nshared buffers (if I remember correctly, it was 1G shared buffers vs. 3G\non a 4G machine). It was probably due to all kinds of index page splits\netc which shuffled index pages back and forth a lot between userspace\nand syscache. So this is not entirely read-only thing either.\n\n-- \nHannu Krosing http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability \n Services, Consulting and Training\n\n\n", "msg_date": "Mon, 28 Sep 2009 21:00:10 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and large shared buffer settings" } ]
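One way to see how much of the write traffic discussed in this thread comes from checkpoints rather than from the background writer or the backends is to watch pg_stat_bgwriter, which exists from 8.3 on. This is only a sketch; the view and column names are standard, but what counts as a problem depends entirely on the hardware:

SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

If buffers_checkpoint dominates and checkpoints_req keeps climbing, checkpoints are being forced by WAL volume rather than by the timeout. In that case raising checkpoint_segments and pushing checkpoint_completion_target toward 0.9 (ordinary postgresql.conf settings; the exact values are illustrative only) spreads the same writes over a longer window, which is the behaviour described above.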
[ { "msg_contents": "Hi,\nI have a big performance problem in my SQL select query:\n\n========================================\nselect * from event where user_id in\n(500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401,\n400,399,398,397,396,395,394,393,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337,336,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301,\n300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201,\n200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,164,163,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107,106,105,104,103,102,101,\n100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);\n========================================\n\nThe above SELECT always spends 1200ms.\n\nThe EXPLAIN ANLYSE result of it is :\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on event (cost=73685.08..5983063.49 rows=662018\nwidth=36) (actual time=24.857..242.826 rows=134289 loops=1)\n Recheck Cond: (user_id = 
ANY\n('{499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451\n,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401,400,399,398,397,396,395,394,3\n93,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337,336\n,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301,300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,2\n78,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221\n,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201,200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,164,1\n63,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107,106\n,105,104,103,102,101,100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,3\n1,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0}'::integer[]))\n -> Bitmap Index Scan on event_user_id_idx (cost=0.00..71699.03\nrows=662018 width=0) (actual time=24.610..24.610 rows=134289 loops=1)\n Index Cond: (user_id = 
ANY\n('{499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452\n,451,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401,400,399,398,397,396,395,3\n94,393,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337\n,336,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301,300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,2\n79,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222\n,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201,200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,1\n64,163,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107\n,106,105,104,103,102,101,100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,\n32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0}'::integer[]))\n\n--------------------------------------------------------------------------------------------------------\n\n\n\nMy table's structure is :\n=====================\nCREATE TABLE event (\n id integer NOT NULL,\n user_id integer NOT NULL,\n action_type integer NOT NULL,\n resource_type integer NOT NULL,\n resource_sn integer NOT NULL,\n result_type integer,\n result_sn integer,\n created_date timestamp with time zone NOT NULL\n);\n=====================\n\nAnd the table event has more than 100,000,000 rows, and I have a btree\nindex, event_user_id_idx, on user_id, the index size is 2171MB.\n\nDo anyone have good ideas to optimize this query?\n\nThanks very much.\n-- \n夏清然\nXia Qingran\[email protected]\nSent from Beijing, 11, China\nCharles de Gaulle - \"The better I get to know men, the more I find\nmyself loving dogs.\" -\nhttp://www.brainyquote.com/quotes/authors/c/charles_de_gaulle.html\n", "msg_date": "Sat, 26 Sep 2009 21:05:28 +0800", "msg_from": "Xia Qingran <[email protected]>", "msg_from_op": true, "msg_subject": "Bad performance of SELECT ... 
where id IN (...)" }, { "msg_contents": "> I have a big performance problem in my SQL select query:\n>\n> ========================================\n> select * from event where user_id in\n> (500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401,\n> 400,399,398,397,396,395,394,393,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337,336,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301,\n> 300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201,\n> 200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,164,163,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107,106,105,104,103,102,101,\n> 100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);\n> ========================================\n\nWhat happens if you change the query to\n\nselect * from event where user_id >= 0 and user_id <= 500;\n\n? :-)\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Sat, 26 Sep 2009 16:16:20 +0200", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... 
where id IN (...)" }, { "msg_contents": "\nOn 26-Sep-2009, at 10:16 PM, Claus Guttesen wrote:\n\n>> I have a big performance problem in my SQL select query:\n>>\n>> ========================================\n>> select * from event where user_id in\n>> (500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401\n>> ,\n>> 400,399,398,397,396,395,394,393,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337,336,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301\n>> ,\n>> 300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201\n>> ,\n>> 200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,164,163,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107,106,105,104,103,102,101\n>> ,\n>> 100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0\n>> );\n>> ========================================\n>\n> What happens if you change the query to\n>\n> select * from event where user_id >= 0 and user_id <= 500;\n\nor select * from event where user_id <= 500; :)\n\nBesides, your index seem quite huge >2G, and it usually takes some \ntime to process the result, even though it's already indexed with btree.\n\n\n>\n> ? :-)\n>\n> -- \n> regards\n> Claus\n>\n> When lenity and cruelty play for a kingdom,\n> the gentler gamester is the soonest winner.\n>\n> Shakespeare\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Sat, 26 Sep 2009 22:53:53 +0800", "msg_from": "Paul Ooi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... 
where id IN (...)" }, { "msg_contents": "Xia Qingran wrote:\n> Hi,\n> I have a big performance problem in my SQL select query:\n> \n> ========================================\n> select * from event where user_id in\n> (500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451,450,449,448,447,446,445,444,443,442,441,440,439,438,437,436,435,434,433,432,431,430,429,428,427,426,425,424,423,422,421,420,419,418,417,416,415,414,413,412,411,410,409,408,407,406,405,404,403,402,401,\n> 400,399,398,397,396,395,394,393,392,391,390,389,388,387,386,385,384,383,382,381,380,379,378,377,376,375,374,373,372,371,370,369,368,367,366,365,364,363,362,361,360,359,358,357,356,355,354,353,352,351,350,349,348,347,346,345,344,343,342,341,340,339,338,337,336,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,310,309,308,307,306,305,304,303,302,301,\n> 300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,201,\n> 200,199,198,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,170,169,168,167,166,165,164,163,162,161,160,159,158,157,156,155,154,153,152,151,150,149,148,147,146,145,144,143,142,141,140,139,138,137,136,135,134,133,132,131,130,129,128,127,126,125,124,123,122,121,120,119,118,117,116,115,114,113,112,111,110,109,108,107,106,105,104,103,102,101,\n> 100,99,98,97,96,95,94,93,92,91,90,89,88,87,86,85,84,83,82,81,80,79,78,77,76,75,74,73,72,71,70,69,68,67,66,65,64,63,62,61,60,59,58,57,56,55,54,53,52,51,50,49,48,47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);\n> ========================================\n> \n> The above SELECT always spends 1200ms.\n\nIf your user_id is always in a narrow range like this, or even in any range that is a small fraction of the total, then add a range condition, like this:\n\nselect * from event where user_id <= 500 and user_id >= 0 and user_id in (...)\n\nI did this exact same thing in my application and it worked well.\n\nCraig\n", "msg_date": "Sat, 26 Sep 2009 07:59:50 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "Xia Qingran <[email protected]> writes:\n> I have a big performance problem in my SQL select query:\n> select * from event where user_id in\n> (500,499,498, ... ,1,0);\n> The above SELECT always spends 1200ms.\n\nYour EXPLAIN ANALYZE shows that the actual runtime is only about 240ms.\nSo either the planning time is about 1000ms, or transmitting and\ndisplaying the 134K rows produced by the query takes that long, or some\ncombination of the two. I wouldn't be too surprised if it's the data\ndisplay that's slow; but if it's the planning time that you're unhappy\nabout, updating to a more recent PG release might possibly help. 
What\nversion is this anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Sep 2009 13:03:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...) " }, { "msg_contents": "if you reuse that set a lot, how about storing it in a table , and doing the\njoin on db side ? if it is large, it sometimes makes sense to create temp\ntable just for single query (I use that sort of stuff for comparing with few\nM records).\nBut temp tables in that case have to be short lived, as they can't reuse\nspace (no FSM in temporary table world I'm afraid, I hope it will be fixed\nat some stage tho).\n\nif you reuse that set a lot, how about storing it in a table , and doing the join on db side ? if it is large, it sometimes makes sense to create temp table just for single query (I use that sort of stuff for comparing with few M records). \nBut temp tables in that case have to be short lived, as they can't reuse space (no FSM in temporary table world I'm afraid, I hope it will be fixed at some stage tho).", "msg_date": "Sat, 26 Sep 2009 18:58:35 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "On Sun, Sep 27, 2009 at 1:03 AM, Tom Lane <[email protected]> wrote:\n> Xia Qingran <[email protected]> writes:\n>> I have a big performance problem in my SQL select query:\n>> select * from event where user_id in\n>> (500,499,498, ... ,1,0);\n>> The above SELECT always spends 1200ms.\n>\n> Your EXPLAIN ANALYZE shows that the actual runtime is only about 240ms.\n> So either the planning time is about 1000ms, or transmitting and\n> displaying the 134K rows produced by the query takes that long, or some\n> combination of the two.  I wouldn't be too surprised if it's the data\n> display that's slow; but if it's the planning time that you're unhappy\n> about, updating to a more recent PG release might possibly help.  What\n> version is this anyway?\n>\n>                        regards, tom lane\n\nOh, It is a problem.\n\nForgot to talk about my platform. 
I am running PostgreSQL 8.4.0 on\nFreeBSD 7.2-amd64 box, which has dual Xeon 5410 CPUs, 8GB memory and 2\nSATA disks.\n\nAnd my postgresql.conf is listed as follow:\n---------------------------------------------------------------------------------------\n\nlisten_addresses = '*'\t\t# what IP address(es) to listen on;\nport = 5432\t\t\t\t# (change requires restart)\nmax_connections = 88\t\t\t# (change requires restart)\nsuperuser_reserved_connections = 3\nssl = off\t\t\t\t# (change requires restart)\ntcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\ntcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\ntcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\nshared_buffers = 2048MB\t\t\t# min 128kB or max_connections*16kB\ntemp_buffers = 32MB\t\t\t# min 800kB\nmax_prepared_transactions = 150\t\t# can be 0 or more, 0 to shutdown the\nprepared transactions.\nwork_mem = 8MB\t\t\t\t# min 64kB\nmaintenance_work_mem = 1024MB\t\t# min 1MB\nmax_stack_depth = 8MB\t\t\t# min 100kB\nmax_files_per_process = 16384\t\t# min 25\nvacuum_cost_delay = 100\t\t\t# 0-1000 milliseconds\nvacuum_cost_page_hit = 1\t\t# 0-10000 credits\nvacuum_cost_page_miss = 10\t\t# 0-10000 credits\nvacuum_cost_page_dirty = 20\t\t# 0-10000 credits\nvacuum_cost_limit = 500\t\t# 1-10000 credits\nbgwriter_delay = 500ms\t\t\t# 10-10000ms between rounds\nbgwriter_lru_maxpages = 100\t\t# 0-1000 max buffers written/round\nbgwriter_lru_multiplier = 2.0\t\t# 0-10.0 multipler on buffers scanned/round\nfsync = off\t\t\t\t# turns forced synchronization on or off\nsynchronous_commit = off\t\t# immediate fsync at commit\nwal_sync_method = fsync\t\t# the default is the first option\nfull_page_writes = off\t\t\t# recover from partial page writes\nwal_buffers = 2MB\t\t\t# min 32kB\nwal_writer_delay = 200ms\t\t# 1-10000 milliseconds\ncommit_delay = 50\t\t\t# range 0-100000, in microseconds\ncommit_siblings = 5\t\t\t# range 1-1000\ncheckpoint_segments = 32\t\t# in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 6min\t\t# range 30s-1h\ncheckpoint_completion_target = 0.5\t# checkpoint target duration, 0.0 - 1.0\ncheckpoint_warning = 30s\t\t# 0 is off\nenable_bitmapscan = on\nenable_hashagg = on\nenable_hashjoin = on\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = on\nenable_seqscan = on\nenable_sort = on\nenable_tidscan = on\nseq_page_cost = 1.8\t\t\t# measured on an arbitrary scale\nrandom_page_cost = 2\t\t\t# same scale as above\ncpu_tuple_cost = 0.15\t\t\t# same scale as above\ncpu_index_tuple_cost = 0.07\t\t# same scale as above\ncpu_operator_cost = 0.03\t\t# same scale as above\neffective_cache_size = 3072MB\ngeqo = on\ngeqo_threshold = 20\ngeqo_effort = 7 \t\t# range 1-10\ngeqo_pool_size = 0\t\t\t# selects default based on effort\ngeqo_generations = 0\t\t\t# selects default based on effort\ngeqo_selection_bias = 2.0\t\t# range 1.5-2.0\ndefault_statistics_target = 500\t\t# range 1-1000\nconstraint_exclusion = partition\nfrom_collapse_limit = 20\njoin_collapse_limit = 20\t\t# 1 disables collapsing of explicit\nlog_destination = 'syslog'\nsyslog_facility = 'LOCAL2'\nsyslog_ident = 'postgres'\nclient_min_messages = notice\t\t# values in order of decreasing detail:\nlog_min_messages = error\t\t# values in order of decreasing detail:\nlog_error_verbosity = terse\t\t# terse, default, or verbose messages\nlog_min_error_statement = panic\t# values in order of decreasing detail:\nlog_min_duration_statement = -1\t# -1 is disabled, 0 logs all statements\nsilent_mode = on\ndebug_print_parse = off\ndebug_print_rewritten = 
off\ndebug_print_plan = off\ndebug_pretty_print = off\nlog_checkpoints = off\nlog_connections = off\nlog_disconnections = off\nlog_duration = on\nlog_hostname = off\nlog_line_prefix = ''\t\t\t# special values:\nlog_lock_waits = off\t\t\t# log lock waits >= deadlock_timeout\nlog_statement = 'none'\t\t\t# none, ddl, mod, all\nlog_temp_files = -1\t\t\t# log temporary files equal or larger\ntrack_activities = on\ntrack_counts = on\nupdate_process_title = off\nlog_parser_stats = off\nlog_planner_stats = off\nlog_executor_stats = off\nlog_statement_stats = off\nautovacuum = on\t\t\t# Enable autovacuum subprocess? 'on'\nlog_autovacuum_min_duration = 10\t# -1 disables, 0 logs all actions and\nautovacuum_max_workers = 3\t\t# max number of autovacuum subprocesses\nautovacuum_naptime = 10min\t\t# time between autovacuum runs\nautovacuum_vacuum_threshold = 100\t# min number of row updates before\nautovacuum_analyze_threshold = 50\t# min number of row updates before\nautovacuum_vacuum_scale_factor = 0.2\t# fraction of table size before vacuum\nautovacuum_analyze_scale_factor = 0.1\t# fraction of table size before analyze\nautovacuum_freeze_max_age = 200000000\t# maximum XID age before forced vacuum\nautovacuum_vacuum_cost_delay = 30\t# default vacuum cost delay for\nautovacuum_vacuum_cost_limit = 200\t# default vacuum cost limit for\ndatestyle = 'iso, mdy'\nclient_encoding = utf-8\t\t# actually, defaults to database\nlc_messages = 'C'\t\t\t# locale for system error message\nlc_monetary = 'C'\t\t\t# locale for monetary formatting\nlc_numeric = 'C'\t\t\t# locale for number formatting\nlc_time = 'C'\t\t\t\t# locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\ndeadlock_timeout = 60s\nmax_locks_per_transaction = 32\t\t# min 10\nregex_flavor = basic\t\t# advanced, extended, or basic\n---------------------------------------------------------------------------------------\n\nThanks a lot.\n-- \n夏清然\nXia Qingran\[email protected]\nSent from Beijing, 11, China\nJoan Crawford - \"I, Joan Crawford, I believe in the dollar.\nEverything I earn, I spend.\" -\nhttp://www.brainyquote.com/quotes/authors/j/joan_crawford.html\n", "msg_date": "Sun, 27 Sep 2009 14:11:18 +0800", "msg_from": "Xia Qingran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "On Sat, Sep 26, 2009 at 10:59 PM, Craig James\n<[email protected]> wrote:\n>\n> If your user_id is always in a narrow range like this, or even in any range\n> that is a small fraction of the total, then add a range condition, like\n> this:\n>\n> select * from event where user_id <= 500 and user_id >= 0 and user_id in\n> (...)\n>\n> I did this exact same thing in my application and it worked well.\n>\n> Craig\n>\n\nIt is a good idea. But In my application, most of the queries' user_id\nare random and difficult to range.\nThanks anyway.\n\n\n\n-- \n夏清然\nXia Qingran\[email protected]\nSent from Beijing, 11, China\nCharles de Gaulle - \"The better I get to know men, the more I find\nmyself loving dogs.\" -\nhttp://www.brainyquote.com/quotes/authors/c/charles_de_gaulle.html\n", "msg_date": "Sun, 27 Sep 2009 14:13:12 +0800", "msg_from": "Xia Qingran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad performance of SELECT ... 
where id IN (...)" }, { "msg_contents": "Xia Qingran wrote:\n> On Sun, Sep 27, 2009 at 1:03 AM, Tom Lane <[email protected]> wrote:\n>> Xia Qingran <[email protected]> writes:\n>>> I have a big performance problem in my SQL select query:\n>>> select * from event where user_id in\n>>> (500,499,498, ... ,1,0);\n>>> The above SELECT always spends 1200ms.\n>> Your EXPLAIN ANALYZE shows that the actual runtime is only about 240ms.\n>> So either the planning time is about 1000ms, or transmitting and\n>> displaying the 134K rows produced by the query takes that long, or some\n>> combination of the two. I wouldn't be too surprised if it's the data\n>> display that's slow; but if it's the planning time that you're unhappy\n>> about, updating to a more recent PG release might possibly help. What\n>> version is this anyway?\n>>\n>> regards, tom lane\n> \n> Oh, It is a problem.\n\nI don't see where the \"Total runtime\" information is in your first message.\n\nAlso, did you run VACUUM FULL ANALYZE lately?\n\n> Forgot to talk about my platform. I am running PostgreSQL 8.4.0 on\n> FreeBSD 7.2-amd64 box, which has dual Xeon 5410 CPUs, 8GB memory and 2\n> SATA disks.\n> \n> And my postgresql.conf is listed as follow:\n> ---------------------------------------------------------------------------------------\n> \n> listen_addresses = '*'\t\t# what IP address(es) to listen on;\n> port = 5432\t\t\t\t# (change requires restart)\n> max_connections = 88\t\t\t# (change requires restart)\n> superuser_reserved_connections = 3\n> ssl = off\t\t\t\t# (change requires restart)\n> tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n> tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\n> tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n> shared_buffers = 2048MB\t\t\t# min 128kB or max_connections*16kB\n\nFor start I think you will need to make shared_buffers larger than your \nindex to get decent performance - try setting it to 4096 MB and see if \nit helps.\n\n> temp_buffers = 32MB\t\t\t# min 800kB\n> max_prepared_transactions = 150\t\t# can be 0 or more, 0 to shutdown the\n> prepared transactions.\n> work_mem = 8MB\t\t\t\t# min 64kB\n\nDepending on the type of your workload (how many clients are connected \nand how complex are the queries) you might want to increase work_mem \nalso. Try 16 MB - 32 MB or more and see if it helps.\n\n> fsync = off\t\t\t\t# turns forced synchronization on or off\n> synchronous_commit = off\t\t# immediate fsync at commit\n\nOfftopic - you probably know what you are doing by disabling these, right?\n\n", "msg_date": "Wed, 30 Sep 2009 15:20:36 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "Hi Xia,\n\nTry this patch:\n\nhttp://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch\n\nIt's a hack, but it works for us. 
I think you're probably spending\nmost of your query time planning, and this patch helps speed things up\n10x over here.\n\nRegards,\nOmar\n\nOn Sun, Sep 27, 2009 at 5:13 PM, Xia Qingran <[email protected]> wrote:\n> On Sat, Sep 26, 2009 at 10:59 PM, Craig James\n> <[email protected]> wrote:\n>>\n>> If your user_id is always in a narrow range like this, or even in any range\n>> that is a small fraction of the total, then add a range condition, like\n>> this:\n>>\n>> select * from event where user_id <= 500 and user_id >= 0 and user_id in\n>> (...)\n>>\n>> I did this exact same thing in my application and it worked well.\n>>\n>> Craig\n>>\n>\n> It is a good idea. But In my application, most of the queries' user_id\n> are random and difficult to range.\n> Thanks anyway.\n>\n>\n>\n> --\n> 夏清然\n> Xia Qingran\n> [email protected]\n> Sent from Beijing, 11, China\n> Charles de Gaulle  - \"The better I get to know men, the more I find\n> myself loving dogs.\" -\n> http://www.brainyquote.com/quotes/authors/c/charles_de_gaulle.html\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 5 Oct 2009 12:58:52 +1100", "msg_from": "Omar Kilani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "On Sun, Oct 4, 2009 at 9:58 PM, Omar Kilani <[email protected]> wrote:\n> Hi Xia,\n>\n> Try this patch:\n>\n> http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch\n>\n> It's a hack, but it works for us. I think you're probably spending\n> most of your query time planning, and this patch helps speed things up\n> 10x over here.\n\nWoof. I can see that helping in some situations, but what a foot-gun!\n\n...Robert\n", "msg_date": "Mon, 5 Oct 2009 08:01:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "Robert,\n\nOn Mon, Oct 5, 2009 at 11:01 PM, Robert Haas <[email protected]> wrote:\n> On Sun, Oct 4, 2009 at 9:58 PM, Omar Kilani <[email protected]> wrote:\n>> Hi Xia,\n>>\n>> Try this patch:\n>>\n>> http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch\n>>\n>> It's a hack, but it works for us. I think you're probably spending\n>> most of your query time planning, and this patch helps speed things up\n>> 10x over here.\n>\n> Woof.  I can see that helping in some situations, but what a foot-gun!\n\nWe've run that patch for about 4 years (originally coded for us by\nNeil Conway for 8.2, I think), and have never seen any negatives from\nit.\n\nI'm not really sure what the alternatives are -- it never really makes\nsense to get the selectivity for thousands of items in the IN clause.\nI've never seen a different plan for the same query against a DB with\nthat patch vs without -- it just takes a huge amount of time longer to\nrun without it. :)\n\nBut yeah, definitely a hack, and should only be used if needed --\nhopefully there's some sort of official solution on the horizon. :)\n\n> ...Robert\n\nRegards,\nOmar\n", "msg_date": "Mon, 5 Oct 2009 23:24:35 +1100", "msg_from": "Omar Kilani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... 
where id IN (...)" }, { "msg_contents": "On Mon, Oct 5, 2009 at 1:24 PM, Omar Kilani <[email protected]> wrote:\n\n>\n>\n> I'm not really sure what the alternatives are -- it never really makes\n> sense to get the selectivity for thousands of items in the IN clause.\n> I've never seen a different plan for the same query against a DB with\n> that patch vs without -- it just takes a huge amount of time longer to\n> run without it. :)\n>\n> But yeah, definitely a hack, and should only be used if needed --\n> hopefully there's some sort of official solution on the horizon. :)\n>\n\nstart using temporary tables, transactions, and joins.\nDepending on source of the data (if the source is another query, than just\ncombine it in one query with join), otherwise create temp table, fill out\nwith data, and run query with join.\nIf you do all that in transaction, it will be very fast.\n\n-- \nGJ\n\nOn Mon, Oct 5, 2009 at 1:24 PM, Omar Kilani <[email protected]> wrote:\n\n\nI'm not really sure what the alternatives are -- it never really makes\nsense to get the selectivity for thousands of items in the IN clause.\nI've never seen a different plan for the same query against a DB with\nthat patch vs without -- it just takes a huge amount of time longer to\nrun without it. :)\n\nBut yeah, definitely a hack, and should only be used if needed --\nhopefully there's some sort of official solution on the horizon. :)\nstart using temporary tables, transactions, and joins. Depending on source of the data (if the source is another query, than just combine it in one query with join), otherwise create temp table, fill out with data, and run query with join. \nIf you do all that in transaction, it will be very fast. -- GJ", "msg_date": "Mon, 5 Oct 2009 13:30:28 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "On Mon, Oct 5, 2009 at 9:58 AM, Omar Kilani <[email protected]> wrote:\n> Hi Xia,\n>\n> Try this patch:\n>\n> http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch\n>\n> It's a hack, but it works for us. I think you're probably spending\n> most of your query time planning, and this patch helps speed things up\n> 10x over here.\n\nThanks!\nI am trying it.\n\nRegards,\n\nXia Qingran\n\n>\n> Regards,\n> Omar\n>\n> On Sun, Sep 27, 2009 at 5:13 PM, Xia Qingran <[email protected]> wrote:\n>> On Sat, Sep 26, 2009 at 10:59 PM, Craig James\n>> <[email protected]> wrote:\n>>>\n>>> If your user_id is always in a narrow range like this, or even in any range\n>>> that is a small fraction of the total, then add a range condition, like\n>>> this:\n>>>\n>>> select * from event where user_id <= 500 and user_id >= 0 and user_id in\n>>> (...)\n>>>\n>>> I did this exact same thing in my application and it worked well.\n>>>\n>>> Craig\n>>>\n>>\n>> It is a good idea. 
But In my application, most of the queries' user_id\n>> are random and difficult to range.\n>> Thanks anyway.\n>>\n>>\n>>\n>> --\n>> 夏清然\n>> Xia Qingran\n>> [email protected]\n>> Sent from Beijing, 11, China\n>> Charles de Gaulle  - \"The better I get to know men, the more I find\n>> myself loving dogs.\" -\n>> http://www.brainyquote.com/quotes/authors/c/charles_de_gaulle.html\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n\n\n\n-- \n夏清然\nXia Qingran\[email protected]\nSent from Beijing, 11, China\nStephen Leacock - \"I detest life-insurance agents: they always argue\nthat I shall some day die, which is not so.\" -\nhttp://www.brainyquote.com/quotes/authors/s/stephen_leacock.html\n", "msg_date": "Fri, 9 Oct 2009 20:31:54 +0800", "msg_from": "Xia Qingran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" }, { "msg_contents": "On Fri, Oct 09, 2009 at 08:31:54PM +0800, Xia Qingran wrote:\n> On Mon, Oct 5, 2009 at 9:58 AM, Omar Kilani <[email protected]> wrote:\n> > Hi Xia,\n> >\n> > Try this patch:\n> >\n> > http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch\n> >\n> > It's a hack, but it works for us. I think you're probably spending\n> > most of your query time planning, and this patch helps speed things up\n> > 10x over here.\n> \n> Thanks!\n> I am trying it.\n> \n> Regards,\n> \n> Xia Qingran\n> \n\nWe have a similar situation when using DSPAM with a PostgreSQL\nbackend. In that case we used a function like the following to\nspeed up the lookups. I do not know if it would be useful in\nyour situation, but I thought I would post it for the group:\n\nThe original query was of the form:\n\nSELECT uid, token, spam_hits, innocent_hits FROM dspam_token_data\nWHERE uid = 'xxx' AND token IN (...);\n\nThe faster version of the query in the current code is:\n\nSELECT * FROM lookup_tokens(%d, '{...});\n\nwhere lookup_tokens is defined as follows:\n\ncreate function lookup_tokens(integer,bigint[])\n returns setof dspam_token_data\n language plpgsql stable\n as '\ndeclare\n v_rec record;\nbegin\n for v_rec in select * from dspam_token_data\n where uid=$1\n and token in (select $2[i]\n from generate_series(array_lower($2,1),\n array_upper($2,1)) s(i))\n loop\n return next v_rec;\n end loop;\n return;\nend;';\n\nAnyway, you may want to try a similar approach instead of the\nposted code change.\n\nRegards,\nKen\n\n> >\n> > Regards,\n> > Omar\n> >\n> > On Sun, Sep 27, 2009 at 5:13 PM, Xia Qingran <[email protected]> wrote:\n> >> On Sat, Sep 26, 2009 at 10:59 PM, Craig James\n> >> <[email protected]> wrote:\n> >>>\n> >>> If your user_id is always in a narrow range like this, or even in any range\n> >>> that is a small fraction of the total, then add a range condition, like\n> >>> this:\n> >>>\n> >>> select * from event where user_id <= 500 and user_id >= 0 and user_id in\n> >>> (...)\n> >>>\n> >>> I did this exact same thing in my application and it worked well.\n> >>>\n> >>> Craig\n> >>>\n> >>\n> >> It is a good idea. 
But In my application, most of the queries' user_id\n> >> are random and difficult to range.\n> >> Thanks anyway.\n> >>\n> >>\n> >>\n> >> --\n> >> ?????????\n> >> Xia Qingran\n> >> [email protected]\n> >> Sent from Beijing, 11, China\n> >> Charles de Gaulle ??- \"The better I get to know men, the more I find\n> >> myself loving dogs.\" -\n> >> http://www.brainyquote.com/quotes/authors/c/charles_de_gaulle.html\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list ([email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >\n> \n> \n> \n> -- \n> ?????????\n> Xia Qingran\n> [email protected]\n> Sent from Beijing, 11, China\n> Stephen Leacock - \"I detest life-insurance agents: they always argue\n> that I shall some day die, which is not so.\" -\n> http://www.brainyquote.com/quotes/authors/s/stephen_leacock.html\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n", "msg_date": "Fri, 9 Oct 2009 08:09:05 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad performance of SELECT ... where id IN (...)" } ]
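For reference, the temporary table plus join approach suggested in this thread looks roughly like the sketch below. The event table and user_id column come from the original post; the temp table name and the generate_series fill are invented for illustration, since in practice the id list would come from the application:

BEGIN;
CREATE TEMP TABLE wanted_users (user_id int PRIMARY KEY) ON COMMIT DROP;
-- 0..500 only mirrors the example in the post; load whatever ids are really needed
INSERT INTO wanted_users SELECT generate_series(0, 500);
ANALYZE wanted_users;

SELECT e.*
FROM event e
JOIN wanted_users w ON w.user_id = e.user_id;
COMMIT;

The planner then estimates a single join instead of a 500-element IN list, so planning time stays roughly constant as the list grows, which was the point being made; the ANALYZE on the temp table is what lets the join be costed against realistic row counts.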
[ { "msg_contents": "Hi all..\nplease, how can i tune postgres performance?\n\nThanks.\n", "msg_date": "Sun, 27 Sep 2009 23:13:40 -0700 (PDT)", "msg_from": "std pik <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres performance" }, { "msg_contents": "std pik wrote:\n> Hi all..\n> please, how can i tune postgres performance?\n> \n> Thanks.\n\nThats a very generic question. Here are some generic answers:\n\nYou can tune the hardware underneath. Faster hardware = faster pg.\n\nYou can tune the memory usage, and other postgres.conf setting to match \nyour hardware. See the online manuals.\n\nYou can tune a single slow query, use explain analyze.\n\n-Andy\n", "msg_date": "Mon, 28 Sep 2009 09:11:17 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres performance" }, { "msg_contents": "Didn't see the original message so I replied to this one.\n\nOn Mon, Sep 28, 2009 at 8:11 AM, Andy Colson <[email protected]> wrote:\n> std pik wrote:\n>>\n>> Hi all..\n>> please, how can i tune postgres performance?\n\nStart here:\nhttp://www.westnet.com/~gsmith/content/postgresql/\n", "msg_date": "Mon, 28 Sep 2009 15:16:26 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres performance" }, { "msg_contents": "Now why did I only just get this message overnight today, after having\ngotten the preceding two much sooner?\n\n...Robert\n\nOn Mon, Sep 28, 2009 at 2:13 AM, std pik <[email protected]> wrote:\n> Hi all..\n> please, how can i tune postgres performance?\n>\n> Thanks.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 4 Oct 2009 07:06:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Postgres performance" }, { "msg_contents": "Hi,\n\nthere are several performance related issues, thereby it's rather \ndifficult to answer your question shortly.\nYou have to keep in mind not only postgres itself, hardware is also an \nimportant factor.\nDo you have performance problems, which you can describe more detailed ?\n\nregards..GERD..\n\nAm 28.09.2009 um 08:13 schrieb std pik:\n\n> Hi all..\n> please, how can i tune postgres performance?\n>\n> Thanks.\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Sun, 4 Oct 2009 13:08:19 +0200", "msg_from": "Gerd Koenig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres performance" } ]
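As a concrete starting point for the advice above, the usual first step for a single slow query is simply to run it under EXPLAIN ANALYZE (any real query can be substituted for this made-up one):

EXPLAIN ANALYZE SELECT count(*) FROM some_table WHERE some_column = 42;

On the memory side, shared_buffers, effective_cache_size and work_mem in postgresql.conf are the settings most often adjusted first; sensible values depend entirely on the machine and workload, so no numbers are suggested here.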
[ { "msg_contents": "Hello,\n\nI am using PostgreSQL 8.3.7 and I am experiencing an issue similar to \nthe one I've already described some time ago:\nhttp://archives.postgresql.org/pgsql-performance/2009-02/msg00261.php\n\nAgain, adding a LIMIT clause to a query, which is normally executing \nvery fast thanks to an index, makes it perform slow, because the planner \nno longer uses the \"correct\" index.\n\nI have the following table:\n\nCREATE TABLE message (\n message_sid SERIAL PRIMARY KEY,\n from_profile_sid INT NOT NULL REFERENCES profile,\n to_profile_sid INT NOT NULL REFERENCES profile,\n sender_has_deleted BOOLEAN NOT NULL DEFAULT FALSE,\n receiver_has_deleted BOOLEAN NOT NULL DEFAULT FALSE,\n body TEXT,\n datetime TIMESTAMP NOT NULL DEFAULT NOW()\n);\n\n\nWith the following conditional index:\n\nCREATE INDEX message_to_profile_idx ON message (to_profile_sid) WHERE \nNOT receiver_has_deleted;\n\n\nThe query to obtain the list of received messages of a profile is simple \nand executes very fast, because of the index above:\n\ndb=# EXPLAIN ANALYZE SELECT\n *\nFROM\n message\nWHERE\n to_profile_sid = -1\nAND\n NOT receiver_has_deleted\nORDER BY\n message_sid DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=11857.09..11866.19 rows=3640 width=277) (actual \ntime=0.317..0.319 rows=15 loops=1)\n Sort Key: message_sid\n Sort Method: quicksort Memory: 32kB\n -> Bitmap Heap Scan on message (cost=106.44..11641.78 rows=3640 \nwidth=277) (actual time=0.096..0.271 rows=15 loops=1)\n Recheck Cond: ((to_profile_sid = (-1)) AND (NOT \nreceiver_has_deleted))\n -> Bitmap Index Scan on message_to_profile_idx \n(cost=0.00..105.53 rows=3640 width=0) (actual time=0.056..0.056 rows=21 \nloops=1)\n Index Cond: (to_profile_sid = (-1))\n Total runtime: 0.383 ms\n(8 rows)\n\n\nAdding a LIMIT clause to exactly the same query slows its execution more \nthan 20'000 times:\n\ndb=# EXPLAIN ANALYZE SELECT\n *\nFROM\n message\nWHERE\n to_profile_sid = -1\nAND\n NOT receiver_has_deleted\nORDER BY\n message_sid DESC LIMIT 20;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..6513.60 rows=20 width=277) (actual \ntime=0.617..6576.539 rows=15 loops=1)\n -> Index Scan Backward using message_pkey on message \n(cost=0.00..1185474.32 rows=3640 width=277) (actual time=0.617..6576.522 \nrows=15 loops=1)\n Filter: ((NOT receiver_has_deleted) AND (to_profile_sid = (-1)))\n Total runtime: 6576.572 ms\n(4 rows)\n\n\nJust as I was advised in my recent post, I've already increased the \nstatistics of both fields all the way till 1000, analyzed the table and \nreindexed the index:\n\nALTER TABLE message ALTER COLUMN to_profile_sid SET STATISTICS 1000;\nALTER TABLE message ALTER COLUMN receiver_has_deleted SET STATISTICS 1000;\nANALYZE message;\nREINDEX index message_to_profile_idx;\n\n\nThis time, however, the steps above didn't affect the planner in any \nway, it still refuses to use the index \"message_to_profile_idx\" when a \nLIMIT is involved (for this particular value of to_profile_sid).\n\nHere's some statistical data:\n\ndb=# SELECT COUNT(*) FROM message;\n count\n---------\n 1312213\n(1 row)\n\ndb=# SELECT COUNT(*) FROM message WHERE to_profile_sid = -1;\n count\n-------\n 5604\n(1 row)\n\ndb=# SELECT COUNT(*) FROM message WHERE to_profile_sid = -1 AND NOT 
\nreceiver_has_deleted;\n count\n-------\n 15\n(1 row)\n\ndb=# SELECT COUNT(DISTINCT to_profile_sid) FROM message;\n count\n-------\n 8596\n(1 row)\n\ndb=# SELECT AVG(length) FROM (SELECT to_profile_sid, COUNT(*) AS length \nFROM message GROUP BY to_profile_sid) AS freq;\n avg\n----------------------\n 152.6540251279664960\n(1 row)\n\ndb=# SELECT n_distinct FROM pg_stats WHERE tablename='message' AND \nattname='to_profile_sid';\n n_distinct\n------------\n 6277\n(1 row)\n\n\nAlso, the value of -1 for \"to_profile_sid\" is second in the list of \nmost_common_vals in pg_stats, but still I don't understand why a simple \nlimit is blinding the planner for the \"good\" index. Any ideas?\n\nRegards,\n-- \nKouber Saparev\nhttp://kouber.saparev.com/\n", "msg_date": "Mon, 28 Sep 2009 11:43:00 +0300", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "LIMIT confuses the planner (again)" }, { "msg_contents": "On Mon, Sep 28, 2009 at 4:43 AM, Kouber Saparev <[email protected]> wrote:\n> Hello,\n>\n> I am using PostgreSQL 8.3.7 and I am experiencing an issue similar to the\n> one I've already described some time ago:\n> http://archives.postgresql.org/pgsql-performance/2009-02/msg00261.php\n>\n> Again, adding a LIMIT clause to a query, which is normally executing very\n> fast thanks to an index, makes it perform slow, because the planner no\n> longer uses the \"correct\" index.\n>\n> I have the following table:\n>\n> CREATE TABLE message (\n>  message_sid SERIAL PRIMARY KEY,\n>  from_profile_sid INT NOT NULL REFERENCES profile,\n>  to_profile_sid INT NOT NULL REFERENCES profile,\n>  sender_has_deleted BOOLEAN NOT NULL DEFAULT FALSE,\n>  receiver_has_deleted BOOLEAN NOT NULL DEFAULT FALSE,\n>  body TEXT,\n>  datetime TIMESTAMP NOT NULL DEFAULT NOW()\n> );\n>\n>\n> With the following conditional index:\n>\n> CREATE INDEX message_to_profile_idx ON message (to_profile_sid) WHERE NOT\n> receiver_has_deleted;\n>\n>\n> The query to obtain the list of received messages of a profile is simple and\n> executes very fast, because of the index above:\n>\n> db=# EXPLAIN ANALYZE SELECT\n>  *\n> FROM\n>  message\n> WHERE\n>  to_profile_sid = -1\n> AND\n>  NOT receiver_has_deleted\n> ORDER BY\n>  message_sid DESC;\n>                                                                QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n>  Sort  (cost=11857.09..11866.19 rows=3640 width=277) (actual\n> time=0.317..0.319 rows=15 loops=1)\n>   Sort Key: message_sid\n>   Sort Method:  quicksort  Memory: 32kB\n>   ->  Bitmap Heap Scan on message  (cost=106.44..11641.78 rows=3640\n> width=277) (actual time=0.096..0.271 rows=15 loops=1)\n>         Recheck Cond: ((to_profile_sid = (-1)) AND (NOT\n> receiver_has_deleted))\n>         ->  Bitmap Index Scan on message_to_profile_idx (cost=0.00..105.53\n> rows=3640 width=0) (actual time=0.056..0.056 rows=21 loops=1)\n>               Index Cond: (to_profile_sid = (-1))\n>  Total runtime: 0.383 ms\n> (8 rows)\n>\n>\n> Adding a LIMIT clause to exactly the same query slows its execution more\n> than 20'000 times:\n>\n> db=# EXPLAIN ANALYZE SELECT\n>  *\n> FROM\n>  message\n> WHERE\n>  to_profile_sid = -1\n> AND\n>  NOT receiver_has_deleted\n> ORDER BY\n>  message_sid DESC LIMIT 20;\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit 
 (cost=0.00..6513.60 rows=20 width=277) (actual time=0.617..6576.539\n> rows=15 loops=1)\n>   ->  Index Scan Backward using message_pkey on message\n> (cost=0.00..1185474.32 rows=3640 width=277) (actual time=0.617..6576.522\n> rows=15 loops=1)\n>         Filter: ((NOT receiver_has_deleted) AND (to_profile_sid = (-1)))\n>  Total runtime: 6576.572 ms\n> (4 rows)\n>\n>\n> Just as I was advised in my recent post, I've already increased the\n> statistics of both fields all the way till 1000, analyzed the table and\n> reindexed the index:\n>\n> ALTER TABLE message ALTER COLUMN to_profile_sid SET STATISTICS 1000;\n> ALTER TABLE message ALTER COLUMN receiver_has_deleted SET STATISTICS 1000;\n> ANALYZE message;\n> REINDEX index message_to_profile_idx;\n>\n>\n> This time, however, the steps above didn't affect the planner in any way, it\n> still refuses to use the index \"message_to_profile_idx\" when a LIMIT is\n> involved (for this particular value of to_profile_sid).\n>\n> Here's some statistical data:\n>\n> db=# SELECT COUNT(*) FROM message;\n>  count\n> ---------\n>  1312213\n> (1 row)\n>\n> db=# SELECT COUNT(*) FROM message WHERE to_profile_sid = -1;\n>  count\n> -------\n>  5604\n> (1 row)\n>\n> db=# SELECT COUNT(*) FROM message WHERE to_profile_sid = -1 AND NOT\n> receiver_has_deleted;\n>  count\n> -------\n>    15\n> (1 row)\n>\n> db=# SELECT COUNT(DISTINCT to_profile_sid) FROM message;\n>  count\n> -------\n>  8596\n> (1 row)\n>\n> db=# SELECT AVG(length) FROM (SELECT to_profile_sid, COUNT(*) AS length FROM\n> message GROUP BY to_profile_sid) AS freq;\n>         avg\n> ----------------------\n>  152.6540251279664960\n> (1 row)\n>\n> db=# SELECT n_distinct FROM pg_stats WHERE tablename='message' AND\n> attname='to_profile_sid';\n>  n_distinct\n> ------------\n>       6277\n> (1 row)\n>\n>\n> Also, the value of -1 for \"to_profile_sid\" is second in the list of\n> most_common_vals in pg_stats, but still I don't understand why a simple\n> limit is blinding the planner for the \"good\" index. Any ideas?\n\nIt would be good to see what the planner's second choice would be, if\nit didn't have that other index.\n\nBEGIN;\nDROP INDEX message_pkey;\nEXPLAIN ANALYZE ...\nROLLBACK;\n\nHowever, I suspect what's going on here is as follows. When trying to\nestimate the cost of LIMIT, the planner takes the startup cost for the\nsubpath and a pro-rata share of the run cost, based on the number of\nrows being fetched as a fraction of the total number it believes to be\npresent. So if the run cost is estimated to be lower than it really\nis, some other plan with a lower startup cost can look like a better\nchoice, even if the run cost is much higher (because only a tiny\nfraction of the run cost is being counted).\n\nThe reason why the run cost is being misestimated is because the\nplanner is estimating that the fraction of rows where to_profile_sid =\n-1 and NOT receiver_has_deleted is equal to the fraction where\nto_profile_sid = -1 multiplied by the fraction where NOT\nreceived_has_deleted - and it isn't. You might try creating a partial\nindex on message_sid WHERE NOT receiver_has_deleted, and see if that\nhelps.\n\nSee also:\n\nhttp://archives.postgresql.org/pgsql-performance/2009-06/msg00023.php\n\n...Robert\n", "msg_date": "Mon, 28 Sep 2009 07:38:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner (again)" } ]
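One concrete form of the partial index idea mentioned above would be to cover both the filtered column and the ORDER BY column in the same partial index. This is a sketch, not something tested against this schema; the table and column names are from the original post, the index name is invented:

CREATE INDEX message_to_profile_recent_idx
    ON message (to_profile_sid, message_sid DESC)
    WHERE NOT receiver_has_deleted;

With that index the planner can scan just the entries for to_profile_sid = -1 in message_sid order and stop as soon as the LIMIT is satisfied, instead of walking message_pkey backwards and filtering, which is where the 6.5 seconds were going. Per-column DESC in an index needs 8.3 or later, which matches the 8.3.7 mentioned in the post.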
[ { "msg_contents": "I need to retrieve the most recent prices per products from a price list table:\n\nCREATE TABLE listini_anagrafici\n(\n id character varying(36) NOT NULL,\n articolo character varying(18),\n listino character varying(5),\n data_ent date,\n data_fin date,\n prezzo double precision,\n ultimo boolean DEFAULT false,\n date_entered timestamp without time zone NOT NULL,\n date_modified timestamp without time zone NOT NULL,\n created_by character varying(36),\n modified_user_id character varying(36) NOT NULL,\n deleted boolean NOT NULL DEFAULT false,\n CONSTRAINT listini_anagrafici_id_key UNIQUE (id)\n)\n\nI guess the right query is:\n\nselect distinct on (articolo) articolo,data_ent,prezzo from listini_anagrafici order by articolo, data_ent desc\n\nbut it seems that this query runs slowly... about 5/6 seconds.\nthe table contains more or less 500K records, PostgreSQL version is 8.1.11 and the server has 4gb of RAM entirely dedicate to the db.\n\nI've tried adding this index \n\nCREATE INDEX articolo_data_ent ON listini_anagrafici (articoli, data_ent)\n\nbut it doesn't helps. \n\nAs you can see from the explain command (below) the query seems to ignore the index\n\n'Unique (cost=73897.58..76554.94 rows=77765 width=24)'\n' -> Sort (cost=73897.58..75226.26 rows=531472 width=24)'\n' Sort Key: articolo, data_ent'\n' -> Seq Scan on listini_anagrafici (cost=0.00..16603.72 rows=531472 width=24)'\n\nanyone knows how to make this query run faster?\n\n\n\n\n\n\n\n\n\n\n\n \nI need to retrieve the most recent prices per \nproducts from a price list table:\n \nCREATE TABLE listini_anagrafici(  id \ncharacter varying(36) NOT NULL,  articolo character \nvarying(18),  listino character varying(5),  data_ent \ndate,  data_fin date,  prezzo double precision,  \nultimo boolean DEFAULT false,  date_entered timestamp without time zone \nNOT NULL,  date_modified timestamp without time zone NOT \nNULL,  created_by character varying(36),  modified_user_id \ncharacter varying(36) NOT NULL,  deleted boolean NOT NULL DEFAULT \nfalse,  CONSTRAINT listini_anagrafici_id_key UNIQUE \n(id))\n \nI guess the right query is:\n \nselect distinct on (articolo) \narticolo,data_ent,prezzo from listini_anagrafici order by articolo, data_ent \ndesc\n \nbut it seems that this query runs slowly... about \n5/6 seconds.\nthe table contains more or less 500K records, \nPostgreSQL version is 8.1.11 and the server has 4gb of RAM entirely dedicate to \nthe db.\n \nI've tried adding this index \n \nCREATE INDEX articolo_data_ent ON \nlistini_anagrafici (articoli, data_ent)\n \nbut it doesn't helps. \n \nAs you can see from the explain command (below) the \nquery seems to ignore the index\n \n'Unique  (cost=73897.58..76554.94 rows=77765 \nwidth=24)''  ->  Sort  (cost=73897.58..75226.26 \nrows=531472 width=24)''        Sort Key: \narticolo, data_ent''        ->  \nSeq Scan on listini_anagrafici  (cost=0.00..16603.72 rows=531472 \nwidth=24)'\n \nanyone knows how to make this query run \nfaster?", "msg_date": "Mon, 28 Sep 2009 19:18:48 +0200", "msg_from": "\"Sgarbossa Domenico\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with DISTINCT ON" }, { "msg_contents": "The index can produce the sorted output. 
Add a dummy WHERE clause like\narticoli > <min_value> and data_ent > <min_value>.\n\n\n--Imad\n\nOn Mon, Sep 28, 2009 at 10:18 PM, Sgarbossa Domenico\n<[email protected]> wrote:\n>\n> I need to retrieve the most recent prices per products from a price list\n> table:\n>\n> CREATE TABLE listini_anagrafici\n> (\n>   id character varying(36) NOT NULL,\n>   articolo character varying(18),\n>   listino character varying(5),\n>   data_ent date,\n>   data_fin date,\n>   prezzo double precision,\n>   ultimo boolean DEFAULT false,\n>   date_entered timestamp without time zone NOT NULL,\n>   date_modified timestamp without time zone NOT NULL,\n>   created_by character varying(36),\n>   modified_user_id character varying(36) NOT NULL,\n>   deleted boolean NOT NULL DEFAULT false,\n>   CONSTRAINT listini_anagrafici_id_key UNIQUE (id)\n> )\n>\n> I guess the right query is:\n>\n> select distinct on (articolo) articolo,data_ent,prezzo from\n> listini_anagrafici order by articolo, data_ent desc\n>\n> but it seems that this query runs slowly... about 5/6 seconds.\n> the table contains more or less 500K records, PostgreSQL version is 8.1.11\n> and the server has 4gb of RAM entirely dedicate to the db.\n>\n> I've tried adding this index\n>\n> CREATE INDEX articolo_data_ent ON listini_anagrafici (articoli, data_ent)\n>\n> but it doesn't helps.\n>\n> As you can see from the explain command (below) the query seems to ignore\n> the index\n>\n> 'Unique  (cost=73897.58..76554.94 rows=77765 width=24)'\n> '  ->  Sort  (cost=73897.58..75226.26 rows=531472 width=24)'\n> '        Sort Key: articolo, data_ent'\n> '        ->  Seq Scan on listini_anagrafici  (cost=0.00..16603.72\n> rows=531472 width=24)'\n>\n> anyone knows how to make this query run faster?\n>\n>\n>\n>\n", "msg_date": "Sun, 4 Oct 2009 07:27:46 +0500", "msg_from": "imad <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with DISTINCT ON" }, { "msg_contents": "\"Sgarbossa Domenico\" <[email protected]> writes:\n> I guess the right query is:\n\n> select distinct on (articolo) articolo,data_ent,prezzo from listini_anagrafici order by articolo, data_ent desc\n\n> but it seems that this query runs slowly... about 5/6 seconds.\n\n> I've tried adding this index \n> CREATE INDEX articolo_data_ent ON listini_anagrafici (articoli, data_ent)\n> but it doesn't helps. \n\nThat index doesn't match the query ordering. You could do\n\nselect distinct on (articolo) articolo,data_ent,prezzo from listini_anagrafici order by articolo desc, data_ent desc\n\nIn more recent versions of Postgres you could make an index with one\ncolumn ascending and the other descending, but AFAIR 8.1 doesn't have\nthat.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Oct 2009 22:53:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with DISTINCT ON " } ]
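Two ways to act on the advice above, sketched with the thread's table and column names (the second index name is invented; note that the index in the original post was declared on "articoli", which looks like a typo for the "articolo" column):

-- On 8.1, reverse both sort directions so a backward scan of an
-- (articolo, data_ent) index can produce the ordering directly; the groups
-- come back in reverse articolo order, which may or may not matter:
SELECT DISTINCT ON (articolo) articolo, data_ent, prezzo
FROM listini_anagrafici
ORDER BY articolo DESC, data_ent DESC;

-- On 8.3 or later the original ordering could instead be matched by a
-- mixed-direction index; this syntax does not exist in 8.1:
CREATE INDEX articolo_data_ent_desc_idx
    ON listini_anagrafici (articolo, data_ent DESC);

Even with a matching index the planner may still prefer the sequential scan and sort when most of the table is returned, as the follow-up posting of this question with EXPLAIN ANALYZE output shows.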
[ { "msg_contents": "I need to retrieve the most recent prices per products from a price list table:\n\nCREATE TABLE listini_anagrafici\n(\n id character varying(36) NOT NULL,\n articolo character varying(18),\n listino character varying(5),\n data_ent date,\n data_fin date,\n prezzo double precision,\n ultimo boolean DEFAULT false,\n date_entered timestamp without time zone NOT NULL,\n date_modified timestamp without time zone NOT NULL,\n created_by character varying(36),\n modified_user_id character varying(36) NOT NULL,\n deleted boolean NOT NULL DEFAULT false,\n CONSTRAINT listini_anagrafici_id_key UNIQUE (id)\n)\n\nI guess the right query is:\n\nselect distinct on (articolo) articolo,data_ent,prezzo from listini_anagrafici order by articolo, data_ent desc\n\nbut it seems that this query runs slowly... about 5/6 seconds.\nthe table contains more or less 500K records, PostgreSQL version is 8.1.11 and the server has 4gb of RAM entirely dedicate to the db.\n\nI've tried adding this index \n\nCREATE INDEX articolo_data_ent ON listini_anagrafici (articoli, data_ent)\n\nbut it doesn't helps. \n\nAs you can see from the explain command (below) the query seems to ignore the index\n\n'Unique (cost=73893.89..76551.25 rows=88312 width=24) (actual time=4022.578..5076.206 rows=193820 loops=1)'\n' -> Sort (cost=73893.89..75222.57 rows=531472 width=24) (actual time=4022.574..4505.538 rows=531472 loops=1)'\n' Sort Key: articolo, data_ent'\n' -> Seq Scan on listini_anagrafici (cost=0.00..16603.72 rows=531472 width=24) (actual time=0.009..671.797 rows=531472 loops=1)'\n'Total runtime: 5217.452 ms'\n\n\nanyone knows how to make this query run faster?\n\n\n\n\n \n\n\n\n\n\n\n \nI need to retrieve the most recent prices per \nproducts from a price list table:\n \nCREATE TABLE listini_anagrafici(  id \ncharacter varying(36) NOT NULL,  articolo character \nvarying(18),  listino character varying(5),  data_ent \ndate,  data_fin date,  prezzo double precision,  \nultimo boolean DEFAULT false,  date_entered timestamp without time zone \nNOT NULL,  date_modified timestamp without time zone NOT \nNULL,  created_by character varying(36),  modified_user_id \ncharacter varying(36) NOT NULL,  deleted boolean NOT NULL DEFAULT \nfalse,  CONSTRAINT listini_anagrafici_id_key UNIQUE \n(id))\n \nI guess the right query is:\n \nselect distinct on (articolo) \narticolo,data_ent,prezzo from listini_anagrafici order by articolo, data_ent \ndesc\n \nbut it seems that this query runs slowly... about \n5/6 seconds.\nthe table contains more or less 500K records, \nPostgreSQL version is 8.1.11 and the server has 4gb of RAM entirely dedicate to \nthe db.\n \nI've tried adding this index \n \nCREATE INDEX articolo_data_ent ON \nlistini_anagrafici (articoli, data_ent)\n \nbut it doesn't helps. 
\n \nAs you can see from the explain command (below) the \nquery seems to ignore the index\n \n'Unique  (cost=73893.89..76551.25 rows=88312 \nwidth=24) (actual time=4022.578..5076.206 rows=193820 loops=1)''  \n->  Sort  (cost=73893.89..75222.57 rows=531472 width=24) (actual \ntime=4022.574..4505.538 rows=531472 \nloops=1)''        Sort Key: articolo, \ndata_ent''        ->  Seq Scan on \nlistini_anagrafici  (cost=0.00..16603.72 rows=531472 width=24) (actual \ntime=0.009..671.797 rows=531472 loops=1)''Total runtime: 5217.452 \nms'\n \nanyone knows how to make this query run \nfaster?", "msg_date": "Tue, 29 Sep 2009 08:55:17 +0200", "msg_from": "\"Sgarbossa Domenico\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with DISTINCT ON" }, { "msg_contents": "Sgarbossa Domenico wrote:\n> I need to retrieve the most recent prices per products from a price list table:\n\n> select distinct on (articolo) articolo,data_ent,prezzo from listini_anagrafici order by articolo, data_ent desc\n> \n> but it seems that this query runs slowly... about 5/6 seconds.\n> the table contains more or less 500K records, PostgreSQL version is 8.1.11 and the server has 4gb of RAM entirely dedicate to the db.\n\n> 'Unique (cost=73893.89..76551.25 rows=88312 width=24) (actual time=4022.578..5076.206 rows=193820 loops=1)'\n> ' -> Sort (cost=73893.89..75222.57 rows=531472 width=24) (actual time=4022.574..4505.538 rows=531472 loops=1)'\n> ' Sort Key: articolo, data_ent'\n> ' -> Seq Scan on listini_anagrafici (cost=0.00..16603.72 rows=531472 width=24) (actual time=0.009..671.797 rows=531472 loops=1)'\n> 'Total runtime: 5217.452 ms'\n\nYou've got 531472 rows in the table and the query is going to output\n193820 of them. Scanning the whole table is almost certainly the way to go.\n\nIf the table doesn't change much, you could try running a CLUSTER on the\nindex you've created. That will lock the table while it re-orders the\nphysical layout of the rows based on your index though, so it's no good\nif the table is updated much.\n\nFailing that, you could try issuing \"set work_mem = ...\" before the\nquery with increasing sizes for work_mem. That might make the sort\nfaster too.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 29 Sep 2009 09:28:23 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with DISTINCT ON" }, { "msg_contents": "Subject: Re: [PERFORM] Performance problems with DISTINCT ON\n\n\n> Sgarbossa Domenico wrote:\n>> I need to retrieve the most recent prices per products from a price list \n>> table:\n>\n>> select distinct on (articolo) articolo,data_ent,prezzo from \n>> listini_anagrafici order by articolo, data_ent desc\n>>\n>> but it seems that this query runs slowly... about 5/6 seconds.\n>> the table contains more or less 500K records, PostgreSQL version is \n>> 8.1.11 and the server has 4gb of RAM entirely dedicate to the db.\n>\n>> 'Unique (cost=73893.89..76551.25 rows=88312 width=24) (actual \n>> time=4022.578..5076.206 rows=193820 loops=1)'\n>> ' -> Sort (cost=73893.89..75222.57 rows=531472 width=24) (actual \n>> time=4022.574..4505.538 rows=531472 loops=1)'\n>> ' Sort Key: articolo, data_ent'\n>> ' -> Seq Scan on listini_anagrafici (cost=0.00..16603.72 \n>> rows=531472 width=24) (actual time=0.009..671.797 rows=531472 loops=1)'\n>> 'Total runtime: 5217.452 ms'\n>\n> You've got 531472 rows in the table and the query is going to output\n> 193820 of them. 
Scanning the whole table is almost certainly the way to \n> go.\n>\n> If the table doesn't change much, you could try running a CLUSTER on the\n> index you've created. That will lock the table while it re-orders the\n> physical layout of the rows based on your index though, so it's no good\n> if the table is updated much.\n>\n> Failing that, you could try issuing \"set work_mem = ...\" before the\n> query with increasing sizes for work_mem. That might make the sort\n> faster too.\n>\n\nThank you for the answer,\nI've tried as you suggest but the only things that seems make some \ndifferences is the work_mem parameter\nThis helps to reduce the amount of time about for the half (3 seconds) but \nunfortunately this ain't enough.\nIf there are a lot of concurrent request I think it could made the data \nswap to the disk.\nShould I try a different approach to solve this issue?\n\n\n\n\n", "msg_date": "Tue, 29 Sep 2009 14:44:49 +0200", "msg_from": "\"Sgarbossa Domenico\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with DISTINCT ON" }, { "msg_contents": "> Should I try a different approach to solve this issue?\n\nYes. Ask yourself if you *really* need 180k rows.\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Tue, 29 Sep 2009 14:53:13 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with DISTINCT ON" }, { "msg_contents": "Distinct on Postgres 8.1 forces a sort. It may be faster if you restructure\nthe query to use a group by + order by. But that might not help either,\nsince the data might not be large enough for a plan that hash aggregates and\nthen sorts the result to be faster.\n\nAn index on (articolo, data_end desc) might help -- but only if the planner\nthinks that the index scan is faster. You may have to tweak the cost\nparameter for random I/O downward to get it to choose a plan to use that\nindex -- which will be faster if the index and data are in memory, but will\nbe slower if it has to go too much to often to disk.\n\nIf this query is being done a lot, and concurrently, it sounds like the\napplication needs some tweaks. The result might be application cacheable\nfor short intervals of time, for example. Or, if only small bits of the\ntable are updated, a timestamp column and filter to select only the parts\nupdated can allow a client application to merge the updates with a previous\nfull result client side.\n\n\nOn 9/29/09 5:44 AM, \"Sgarbossa Domenico\" <[email protected]>\nwrote:\n\n> Subject: Re: [PERFORM] Performance problems with DISTINCT ON\n> \n> \n>> Sgarbossa Domenico wrote:\n>>> I need to retrieve the most recent prices per products from a price list\n>>> table:\n>> \n>>> select distinct on (articolo) articolo,data_ent,prezzo from\n>>> listini_anagrafici order by articolo, data_ent desc\n>>> \n>>> but it seems that this query runs slowly... 
about 5/6 seconds.\n>>> the table contains more or less 500K records, PostgreSQL version is\n>>> 8.1.11 and the server has 4gb of RAM entirely dedicate to the db.\n>> \n>>> 'Unique (cost=73893.89..76551.25 rows=88312 width=24) (actual\n>>> time=4022.578..5076.206 rows=193820 loops=1)'\n>>> ' -> Sort (cost=73893.89..75222.57 rows=531472 width=24) (actual\n>>> time=4022.574..4505.538 rows=531472 loops=1)'\n>>> ' Sort Key: articolo, data_ent'\n>>> ' -> Seq Scan on listini_anagrafici (cost=0.00..16603.72\n>>> rows=531472 width=24) (actual time=0.009..671.797 rows=531472 loops=1)'\n>>> 'Total runtime: 5217.452 ms'\n>> \n>> You've got 531472 rows in the table and the query is going to output\n>> 193820 of them. Scanning the whole table is almost certainly the way to\n>> go.\n>> \n>> If the table doesn't change much, you could try running a CLUSTER on the\n>> index you've created. That will lock the table while it re-orders the\n>> physical layout of the rows based on your index though, so it's no good\n>> if the table is updated much.\n>> \n>> Failing that, you could try issuing \"set work_mem = ...\" before the\n>> query with increasing sizes for work_mem. That might make the sort\n>> faster too.\n>> \n> \n> Thank you for the answer,\n> I've tried as you suggest but the only things that seems make some\n> differences is the work_mem parameter\n> This helps to reduce the amount of time about for the half (3 seconds) but\n> unfortunately this ain't enough.\n> If there are a lot of concurrent request I think it could made the data\n> swap to the disk.\n> Should I try a different approach to solve this issue?\n> \n> \n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 29 Sep 2009 10:53:17 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with DISTINCT ON" } ]
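One way to flesh out the GROUP BY suggestion made above; this is only a sketch, and unlike DISTINCT ON it can return more than one row per articolo when several rows tie on the latest data_ent:

SELECT la.articolo, la.data_ent, la.prezzo
FROM listini_anagrafici la
JOIN (SELECT articolo, max(data_ent) AS data_ent
      FROM listini_anagrafici
      GROUP BY articolo) latest USING (articolo, data_ent);

-- The tuning knobs mentioned above can be tried per session, for example:
-- SET work_mem = '64MB';
-- SET random_page_cost = 2;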
[ { "msg_contents": "Hi,\n\nI tried to profile postgresql queries with OProfile but I could not do.\nOProfile version: 0.9.5\nPostgreSQL version: 8.4.0\nOS: CentOS 5\n\nI compiled OProfile with \"./configure --with-kernel-support\", \"make\" \nand \"make install\"; also I created a user and a group both named as \n\"oprofile\". User oprofile's default group is oprofile.\n\nPostgreSQL was installed, and there is a db named \"test\". I ran below \ncommands with \"reydan\" user;\n\nsudo opcontrol --init\nsudo opcontrol --setup --no-vmlinux\nmkdir opdeneme (for profile files)\nsudo opcontrol --session-dir=/path/to/opdeneme\nsudo opcontrol --start\nsudo opcontrol --reset\npsql -f deneme.sql test\nsudo opcontrol --dump\nsudo opcontrol --shutdown\nopreport --long-filenames | more (after this command I get below error)\nerror: no sample files found: profile specification too strict ?\nAnd I could not profile with oprofile, what is my fault, please advice..\n\ncontent of deneme.sql:\n\"create table deneme1 as\nselect sid,\n md5((sid*10)::text),\n ((substring(random()::text from 3 for 5))::int+10000) as a\nfrom generate_series(1,1000000) sid;\"\n\nRegards,\n--Reydan\nHi,I tried to profile postgresql queries with OProfile but I could not do. OProfile version: 0.9.5PostgreSQL version: 8.4.0OS: CentOS 5I compiled OProfile with \"./configure --with-kernel-support\", \"make\" and \"make install\"; also I created a user and a group both named as \"oprofile\". User oprofile's default group is oprofile.PostgreSQL was installed, and there is a db named \"test\". I ran below commands with \"reydan\" user;sudo opcontrol --initsudo opcontrol --setup --no-vmlinuxmkdir opdeneme (for profile files)sudo opcontrol --session-dir=/path/to/opdenemesudo opcontrol --startsudo opcontrol --resetpsql -f deneme.sql testsudo opcontrol --dumpsudo opcontrol --shutdownopreport --long-filenames | more (after this command I get below error)error: no sample files found: profile specification too strict ?And I could not profile with oprofile, what is my fault, please advice..content of deneme.sql:\"create table deneme1 asselect  sid,             md5((sid*10)::text),             ((substring(random()::text from 3 for 5))::int+10000) as afrom generate_series(1,1000000) sid;\"Regards,--Reydan", "msg_date": "Tue, 29 Sep 2009 14:07:44 +0300", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "Using OProfile" }, { "msg_contents": "Reydan Cankur <[email protected]> writes:\n> I tried to profile postgresql queries with OProfile but I could not do.\n\nYou would be better off asking this of the oprofile people, as I suspect\nyour problem is \"oprofile doesn't work at all\" not \"oprofile doesn't\nwork with postgres\".\n\nFWIW, oprofile requires kernel support which I think is not there in\nRHEL5/CentOS 5. If it is there, then there would also be an oprofile\npackage available; there should be no need for you to build your own.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Sep 2009 10:19:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using OProfile " } ]
[ { "msg_contents": "Hi,\nI have a pretty small database on my home computer (~25Gb). I have three\n250Gb HDDs.\n\nMy setup was 1 HDD for OS (Windows XP) and the other 2 HDD in RAID 0 for\npostgre database.\nWill I see any performance improvement if I instead have 1 HDD for OS, 1 HDD\nfor pg_xlog and 1HDD for the database?\n\nor do you suggest another setup?\n(I'm not really concerned about the redundancy for the database, that's why\nI used RAID 0 up till now, but would save some time if the performance\ndifference is small compared to 3 independent disks)\n\n/Magnus\n\nHi,I have a pretty small database on my home computer (~25Gb). I have three 250Gb HDDs.My setup was 1 HDD for OS (Windows XP) and the other 2  HDD in RAID 0 for postgre database. Will I see any performance improvement if I instead have 1 HDD for OS, 1 HDD for pg_xlog and 1HDD for the database?\nor do you suggest another setup? (I'm not really concerned about  the redundancy for the database, that's why I used RAID 0 up till now, but would save some time if the performance difference is small compared to 3 independent disks)\n/Magnus", "msg_date": "Tue, 29 Sep 2009 21:13:55 +0200", "msg_from": "mange <[email protected]>", "msg_from_op": true, "msg_subject": "Performance RAID 0" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: mange\n> \n> Hi,\n> I have a pretty small database on my home computer (~25Gb). I \n> have three 250Gb HDDs.\n> \n> My setup was 1 HDD for OS (Windows XP) and the other 2 HDD \n> in RAID 0 for postgre database. \n> Will I see any performance improvement if I instead have 1 \n> HDD for OS, 1 HDD for pg_xlog and 1HDD for the database?\n> \n> or do you suggest another setup? \n> (I'm not really concerned about the redundancy for the \n> database, that's why I used RAID 0 up till now, but would \n> save some time if the performance difference is small \n> compared to 3 independent disks)\n> \n> /Magnus\n> \n\nNo. In your scenario, if you proceed in having individual disks attending\ndata and pg_xlog, I bet performance will be degraded.\nFor maximum performance you should construct your RAID 0 array with all\nthree disks for OS and Postgres.\nOf course thats quite risky. You are tripling your chances the whole box\nwill evaporate in case of a disk failure.\n\n\nCheers.\n\n", "msg_date": "Mon, 5 Oct 2009 12:56:52 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance RAID 0" } ]
[ { "msg_contents": "I'm attempting to implement full-text search and am torn between two techniques:\n\n1) Create multiple GIN indexes on columns I'm going to search against and UNION the results\nor\n2) Create one concatenated column GIN index consisting of the columns that will be searched.\n\nIs there any performance considerations that may make one technique better than the other?\n\nThanks for insight,\nJer\n\n________________________________\nAttention:\nThe information contained in this message and or attachments is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any system and destroy any copies.\n\n\n\n\n\n\n\n\n\nI’m attempting to implement full-text search and am torn between two techniques:\n \n1) Create multiple GIN indexes on columns I’m going to search against and UNION the results\nor\n2) Create one concatenated column GIN index consisting of the columns that will be searched.\n \nIs there any performance considerations that may make one technique better than the other?\n \nThanks for insight,\nJer\n\n\n\nAttention:\nThe information contained in this message and or attachments is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of\n any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any system and destroy any copies.", "msg_date": "Tue, 29 Sep 2009 15:34:22 -0400", "msg_from": "Jeremy Ferrante <[email protected]>", "msg_from_op": true, "msg_subject": "FullTextSearch - UNION individual indexes or concatenated columns\n\tindex ?" }, { "msg_contents": "Jeremy,\n\n> Is there any performance considerations that may make one technique\n> better than the other?\n\n(2) will me much faster. Assuming you're searching all columns, of course.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Tue, 29 Sep 2009 15:58:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FullTextSearch - UNION individual indexes or concatenated\n\tcolumns index ?" } ]
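A sketch of option 2; the thread gives no schema, so the table and column names below are invented. One common form of a single "concatenated" index is an expression index over the combined tsvector:

CREATE INDEX item_fts_idx ON item
    USING gin (to_tsvector('english',
                coalesce(title, '') || ' ' || coalesce(description, '')));

-- The query has to repeat the same expression for the index to be usable:
SELECT *
FROM item
WHERE to_tsvector('english',
        coalesce(title, '') || ' ' || coalesce(description, ''))
      @@ to_tsquery('english', 'postgres & performance');

A stored tsvector column kept up to date by a trigger is the other usual variant; it avoids recomputing the expression at query time at the cost of extra storage and write overhead.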
[ { "msg_contents": "\nEpisode umpteen of the ongoing saga with my GiST indexes.\n\nFor some reason, GiST uses loads of CPU. I have a query that runs entirely \nout of cache, and it takes ages. This much I have tried to fix and failed \nso far.\n\nWhat I would now like to do is to tell postgres about it, so that the \nEXPLAINs are correct. Is there a way to tell Postgres that an operator has \na large CPU cost? I can tell it what the join selectivity is, but I can't \nfind anything about CPU cost.\n\nMatthew\n\n-- \n Unfortunately, university regulations probably prohibit me from eating\n small children in front of the lecture class.\n -- Computer Science Lecturer\n", "msg_date": "Wed, 30 Sep 2009 18:12:29 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "CPU cost of operators" }, { "msg_contents": "On Wed, Sep 30, 2009 at 1:12 PM, Matthew Wakeling <[email protected]> wrote:\n>\n> Episode umpteen of the ongoing saga with my GiST indexes.\n>\n> For some reason, GiST uses loads of CPU. I have a query that runs entirely\n> out of cache, and it takes ages. This much I have tried to fix and failed so\n> far.\n>\n> What I would now like to do is to tell postgres about it, so that the\n> EXPLAINs are correct. Is there a way to tell Postgres that an operator has a\n> large CPU cost? I can tell it what the join selectivity is, but I can't find\n> anything about CPU cost.\n\nNot that I know of, but seems like it would be a reasonable extension.\n\n...Robert\n", "msg_date": "Wed, 30 Sep 2009 16:13:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU cost of operators" }, { "msg_contents": "On Wed, Sep 30, 2009 at 4:13 PM, Robert Haas <[email protected]> wrote:\n> On Wed, Sep 30, 2009 at 1:12 PM, Matthew Wakeling <[email protected]> wrote:\n>>\n>> Episode umpteen of the ongoing saga with my GiST indexes.\n>>\n>> For some reason, GiST uses loads of CPU. I have a query that runs entirely\n>> out of cache, and it takes ages. This much I have tried to fix and failed so\n>> far.\n>>\n>> What I would now like to do is to tell postgres about it, so that the\n>> EXPLAINs are correct. Is there a way to tell Postgres that an operator has a\n>> large CPU cost? I can tell it what the join selectivity is, but I can't find\n>> anything about CPU cost.\n>\n> Not that I know of, but seems like it would be a reasonable extension.\n\nEr, wait... if you set the 'COST' parameter for the backing function,\ndoes that work?\n.\n...Robert\n", "msg_date": "Wed, 30 Sep 2009 16:14:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU cost of operators" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Er, wait... if you set the 'COST' parameter for the backing function,\n> does that work?\n\nIt's supposed to...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Sep 2009 16:35:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU cost of operators " }, { "msg_contents": "On Wed, 30 Sep 2009, Robert Haas wrote:\n> Er, wait... if you set the 'COST' parameter for the backing function,\n> does that work?\n\nAh, right. I was looking at CREATE OPERATOR, not CREATE FUNCTION.\n\nThanks,\n\nMatthew\n\n-- \n Bashir: The point is, if you lie all the time, nobody will believe you, even\n when you're telling the truth. (RE: The boy who cried wolf)\n Garak: Are you sure that's the point, Doctor?\n Bashir: What else could it be? 
-- Star Trek DS9\n Garak: That you should never tell the same lie twice. -- Improbable Cause\n", "msg_date": "Thu, 1 Oct 2009 10:47:39 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU cost of operators" } ]
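A sketch of the approach confirmed above; the function name and argument types are placeholders, since the thread does not name the operator. The COST clause (available on CREATE FUNCTION and ALTER FUNCTION from 8.3 on) is what the planner multiplies by cpu_operator_cost for each call:

-- Tell the planner that the function backing the expensive operator is
-- costly; the default COST is 1 for C-language functions:
ALTER FUNCTION my_overlap_fn(my_seg_type, my_seg_type) COST 1000;

The same clause can be given at definition time, e.g. CREATE FUNCTION ... LANGUAGE c IMMUTABLE STRICT COST 1000.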
[ { "msg_contents": "Hello,\n\nwe're currently facing long running insert statements with durations ~15sec (far\ntoo long...) and I've no idea what's the reason for this issue.\n\nAn \"explain analyze\" of the insert statement tells me:\n############ SNIP #########################################################\nbegin;\nBEGIN\ntisys=# explain analyze INSERT INTO \"Transport\" (\"MasterCurrency\",\n\"Timestamp\", \"Id\", \"Number\", \"DeliveryNumbers\", \"Plant\", \"Weight\",\n\"WeightNet\", \"WeightTara\", \"WeightUnit\", \"Length\", \"LengthUnit\", \"Volume\",\n\"VolumeUnit\", \"Distance\", \"DistanceUnit\", \"Zone\",\n\"TransportationMeansDescription\", \"Archived\", \"Compressed\",\n\"TransportationModeId\", \"TransportationMeansId\", \"SId\",\n\"CId\", \"TransportAssignmentModeId\", \"TransportStatusId\",\n\"TransportBookingStatusId\", \"CGId\") VALUES ('EUR', '2009-09-25\n11:44:04.000000', 136, '432', '516', 'Standard', '16000', '0', '0', 'kg', '9',\n'm', '22', 'cbm', '0', '', '0', 'Tautliner', FALSE, FALSE, 2, 2400, 6138, 11479,\n10, 60, NULL,NULL);\n QUERY PLAN\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.05 rows=1 width=0) (actual time=0.026..0.027 rows=1 loops=1)\n Trigger NotReceivedTransport_Delete: time=24658.394 calls=1\n Trigger for constraint transport_c_id: time=69.148 calls=1\n Trigger for constraint transport_cg_id: time=0.152 calls=1\n Trigger for constraint transport_s_id: time=0.220 calls=1\n Trigger for constraint transport_transportassignmentmode_id: time=0.369 calls=1\n Trigger for constraint transport_transportationmeans_id: time=0.344 calls=1\n Trigger for constraint transport_transportationmode_id: time=0.296 calls=1\n Trigger for constraint transport_transportstatus_id: time=0.315 calls=1\n Trigger for constraint transport_transportbookingstatus_id: time=0.006 calls=1\n Total runtime: 25453.821 ms\n(11 rows)\n\ntisys=# rollback;\nROLLBACK\n############# SNIP END #####################################################\n\nThis output tells me that the trigger is the problem, but inside the trigger\nthere's only one delete statement =>\n############## SNIP ########################################################\nCREATE OR REPLACE FUNCTION \"NotReceivedTransport_Delete\"()\n RETURNS trigger AS\n$BODY$\nBEGIN\n IF (NEW.\"TransportStatusId\" = 60) THEN\n DELETE FROM \"NotReceivedTransport\" WHERE \"SId\" =\nNEW.\"SId\" AND \"CId\" = NEW.\"CId\" AND \"ShipperTransportNumber\" = NEW.\"Number\";\n END IF;\n RETURN NEW;\nEND;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100;\n############### SNIP END ####################################################\n\nafterwards I tried to explain analyze this delete statement and got =>\n############## SNIP ########################################################\nexplain analyze DELETE FROM \"NotReceivedTransport\" WHERE\n\"SId\" = 11479 AND \"CId\" = 11479 AND\n\"ShipperTransportNumber\" = '100432';\n QUERY PLAN\n----------------------------------------------------------------------------------\n Bitmap Heap Scan on \"NotReceivedTransport\" (cost=20.35..3939.16 rows=1\nwidth=6) (actual time=94.625..94.625 rows=0 loops=1)\n Recheck Cond: (\"CId\" = 11479)\n Filter: ((\"SId\" = 11479) AND ((\"ShipperTransportNumber\")::text\n= '100432'::text))\n -> Bitmap Index Scan on notreceivedtransport_index_cid\n(cost=0.00..20.35 rows=1060 width=0) (actual time=2.144..2.144 rows=6347 loops=1)\n Index Cond: (\"CarrierCustomerId\" = 11479)\n Total runtime: 94.874 ms\n(6 
rows)\n############## SNIP END ####################################################\n\n\nI'm quite sure that the difference from 94ms (explain of the delete statement)\nto 24s (duration in the trigger) is not only due to some overhead in trigger\nhandling...but I've no idea what else we can check..?!?\n\nWe're using postgresql 8.3.7 on openSuse 10.3 64bit and a snippet of vmstat\nlooks like:\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\nr b swpd free buff cache si so bi bo in cs us sy id wa\n3 0 856 114472 62800 7530800 0 0 53 54 0 0 4 0 93 3\n1 1 856 114264 62820 7530920 0 0 56 956 2641 2822 15 0 80 5\n1 1 856 114264 62828 7530936 0 0 8 632 2433 2586 16 0 80 3\n0 0 856 114140 62836 7530944 0 0 0 424 1980 2085 14 0 82 3\n0 0 856 114140 62840 7530952 0 0 0 488 1915 2008 4 0 92 4\n3 0 856 114264 62852 7530972 0 0 40 700 2009 2149 16 0 80 4\n2 0 856 113456 62864 7530992 0 0 8 724 2576 2669 6 1 90 3\n2 0 856 132664 62872 7531008 0 0 8 804 2580 2764 19 1 76 3\n2 1 856 132180 62876 7531020 0 0 0 760 2545 2782 19 1 77 4\n1 0 856 124896 62888 7531044 0 0 24 512 2943 3128 9 1 86 3\n1 0 856 121948 62900 7531240 0 0 232 716 2005 2063 5 1 88 6\n\nwaiting for I/O doesn't seem to be the problem..?!?!\n\nany help appreciated....GERD....\n", "msg_date": "Thu, 01 Oct 2009 09:20:09 +0200", "msg_from": "=?ISO-8859-2?Q?Gerd_K=F6nig?= <[email protected]>", "msg_from_op": true, "msg_subject": "long running insert statement" }, { "msg_contents": "On Thu, 1 Oct 2009, Gerd König wrote:\n> Trigger NotReceivedTransport_Delete: time=24658.394 calls=1\n\nYeah, it's pretty obvious this is the problem.\n\n> explain analyze DELETE FROM \"NotReceivedTransport\" WHERE\n> \"SId\" = 11479 AND \"CId\" = 11479 AND\n> \"ShipperTransportNumber\" = '100432';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------\n> Bitmap Heap Scan on \"NotReceivedTransport\" (cost=20.35..3939.16 rows=1\n> width=6) (actual time=94.625..94.625 rows=0 loops=1)\n> Recheck Cond: (\"CId\" = 11479)\n> Filter: ((\"SId\" = 11479) AND ((\"ShipperTransportNumber\")::text\n> = '100432'::text))\n> -> Bitmap Index Scan on notreceivedtransport_index_cid\n> (cost=0.00..20.35 rows=1060 width=0) (actual time=2.144..2.144 rows=6347 loops=1)\n> Index Cond: (\"CarrierCustomerId\" = 11479)\n> Total runtime: 94.874 ms\n> (6 rows)\n\nMaybe it's cached this time.\n\nIn any case, you have a bitmap index scan which is fetching 6347 rows and \nthen filtering that down to zero. Assuming one seek per row, that means \n6347 disc seeks, which is about 3.8 ms per seek - better than you would \nexpect from a disc. This means that the time taken is quite reasonable for \nwhat you are asking it to do.\n\nTo fix this, I suggest creating an index on NotReceivedTransport(SId, CId, \nShipperTransportNumber). 
Then, the index will be able to immediately see \nthat there are no rows to delete.\n\nMatthew\n\n-- \n \"We have always been quite clear that Win95 and Win98 are not the systems to\n use if you are in a hostile security environment.\" \"We absolutely do recognize\n that the Internet is a hostile environment.\" Paul Leach <[email protected]>", "msg_date": "Thu, 1 Oct 2009 11:15:05 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long running insert statement" }, { "msg_contents": "=?ISO-8859-2?Q?Gerd_K=F6nig?= <[email protected]> writes:\n> I'm quite sure that the difference from 94ms (explain of the delete statement)\n> to 24s (duration in the trigger) is not only due to some overhead in trigger\n> handling...but I've no idea what else we can check..?!?\n\nThere are two possible explanations for the time difference:\n\n1. The second time around, the relevant rows were already in cache.\n\n2. You might not actually be testing the same plan. The query that's\nbeing executed by the trigger function is parameterized. The manual\nequivalent would look about like this:\n\nprepare foo(int,int,text) as\nDELETE FROM \"NotReceivedTransport\" WHERE \"SId\" =\n$1 AND \"CId\" = $2 AND \"ShipperTransportNumber\" = $3;\n\nexplain analyze execute foo(11479,11479,'100432');\n\n(Note that I'm guessing as to the parameter data types.)\n\nIt seems possible that without knowledge of the exact Cid value being\nsearched for, the planner would choose not to use the index on that\ncolumn. As Matthew already noted, this index is pretty marginal for\nthis query anyway, and the planner might well only want to use it for\nless-common Cid values.\n\nI agree with Matthew's solution --- an index better adapted to this\nquery will probably be worth its maintenance overhead. But if you\nwant to understand the behavior you were seeing in trying to\ninvestigate, I think it's one of the two issues above.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Oct 2009 10:53:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long running insert statement " }, { "msg_contents": "Hello Matthew, hello Tom,\n\nthanks for your reply.\n...and yes, the hint with the newly created index solved the problem.\n\nkind regards...GERD...\n\nTom Lane wrote:\n> =?ISO-8859-2?Q?Gerd_K=F6nig?= <[email protected]> writes:\n>> I'm quite sure that the difference from 94ms (explain of the delete statement)\n>> to 24s (duration in the trigger) is not only due to some overhead in trigger\n>> handling...but I've no idea what else we can check..?!?\n> \n> There are two possible explanations for the time difference:\n> \n> 1. The second time around, the relevant rows were already in cache.\n> \n> 2. You might not actually be testing the same plan. The query that's\n> being executed by the trigger function is parameterized. The manual\n> equivalent would look about like this:\n> \n> prepare foo(int,int,text) as\n> DELETE FROM \"NotReceivedTransport\" WHERE \"SId\" =\n> $1 AND \"CId\" = $2 AND \"ShipperTransportNumber\" = $3;\n> \n> explain analyze execute foo(11479,11479,'100432');\n> \n> (Note that I'm guessing as to the parameter data types.)\n> \n> It seems possible that without knowledge of the exact Cid value being\n> searched for, the planner would choose not to use the index on that\n> column. 
As Matthew already noted, this index is pretty marginal for\n> this query anyway, and the planner might well only want to use it for\n> less-common Cid values.\n> \n> I agree with Matthew's solution --- an index better adapted to this\n> query will probably be worth its maintenance overhead. But if you\n> want to understand the behavior you were seeing in trying to\n> investigate, I think it's one of the two issues above.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n/===============================\\\n| Gerd Kďż˝nig\n| - Infrastruktur -\n|\n| TRANSPOREON GmbH\n| Pfarrer-Weiss-Weg 12\n| DE - 89077 Ulm\n|\n| Tel: +49 [0]731 16906 16\n| Fax: +49 [0]731 16906 99\n| Web: www.transporeon.com\n|\n\\===============================/\n\n\n\nBleiben Sie auf dem Laufenden.\nJetzt den Transporeon Newsletter abonnieren!\nhttp://www.transporeon.com/unternehmen_newsletter.shtml\n\n\nTRANSPOREON GmbH, Amtsgericht Ulm, HRB 722056\nGeschďż˝ftsf.: Peter Fďż˝rster, Roland Hďż˝tzl, Marc-Oliver Simon\n", "msg_date": "Fri, 02 Oct 2009 07:44:12 +0200", "msg_from": "=?ISO-8859-2?Q?Gerd_K=F6nig?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: long running insert statement" } ]
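The two fixes discussed above, written out with the names from the thread (the index name is invented, and the parameter types follow the guess made above):

CREATE INDEX notreceivedtransport_sid_cid_number_idx
    ON "NotReceivedTransport" ("SId", "CId", "ShipperTransportNumber");

-- To see the plan the trigger really runs (a parameterized plan), test it
-- through a prepared statement; EXPLAIN ANALYZE actually executes the
-- DELETE, so wrap it in a transaction and roll back:
BEGIN;
PREPARE del_nrt (int, int, text) AS
    DELETE FROM "NotReceivedTransport"
    WHERE "SId" = $1 AND "CId" = $2 AND "ShipperTransportNumber" = $3;
EXPLAIN ANALYZE EXECUTE del_nrt(11479, 11479, '100432');
ROLLBACK;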
[ { "msg_contents": "In some docs i read that shared buffer must be increased based on the\nmaximum dataset size. For my scenario the dataset size is relative small\nless then a Gb, but database# handled by a server is nearly 200db per\nserver and average connection# to server will be >500 (approx 5/per each\nDB). So for this scenario will increase in shared buffer will increase the\nperformance.\nFYI: RAM in 8GB\n\nRegards,\nArvind S\n\n*\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison*\n\nIn some docs i read that shared buffer must be increased based on the maximum dataset size. For my scenario the dataset size is relative small less then a Gb, but database#  handled by a server is nearly 200db per server and average connection# to server will be >500 (approx 5/per each DB). So for this scenario will increase in shared buffer will increase the performance. \n\nFYI: RAM in 8GBRegards,Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"-Thomas Edison", "msg_date": "Thu, 1 Oct 2009 13:41:33 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Confusion on shared buffer" }, { "msg_contents": "On Thu, Oct 1, 2009 at 4:11 AM, S Arvind <[email protected]> wrote:\n> In some docs i read that shared buffer must be increased based on the\n> maximum dataset size. For my scenario the dataset size is relative small\n> less then a Gb, but database#  handled by a server is nearly 200db per\n> server and average connection# to server will be >500 (approx 5/per each\n> DB). So for this scenario will increase in shared buffer will increase the\n> performance.\n> FYI: RAM in 8GB\n\nI'm pretty sure that won't help to make shared buffers larger than the\nsize of your entire cluster.\n\n...Robert\n", "msg_date": "Fri, 2 Oct 2009 15:19:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusion on shared buffer" }, { "msg_contents": "Thanks Robert,\n So for our scenario what is the most important factor to be noted\nfor performance.\n\n-Arvind S\n\n\nOn Sat, Oct 3, 2009 at 12:49 AM, Robert Haas <[email protected]> wrote:\n\n> On Thu, Oct 1, 2009 at 4:11 AM, S Arvind <[email protected]> wrote:\n> > In some docs i read that shared buffer must be increased based on the\n> > maximum dataset size. For my scenario the dataset size is relative small\n> > less then a Gb, but database# handled by a server is nearly 200db per\n> > server and average connection# to server will be >500 (approx 5/per each\n> > DB). So for this scenario will increase in shared buffer will increase\n> the\n> > performance.\n> > FYI: RAM in 8GB\n>\n> I'm pretty sure that won't help to make shared buffers larger than the\n> size of your entire cluster.\n>\n> ...Robert\n>\n\nThanks Robert,         So for our scenario what is the most important factor to be noted for performance.-Arvind SOn Sat, Oct 3, 2009 at 12:49 AM, Robert Haas <[email protected]> wrote:\nOn Thu, Oct 1, 2009 at 4:11 AM, S Arvind <[email protected]> wrote:\n\n\n> In some docs i read that shared buffer must be increased based on the\n> maximum dataset size. For my scenario the dataset size is relative small\n> less then a Gb, but database#  handled by a server is nearly 200db per\n> server and average connection# to server will be >500 (approx 5/per each\n> DB). 
So for this scenario will increase in shared buffer will increase the\n> performance.\n> FYI: RAM in 8GB\n\nI'm pretty sure that won't help to make shared buffers larger than the\nsize of your entire cluster.\n\n...Robert", "msg_date": "Sat, 3 Oct 2009 11:41:41 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confusion on shared buffer" }, { "msg_contents": "On Sat, Oct 3, 2009 at 2:11 AM, S Arvind <[email protected]> wrote:\n> Thanks Robert,\n>          So for our scenario what is the most important factor to be noted\n> for performance.\n\nTough to say without benchmarking, but if you have a lot of small\ndatabases that easily fit in RAM, and a lot of concurrent connections,\nI would think you'd want to spend your hardware $ on maximizing the\nnumber of cores.\n\nBut there are many in this forum who have much more experience with\nthese things than me, so take that with a grain of salt...\n\n(You might also want to look at consolidating some of those databases\n- maybe use one database with multiple schemas - that would probably\nhelp performance significantly.)\n\n...Robert\n", "msg_date": "Sat, 3 Oct 2009 21:02:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusion on shared buffer" }, { "msg_contents": "On Sun, Oct 4, 2009 at 6:32 AM, Robert Haas <[email protected]> wrote:\n\n> On Sat, Oct 3, 2009 at 2:11 AM, S Arvind <[email protected]> wrote:\n> > Thanks Robert,\n> > So for our scenario what is the most important factor to be\n> noted\n> > for performance.\n>\n> Tough to say without benchmarking, but if you have a lot of small\n> databases that easily fit in RAM, and a lot of concurrent connections,\n> I would think you'd want to spend your hardware $ on maximizing the\n> number of cores.\n>\n> But there are many in this forum who have much more experience with\n> these things than me, so take that with a grain of salt...\n>\n> (You might also want to look at consolidating some of those databases\n> - maybe use one database with multiple schemas - that would probably\n> help performance significantly.)\n>\n>\nI am not sure I understand the reasoning behind it! 
As long as they are\ndifferent objects, how would it help performance if tables are stored in\nseparate schema, or in separate databases; or are you referring to the\ndifference in size of system tables and the performance improvement\nresulting from keeping all metadata in a single catalog.\n\nBest regards,\n-- \nLets call it Postgres\n\ngurjeet[.singh]@EnterpriseDB.com\n\nEnterpriseDB http://www.enterprisedb.com\n\nsingh.gurjeet@{ gmail | yahoo }.com\nTwitter/Skype: singh_gurjeet\n\n\nMail sent from my BlackLaptop device\n\nOn Sun, Oct 4, 2009 at 6:32 AM, Robert Haas <[email protected]> wrote:\nOn Sat, Oct 3, 2009 at 2:11 AM, S Arvind <[email protected]> wrote:\n> Thanks Robert,\n>          So for our scenario what is the most important factor to be noted\n> for performance.\n\nTough to say without benchmarking, but if you have a lot of small\ndatabases that easily fit in RAM, and a lot of concurrent connections,\nI would think you'd want to spend your hardware $ on maximizing the\nnumber of cores.\n\nBut there are many in this forum who have much more experience with\nthese things than me, so take that with a grain of salt...\n\n(You might also want to look at consolidating some of those databases\n- maybe use one database with multiple schemas - that would probably\nhelp performance significantly.)\nI am not sure I understand the reasoning behind it! As long as they are different objects, how would it help performance if tables are stored in separate schema, or in separate databases; or are you referring to the difference in size of system tables and the performance improvement resulting from keeping all metadata in a single catalog.\nBest regards,-- Lets call it Postgresgurjeet[.singh]@EnterpriseDB.comEnterpriseDB      http://www.enterprisedb.comsingh.gurjeet@{ gmail | yahoo }.com\n\nTwitter/Skype: singh_gurjeetMail sent from my BlackLaptop device", "msg_date": "Sun, 4 Oct 2009 18:58:46 +0530", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusion on shared buffer" }, { "msg_contents": "On Sun, Oct 4, 2009 at 9:28 AM, Gurjeet Singh <[email protected]> wrote:\n> On Sun, Oct 4, 2009 at 6:32 AM, Robert Haas <[email protected]> wrote:\n>>\n>> On Sat, Oct 3, 2009 at 2:11 AM, S Arvind <[email protected]> wrote:\n>> > Thanks Robert,\n>> >          So for our scenario what is the most important factor to be\n>> > noted\n>> > for performance.\n>>\n>> Tough to say without benchmarking, but if you have a lot of small\n>> databases that easily fit in RAM, and a lot of concurrent connections,\n>> I would think you'd want to spend your hardware $ on maximizing the\n>> number of cores.\n>>\n>> But there are many in this forum who have much more experience with\n>> these things than me, so take that with a grain of salt...\n>>\n>> (You might also want to look at consolidating some of those databases\n>> - maybe use one database with multiple schemas - that would probably\n>> help performance significantly.)\n>>\n>\n> I am not sure I understand the reasoning behind it! 
As long as they are\n> different objects, how would it help performance if tables are stored in\n> separate schema, or in separate databases; or are you referring to the\n> difference in size of system tables and the performance improvement\n> resulting from keeping all metadata in a single catalog.\n\nYep, if it's all one database you don't have all the different system\ncatalog page competing for space in shared buffers, which is bad\nespecially for a case like this where the individual databases are\nquite small. I haven't actually measured this myself (so you\nshouldn't believe me), but there have been other comments about this\non this list from time to time. Large numbers of databases seem to\nhurt the stats collector, too.\n\n...Robert\n", "msg_date": "Sun, 4 Oct 2009 15:30:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusion on shared buffer" } ]
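A rough sketch of the consolidation idea mentioned above; every name here is invented, and whether it is workable depends on how strictly the 200 databases must be isolated from each other:

-- One database, one schema per former database:
CREATE SCHEMA customer_0001;
CREATE SCHEMA customer_0002;
-- ... restore or recreate each customer's tables inside its own schema ...

-- Point each application role at its schema so unqualified table names in
-- the application keep working:
ALTER ROLE customer_0001_app SET search_path = customer_0001, public;

This keeps a single set of system catalogs competing for shared_buffers instead of 200, which is the effect described above.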
[ { "msg_contents": "Hi everyone,\n What is the best Linux flavor for server which runs postgres alone.\nThe postgres must handle greater number of database around 200+. Performance\non speed is the vital factor.\nIs it FreeBSD, CentOS, Fedora, Redhat xxx??\n\n-Arvind S\n\nHi everyone,      What is the best Linux flavor for server which runs postgres alone. The postgres must handle greater number of database around 200+. Performance on speed is the vital factor.Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n-Arvind S", "msg_date": "Thu, 1 Oct 2009 15:16:59 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Best suiting OS" }, { "msg_contents": "On Thu, 1 Oct 2009, S Arvind wrote:\n>       What is the best Linux flavor for server which runs postgres alone. The postgres\n> must handle greater number of database around 200+. Performance on speed is the vital\n> factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\nFor starters, FreeBSD isn't Linux at all. Secondly, the other three \noptions you have listed are all Red Hat versions - not much variety there.\n\nI know that some people swear by Red Hat, but I personally have had \nnothing but trouble from such installations, especially when trying to \nupgrade to a newer version of Postgres. We have just switched a machine \nfrom Red Hat to Debian because of this very problem. I can heartily \nrecommend Debian, as it distributes new versions of Postgres very quickly \nand allows you to continuously upgrade without any problems. For \ncomparison, with Red Hat, you will need to upgrade to a whole new \ndistribution whenever you want updated software, which is a much bigger \nundertaking.\n\nAs far as performance goes (and someone will probably contradict me here) \nall the different Linux distributions should be roughly equivalent, as \nlong as they are up to date and well tuned.\n\nMatthew\n\n-- \n And why do I do it that way? Because I wish to remain sane. Um, actually,\n maybe I should just say I don't want to be any worse than I already am.\n - Computer Science Lecturer", "msg_date": "Thu, 1 Oct 2009 11:00:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "S Arvind wrote:\n> Hi everyone,\n> What is the best Linux flavor for server which runs postgres \n> alone. The postgres must handle greater number of database around 200+. \n> Performance on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n> \n> -Arvind S\n> \nI do not know the others, and doubt it makes much difference.\n\nMy feelings are that if you are running a server, you want a stable one, not\none that is continuously updated for latest and greatest features. Hence I\nwould rule out Fedora and its ilk.\n\nIf you run Redhat, I would advise the most recent; i.e., Red Hat Enterprise\nLinux 5, since they do not add any new features and only correct errors.\nCentOS is the same as Red Hat, but you probably get better support from Red\nHat if you need it -- though you pay for it.\n\n-- \n .~. 
Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 05:55:01 up 14:54, 4 users, load average: 5.40, 5.25, 5.04\n", "msg_date": "Thu, 01 Oct 2009 06:04:00 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "For example i mentioned few linux name only, if any one linux other then\nthis also u can prescribe. Our servers needs to be more stable one, as Jean\ntold we cant upgrade our OS often. For the Postgres8.3 can u tell me the\nbest one. Factor is purely performance and i/o since our storage server\nseprate .\n\nThanks,\nArvind S\n\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison\n\n\nOn Thu, Oct 1, 2009 at 3:34 PM, Jean-David Beyer <[email protected]>wrote:\n\n> S Arvind wrote:\n>\n>> Hi everyone,\n>> What is the best Linux flavor for server which runs postgres alone.\n>> The postgres must handle greater number of database around 200+. Performance\n>> on speed is the vital factor.\n>> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n>>\n>> -Arvind S\n>>\n>> I do not know the others, and doubt it makes much difference.\n>\n> My feelings are that if you are running a server, you want a stable one,\n> not\n> one that is continuously updated for latest and greatest features. Hence I\n> would rule out Fedora and its ilk.\n>\n> If you run Redhat, I would advise the most recent; i.e., Red Hat Enterprise\n> Linux 5, since they do not add any new features and only correct errors.\n> CentOS is the same as Red Hat, but you probably get better support from Red\n> Hat if you need it -- though you pay for it.\n>\n> --\n> .~. Jean-David Beyer Registered Linux User 85642.\n> /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n> /( )\\ Shrewsbury, New Jersey http://counter.li.org\n> ^^-^^ 05:55:01 up 14:54, 4 users, load average: 5.40, 5.25, 5.04\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nFor example i mentioned few linux name only, if any one linux other then this also u can prescribe. Our servers needs to be more stable one, as Jean told we cant upgrade our OS often. For the Postgres8.3 can u tell me the best one. Factor is purely performance and i/o since our storage server seprate .\nThanks,Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"-Thomas Edison\nOn Thu, Oct 1, 2009 at 3:34 PM, Jean-David Beyer <[email protected]> wrote:\nS Arvind wrote:\n\nHi everyone,\n      What is the best Linux flavor for server which runs postgres alone. The postgres must handle greater number of database around 200+. Performance on speed is the vital factor.\nIs it FreeBSD, CentOS, Fedora, Redhat xxx??\n\n-Arvind S\n\n\nI do not know the others, and doubt it makes much difference.\n\nMy feelings are that if you are running a server, you want a stable one, not\none that is continuously updated for latest and greatest features. Hence I\nwould rule out Fedora and its ilk.\n\nIf you run Redhat, I would advise the most recent; i.e., Red Hat Enterprise\nLinux 5, since they do not add any new features and only correct errors.\nCentOS is the same as Red Hat, but you probably get better support from Red\nHat if you need it -- though you pay for it.\n\n-- \n  .~.  
Jean-David Beyer          Registered Linux User 85642.\n  /V\\  PGP-Key: 9A2FC99A         Registered Machine   241939.\n /( )\\ Shrewsbury, New Jersey    http://counter.li.org\n ^^-^^ 05:55:01 up 14:54, 4 users, load average: 5.40, 5.25, 5.04\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 1 Oct 2009 15:40:43 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Matthew Wakeling wrote:\n\n> For starters, FreeBSD isn't Linux at all. Secondly, the other three \n> options you have listed are all Red Hat versions - not much variety there.\n\nThe main difference between those is that Fedora tries to be the latest and\ngreatest. This implies that you must reinstall or update about every six\nmonths -- because if you do not wish to do that, you would be running a more\nstable distribution.\n> \n> I know that some people swear by Red Hat, but I personally have had \n> nothing but trouble from such installations,\n\nI have no trouble with Red Hat Enterprise Linux or its equivalent, CentOS.\nHowever the following point is valid:\n\n> especially when trying to \n> upgrade to a newer version of Postgres.\n\nThe theory with the Red Hat Enterprise Linux distribution is that you run\nwith what comes with it. All the stuff that comes with it is guaranteed to\nwork together. Red Hat do not add features, change any interfaces, etc. Then\nthey support it for 7 years. I.e., if it works for you at the beginning, it\nwill work the entire 7 years if you wish.\n\nIf you want newer features, you must upgrade, as with other distributions,\nbut their upgrades come only about every year and a half, and if you do not\nneed the new features, you just do not bother. I started with RHEL 3,\nskipped RHEL 4 (except I run CentOS 4 on my old machine), and am now running\nRHEL 5. Consequently, I am running postgresql-8.1.11-1.el5_1.1 and it works\nfine, as it did when I started. They fix only errors, not performance\nproblems or new features.\n\n> We have just switched a machine \n> from Red Hat to Debian because of this very problem. I can heartily \n> recommend Debian, as it distributes new versions of Postgres very quickly \n> and allows you to continuously upgrade without any problems. For \n> comparison, with Red Hat, you will need to upgrade to a whole new \n> distribution whenever you want updated software, which is a much bigger \n> undertaking.\n> \n\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 06:05:01 up 15:04, 4 users, load average: 6.01, 5.69, 5.33\n", "msg_date": "Thu, 01 Oct 2009 06:14:40 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Thanks Jean,\n So from the discussion is it true that performance will be same across\nall newly upgraded linux is it?\n\nThanks,\nArvind S\n*\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison\n*\n\nOn Thu, Oct 1, 2009 at 3:44 PM, Jean-David Beyer <[email protected]>wrote:\n\n> Matthew Wakeling wrote:\n>\n> For starters, FreeBSD isn't Linux at all. 
Secondly, the other three\n>> options you have listed are all Red Hat versions - not much variety there.\n>>\n>\n> The main difference between those is that Fedora tries to be the latest and\n> greatest. This implies that you must reinstall or update about every six\n> months -- because if you do not wish to do that, you would be running a\n> more\n> stable distribution.\n>\n>>\n>> I know that some people swear by Red Hat, but I personally have had\n>> nothing but trouble from such installations,\n>>\n>\n> I have no trouble with Red Hat Enterprise Linux or its equivalent, CentOS.\n> However the following point is valid:\n>\n> especially when trying to upgrade to a newer version of Postgres.\n>>\n>\n> The theory with the Red Hat Enterprise Linux distribution is that you run\n> with what comes with it. All the stuff that comes with it is guaranteed to\n> work together. Red Hat do not add features, change any interfaces, etc.\n> Then\n> they support it for 7 years. I.e., if it works for you at the beginning, it\n> will work the entire 7 years if you wish.\n>\n> If you want newer features, you must upgrade, as with other distributions,\n> but their upgrades come only about every year and a half, and if you do not\n> need the new features, you just do not bother. I started with RHEL 3,\n> skipped RHEL 4 (except I run CentOS 4 on my old machine), and am now\n> running\n> RHEL 5. Consequently, I am running postgresql-8.1.11-1.el5_1.1 and it works\n> fine, as it did when I started. They fix only errors, not performance\n> problems or new features.\n>\n> We have just switched a machine from Red Hat to Debian because of this\n>> very problem. I can heartily recommend Debian, as it distributes new\n>> versions of Postgres very quickly and allows you to continuously upgrade\n>> without any problems. For comparison, with Red Hat, you will need to upgrade\n>> to a whole new distribution whenever you want updated software, which is a\n>> much bigger undertaking.\n>>\n>>\n>\n>\n> --\n> .~. Jean-David Beyer Registered Linux User 85642.\n> /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n> /( )\\ Shrewsbury, New Jersey http://counter.li.org\n> ^^-^^ 06:05:01 up 15:04, 4 users, load average: 6.01, 5.69, 5.33\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks Jean,     So from the discussion is it true that performance will be same across all newly upgraded linux is it?Thanks,Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"\n\n-Thomas Edison\nOn Thu, Oct 1, 2009 at 3:44 PM, Jean-David Beyer <[email protected]> wrote:\nMatthew Wakeling wrote:\n\n\nFor starters, FreeBSD isn't Linux at all. Secondly, the other three options you have listed are all Red Hat versions - not much variety there.\n\n\nThe main difference between those is that Fedora tries to be the latest and\ngreatest. 
This implies that you must reinstall or update about every six\nmonths -- because if you do not wish to do that, you would be running a more\nstable distribution.\n\n\nI know that some people swear by Red Hat, but I personally have had nothing but trouble from such installations,\n\n\nI have no trouble with Red Hat Enterprise Linux or its equivalent, CentOS.\nHowever the following point is valid:\n\n\nespecially when trying to upgrade to a newer version of Postgres.\n\n\nThe theory with the Red Hat Enterprise Linux distribution is that you run\nwith what comes with it. All the stuff that comes with it is guaranteed to\nwork together. Red Hat do not add features, change any interfaces, etc. Then\nthey support it for 7 years. I.e., if it works for you at the beginning, it\nwill work the entire 7 years if you wish.\n\nIf you want newer features, you must upgrade, as with other distributions,\nbut their upgrades come only about every year and a half, and if you do not\nneed the new features, you just do not bother. I started with RHEL 3,\nskipped RHEL 4 (except I run CentOS 4 on my old machine), and am now running\nRHEL 5. Consequently, I am running postgresql-8.1.11-1.el5_1.1 and it works\nfine, as it did when I started. They fix only errors, not performance\nproblems or new features.\n\n\nWe have just switched a machine from Red Hat to Debian because of this very problem. I can heartily recommend Debian, as it distributes new versions of Postgres very quickly and allows you to continuously upgrade without any problems. For comparison, with Red Hat, you will need to upgrade to a whole new distribution whenever you want updated software, which is a much bigger undertaking.\n\n\n\n\n\n-- \n  .~.  Jean-David Beyer          Registered Linux User 85642.\n  /V\\  PGP-Key: 9A2FC99A         Registered Machine   241939.\n /( )\\ Shrewsbury, New Jersey    http://counter.li.org\n ^^-^^ 06:05:01 up 15:04, 4 users, load average: 6.01, 5.69, 5.33\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 1 Oct 2009 16:03:24 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "> Hi everyone,\n>       What is the best Linux flavor for server which runs postgres alone.\n> The postgres must handle greater number of database around 200+. Performance\n> on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\nAs others mention FreeBSD is somewhat different from the others. I\npersonally prefer FreeBSD because that it what I do best. If you don't\nhave any prior experiences with FreeBSD/Linux spent some time\ninstalling them and install some ports/apps. 
Try to become aquainted\nwith the update tools using the command line interface, csup on\nFreeBSD, apt on debian/ubuntu.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Thu, 1 Oct 2009 12:37:10 +0200", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": ">-----Original Message-----\n>From: [email protected] [mailto:pgsql-performance-\n>\n>> Hi everyone,\n>>       What is the best Linux flavor for server which runs postgres alone.\n>> The postgres must handle greater number of database around 200+.\n>Performance\n>> on speed is the vital factor.\n>> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n>\n>As others mention FreeBSD is somewhat different from the others. I\n>personally prefer FreeBSD because that it what I do best. If you don't\n>have any prior experiences with FreeBSD/Linux spent some time\n>installing them and install some ports/apps. Try to become aquainted\n>with the update tools using the command line interface, csup on\n>FreeBSD, apt on debian/ubuntu.\n\nI'm running Postgres on NetBSD and RHEL4. I haven't noticed any particular differences in Postgres performance due to the OS, but then again I haven't performed any kind of formal benchmarks, nor am I really stressing the database all that much (most of the time).\nMy preference for OS to run is NetBSD, because I'm most familiar with it and there have been some fairly significant recent focus on performance improvements. If you're really worried about getting the best performance I think you're just going to have to try a few different OSes and see if you notice a difference.\n\nbtw, do you mean 200+ databases in a single postgres server, or that many different postgres servers? Running 200 different servers sounds like it might be problematic on any OS due to the amount of shared memory that'll need to be allocated.\n\neric\n", "msg_date": "Thu, 1 Oct 2009 09:56:40 -0500", "msg_from": "\"Haszlakiewicz, Eric\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Jean-David Beyer <[email protected]> writes:\n> The theory with the Red Hat Enterprise Linux distribution is that you run\n> with what comes with it. All the stuff that comes with it is guaranteed to\n> work together. Red Hat do not add features, change any interfaces, etc. Then\n> they support it for 7 years. I.e., if it works for you at the beginning, it\n> will work the entire 7 years if you wish.\n\nYeah, RHEL is intended to be a stable application platform: once you set\nup your server, it will \"just keep working\"; you should not have to\nworry whether updates will break your application.\n\nIt is not entirely a coincidence that this is exactly the attitude\nPostgres takes towards our back branches ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Oct 2009 11:12:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS " }, { "msg_contents": "Eric thanks. 
And its not 200 differnet server , its only single pg8.3\nhandling 200+ dbs.\n\nArvind S\n\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison\n\n\nOn Thu, Oct 1, 2009 at 8:26 PM, Haszlakiewicz, Eric\n<[email protected]>wrote:\n\n> >-----Original Message-----\n> >From: [email protected] [mailto:pgsql-performance-\n> >\n> >> Hi everyone,\n> >> What is the best Linux flavor for server which runs postgres\n> alone.\n> >> The postgres must handle greater number of database around 200+.\n> >Performance\n> >> on speed is the vital factor.\n> >> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n> >\n> >As others mention FreeBSD is somewhat different from the others. I\n> >personally prefer FreeBSD because that it what I do best. If you don't\n> >have any prior experiences with FreeBSD/Linux spent some time\n> >installing them and install some ports/apps. Try to become aquainted\n> >with the update tools using the command line interface, csup on\n> >FreeBSD, apt on debian/ubuntu.\n>\n> I'm running Postgres on NetBSD and RHEL4. I haven't noticed any particular\n> differences in Postgres performance due to the OS, but then again I haven't\n> performed any kind of formal benchmarks, nor am I really stressing the\n> database all that much (most of the time).\n> My preference for OS to run is NetBSD, because I'm most familiar with it\n> and there have been some fairly significant recent focus on performance\n> improvements. If you're really worried about getting the best performance I\n> think you're just going to have to try a few different OSes and see if you\n> notice a difference.\n>\n> btw, do you mean 200+ databases in a single postgres server, or that many\n> different postgres servers? Running 200 different servers sounds like it\n> might be problematic on any OS due to the amount of shared memory that'll\n> need to be allocated.\n>\n> eric\n>\n\nEric thanks. And its not 200 differnet server , its only single pg8.3 handling 200+ dbs.Arvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"\n\n-Thomas Edison\nOn Thu, Oct 1, 2009 at 8:26 PM, Haszlakiewicz, Eric <[email protected]> wrote:\n>-----Original Message-----\n>From: [email protected] [mailto:pgsql-performance-\n>\n>> Hi everyone,\n>>       What is the best Linux flavor for server which runs postgres alone.\n>> The postgres must handle greater number of database around 200+.\n>Performance\n>> on speed is the vital factor.\n>> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n>\n>As others mention FreeBSD is somewhat different from the others. I\n>personally prefer FreeBSD because that it what I do best. If you don't\n>have any prior experiences with FreeBSD/Linux spent some time\n>installing them and install some ports/apps. Try to become aquainted\n>with the update tools using the command line interface, csup on\n>FreeBSD, apt on debian/ubuntu.\n\nI'm running Postgres on NetBSD and RHEL4.  I haven't noticed any particular differences in Postgres performance due to the OS, but then again I haven't performed any kind of formal benchmarks, nor am I really stressing the database all that much (most of the time).\n\n\nMy preference for OS to run is NetBSD, because I'm most familiar with it and there have been some fairly significant recent focus on performance improvements.  
If you're really worried about getting the best performance I think you're just going to have to try a few different OSes and see if you notice a difference.\n\nbtw, do you mean 200+ databases in a single postgres server, or that many different postgres servers?   Running 200 different servers sounds like it might be problematic on any OS due to the amount of shared memory that'll need to be allocated.\n\neric", "msg_date": "Thu, 1 Oct 2009 23:16:03 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Thu, 1 Oct 2009, Matthew Wakeling wrote:\n\n> For comparison, with Red Hat, you will need to upgrade to a whole new \n> distribution whenever you want updated software, which is a much bigger \n> undertaking.\n\nThis is somewhat true for larger packages, but it's not the case for \nPostgreSQL. You certainly can grab newer RPMs from \nhttps://projects.commandprompt.com/public/pgcore and install them. Those \npackages are at least as current as their Debian counterparts, and in some \ncases the RPMs have been months ahead (I recall there being quite a lag \nbefore Debian supported PG 8.3 for example).\n\nIt can be a bit tricky to replace the RHEL version of PostgreSQL with \nthose, I wrote a walkthrough that covers the non-obvious parts at \nhttp://www.westnet.com/~gsmith/content/postgresql/pgrpm.htm\n\nThe result won't be officially supported by RedHat, but in practice that's \nno worse than what you get from the Debian versions.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 1 Oct 2009 14:03:36 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Thu, 1 Oct 2009, S Arvind wrote:\n\n> What is the best Linux flavor for server which runs postgres alone. The \n> postgres must handle greater number of database around 200+. Performance \n> on speed is the vital factor.\n\nGenerally the fastest Linux distribution is whichever one is built using \nthe most recent Linux kernel. The downside of that is that the latest \nkernel versions are likely to have nasty bugs in them.\n\nRedHat and CentOS are both based on the 2.6.18 kernel, with some pieces of \nlater ones patched in there too. That's pretty old at this point. What I \ndo on a lot of systems is install RHEL/CentOS, then compile my own kernel \nstarting with the same options RedHat did and use that one. Then I can \nadjust exactly how close I am to the latest Linux kernel while still \ngetting the benefit of the stable package set the rest of the distribution \noffers. Right now I'm using 2.6.30 on a few such systems, that's one rev \nback from current (2.6.31 is the latest stable kernel release but it's \nstill scary new).\n\nThe standard kernel on current Ubuntu systems right now is based on \n2.6.28, that performs pretty well too.\n\nIf what you want is optimized speed and you don't care about any other \ntrade-off, Gentoo Linux is probably what you want. You better make sure \nyou have considerable Linux support expertise available for your project \nthough. That's probably true of any high-performance setup though. 
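A rough sketch of the build-your-own-kernel approach described just above, in case it helps anyone; the kernel version and download URL are only examples, and the new grub.conf entry still has to be added by hand afterwards:

 $ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.30.tar.bz2
 $ tar xjf linux-2.6.30.tar.bz2 && cd linux-2.6.30
 $ cp /boot/config-$(uname -r) .config   # start from the distro's own kernel options
 $ make oldconfig                        # answer the prompts for options added since 2.6.18
 $ make && make modules_install && make install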
Many \nof the ways you can make a database system faster are complicated to setup \nand require more work to keep going than if you just settled for the \nslower but more popular implementation.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 1 Oct 2009 14:16:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Thu, 1 Oct 2009, S Arvind wrote:\n\n> Hi everyone,\n> What is the best Linux flavor for server which runs postgres alone.\n> The postgres must handle greater number of database around 200+. Performance\n> on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\nas noted by others *BSD is not linux\n\namong the linux options, the best option is the one that you as a company \nare most comfortable with (and have the support/upgrade processes in place \nfor)\n\nin general, the newer the kernel the better things will work, but it's far \nbetter to have an 'old' system that your sysadmins understand well and can \nsupport easily than a 'new' system that they don't know well and therefor \nhave trouble supporting.\n\nDavid Lang\n", "msg_date": "Thu, 1 Oct 2009 12:10:35 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "I'm a BSD license fan, but, I don't know much about *BSD otherwise (except\nthat many advocates say it runs PG very nicely).\nOn the Linux side, unless your a dweeb, go with a newer, popular & well\nsupported release for Production. IMHO, that's RHEL 5.x or CentOS 5.x. Of\ncourse the latest SLES & UBuntu schtuff are also fine.\n\nIn other words, unless you've got a really good reason for it, stay away\nfrom Fedora & OpenSuse for production usage.\n\nOn Thu, Oct 1, 2009 at 3:10 PM, <[email protected]> wrote:\n\n> On Thu, 1 Oct 2009, S Arvind wrote:\n>\n> Hi everyone,\n>> What is the best Linux flavor for server which runs postgres alone.\n>> The postgres must handle greater number of database around 200+.\n>> Performance\n>> on speed is the vital factor.\n>> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n>>\n>\n> as noted by others *BSD is not linux\n>\n> among the linux options, the best option is the one that you as a company\n> are most comfortable with (and have the support/upgrade processes in place\n> for)\n>\n> in general, the newer the kernel the better things will work, but it's far\n> better to have an 'old' system that your sysadmins understand well and can\n> support easily than a 'new' system that they don't know well and therefor\n> have trouble supporting.\n>\n> David Lang\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI'm a BSD license fan, but, I don't know much about *BSD otherwise (except that many advocates say it runs PG very nicely).On the Linux side, unless your a dweeb, go with a newer, popular & well supported release for Production.  IMHO, that's RHEL 5.x or CentOS 5.x.  
Of course the latest SLES & UBuntu schtuff are also fine.\nIn other words, unless you've got a really good reason for it, stay away from Fedora & OpenSuse for production usage.On Thu, Oct 1, 2009 at 3:10 PM, <[email protected]> wrote:\nOn Thu, 1 Oct 2009, S Arvind wrote:\n\n\nHi everyone,\n     What is the best Linux flavor for server which runs postgres alone.\nThe postgres must handle greater number of database around 200+. Performance\non speed is the vital factor.\nIs it FreeBSD, CentOS, Fedora, Redhat xxx??\n\n\nas noted by others *BSD is not linux\n\namong the linux options, the best option is the one that you as a company are most comfortable with (and have the support/upgrade processes in place for)\n\nin general, the newer the kernel the better things will work, but it's far better to have an 'old' system that your sysadmins understand well and can support easily than a 'new' system that they don't know well and therefor have trouble supporting.\n\nDavid Lang\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 1 Oct 2009 15:44:09 -0400", "msg_from": "Denis Lussier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Thu, Oct 1, 2009 at 3:46 AM, S Arvind <[email protected]> wrote:\n> Hi everyone,\n>       What is the best Linux flavor for server which runs postgres alone.\n> The postgres must handle greater number of database around 200+. Performance\n> on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\nYou say you want speed, but I'm betting stability is more important\nthan speed, as a machine that crashes several times a month but is\nlightening fast is usually a bad choice for a db server.\n\nI run Centos 5.3 with an older kernel. There's a bug in the areca\ndrivers after the 2.6.18-92.el5 kernel that redhat has no apparent\ninterest in fixing. But with that kernel, I have a machine that's\npretty fast and stable:\n\nuname -a\nLinux db1 2.6.18-92.el5 #1 SMP Tue Jun 10 18:51:06 EDT 2008 x86_64\nx86_64 x86_64 GNU/Linux\nuptime\n 13:44:38 up 416 days, 28 min, 6 users, load average: 28.94, 31.93, 32.50\n\nIt's twin was the one I tested the newer kernel on and had the hangs\nwith. It now runs the same older kernel as well.\n", "msg_date": "Thu, 1 Oct 2009 13:46:16 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "> > Hi everyone,\n> >       What is the best Linux flavor for server\n> which runs postgres alone.\n> > The postgres must handle greater number of database\n> around 200+. Performance\n> > on speed is the vital factor.\n> > Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\n\nI see nobody suggesting Solaris... ZFS is supposed to be a very nice FS...\n\n\n \n", "msg_date": "Fri, 2 Oct 2009 01:21:52 -0700 (PDT)", "msg_from": "Scara Maccai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "> I see nobody suggesting Solaris... ZFS is supposed to be a\n> very nice FS...\n\n(of course, it's not a linux flavor...) 
\n\n\n \n", "msg_date": "Fri, 2 Oct 2009 02:28:17 -0700 (PDT)", "msg_from": "Scara Maccai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Thu, 1 Oct 2009, Greg Smith wrote:\n> On Thu, 1 Oct 2009, Matthew Wakeling wrote:\n>> For comparison, with Red Hat, you will need to upgrade to a whole new \n>> distribution whenever you want updated software, which is a much bigger \n>> undertaking.\n>\n> This is somewhat true for larger packages, but it's not the case for \n> PostgreSQL. You certainly can grab newer RPMs from \n> https://projects.commandprompt.com/public/pgcore and install them.\n\nThe reason we switched that machine to Debian was due to the \npostgresql-devel package being missing for Red Hat. We need that package \nin order to install some of our more interesting extensions. A quick look \nat http://yum.pgsqlrpms.org/8.4/fedora/fedora-9-x86_64/ indicates that \nthis package is still missing. Because Debian includes Postgres in its \nmain package repository and uses automated build tools, Debian is unlikely \nto suffer the same fate.\n\nMatthew\n\n-- \n I'm always interested when [cold callers] try to flog conservatories.\n Anyone who can actually attach a conservatory to a fourth floor flat\n stands a marginally better than average chance of winning my custom.\n (Seen on Usenet)\n", "msg_date": "Fri, 2 Oct 2009 12:39:39 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Fri, 2009-10-02 at 12:39 +0100, Matthew Wakeling wrote:\n> The reason we switched that machine to Debian was due to the \n> postgresql-devel package being missing for Red Hat. We need that\n> package in order to install some of our more interesting extensions. A\n> quick look at http://yum.pgsqlrpms.org/8.4/fedora/fedora-9-x86_64/\n> indicates that this package is still missing.\n\nNeither Red Hat, nor this repository is missing that package. The link\nyou gave is the only exception in 14 different combinations, because of\na wrong major cleanup that I performed 2 weeks ago. I re-uploaded -devel\nto the repository.\n\nhttp://mirror.centos.org/centos/5.3/os/x86_64/CentOS/postgresql-devel-8.1.11-1.el5_1.1.i386.rpm\n\nSee this as an example to Red Hat (and CentOS), \n\n> Because Debian includes Postgres in its \n> main package repository and uses automated build tools, Debian is\n> unlikely to suffer the same fate.\n\nThis is nonsense. Both Debian, Fedora, Red Hat, CentOS and OpenSUSE has\nautomated build tools and QA.\n-- \nDevrim GÜNDÜZ, RHCE\nCommand Prompt - http://www.CommandPrompt.com \ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Fri, 02 Oct 2009 15:30:29 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> The reason we switched that machine to Debian was due to the \n> postgresql-devel package being missing for Red Hat. We need that package \n> in order to install some of our more interesting extensions. A quick look \n> at http://yum.pgsqlrpms.org/8.4/fedora/fedora-9-x86_64/ indicates that \n> this package is still missing.\n\nYou switched OSes instead of complaining to the repository maintainer\nthat he'd forgotten a subpackage? 
You must have a lot of time on your\nhands.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Oct 2009 10:06:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS " }, { "msg_contents": "S Arvind wrote:\n> Hi everyone,\n> What is the best Linux flavor for server which runs postgres \n> alone. The postgres must handle greater number of database around \n> 200+. Performance on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n>\n> -Arvind S\nWe use Arch Linux and love it. It does not have \"versions\" - you just \nkeep updating your install and never have to do a major version \nupgrade. It is a bare bones distribution with excellent package \nmanagement and repositories, virtually no distribution cruft, and a \nfantastic community/wiki/forum.\n\nAs a warning no one offers support for Arch that I know of and the \npackages are generally very current with the latest which is both a good \nand bad thing. For a production environment you have to be very careful \nabout when you do upgrades and preferably can test upgrades on QA \nmachines before running on production. You also want to make sure and \nexclude postgresql from updates so that it doesn't do something like \npull down 8.4 over an 8.3.x installation without you being backed up and \nready to restore. PostgreSQL is currently at 8.4.1 in their repositories.\n\nWith that disclaimer out of the way it is my favorite Linux distribution \nand I am running it on a couple dozens servers at the moment ranging \nfrom puny app servers to 8 core, 32GB+ RAM, 30-40 disk database \nservers. If you are comfortable with Linux it is worth checking out (on \nyour personal machine or QA environment first). I've run dozens of \ndistributions and this works well for us (a startup with nontrivial \nLinux experience). I imagine at a larger company it definitely would \nnot be an option.\n\nJoe Uhl\n", "msg_date": "Fri, 02 Oct 2009 10:23:09 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Fri, 2 Oct 2009, Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> The reason we switched that machine to Debian was due to the\n>> postgresql-devel package being missing for Red Hat. We need that package\n>> in order to install some of our more interesting extensions. A quick look\n>> at http://yum.pgsqlrpms.org/8.4/fedora/fedora-9-x86_64/ indicates that\n>> this package is still missing.\n>\n> You switched OSes instead of complaining to the repository maintainer\n> that he'd forgotten a subpackage? You must have a lot of time on your\n> hands.\n\nCamel's back, straw.\n\nBesides, both I and our sysadmin are much more used to Debian. We were \ndealing with an old install of RH from our old sysadmin and couldn't be \nbothered to work out the Red Hat Way(tm). Much easier to un-switch OSes.\n\nMatthew\n\n-- \n I work for an investment bank. I have dealt with code written by stock\n exchanges. I have seen how the computer systems that store your money are\n run. If I ever make a fortune, I will store it in gold bullion under my\n bed. 
-- Matthew Crosby\n", "msg_date": "Fri, 2 Oct 2009 15:23:23 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS " }, { "msg_contents": "On 10/02/2009 10:23 AM, Matthew Wakeling wrote:\n> On Fri, 2 Oct 2009, Tom Lane wrote:\n>> You switched OSes instead of complaining to the repository maintainer\n>> that he'd forgotten a subpackage? You must have a lot of time on your\n>> hands.\n>\n> Camel's back, straw.\n>\n> Besides, both I and our sysadmin are much more used to Debian. We were \n> dealing with an old install of RH from our old sysadmin and couldn't \n> be bothered to work out the Red Hat Way(tm). Much easier to un-switch \n> OSes.\n\n... until you move on and leave the company with some hacked up Debian \ninstalls that nobody knows how to manage.\n\nJust throwing that out there. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Fri, 02 Oct 2009 11:41:51 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "* Mark Mielke <[email protected]> [091002 11:41]:\n\n> ... until you move on and leave the company with some hacked up Debian \n> installs that nobody knows how to manage.\n\nCould be worse, they could leave a Redhat/CentOS box that *can't* be\nmanaged\n\nemacs anyone?\n\n/duck and run, promising not to post on this again....\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Fri, 2 Oct 2009 11:43:59 -0400", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS - now off topic" }, { "msg_contents": "On Thu, Oct 1, 2009 at 4:46 AM, S Arvind <[email protected]> wrote:\n>\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\nFreeBSD isn't Linux.\nDon't run Fedora, it undergoes way too much Churn.\nNo real difference between CentOS and RedHat.\n\nI personally prefer openSUSE (or SLES/SLED if you want their\ncommerical offering). I find it faster, more up-to-date (but no\n\"churn\"), in general higher quality. I find postgresql *substantially*\nfaster on openSUSE than CentOS, but that's purely anecdotal and I\ndon't have any raw numbers to compare.\n\nopenSUSE 11.1 has 8.3.8 and 11.2 (not out yet - a few months) will have 8.4.X.\n\n-- \nJon\n", "msg_date": "Fri, 2 Oct 2009 11:36:21 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Thu, Oct 1, 2009 at 4:46 AM, S Arvind <[email protected]> wrote:\n> Hi everyone,\n>       What is the best Linux flavor for server which runs postgres alone.\n> The postgres must handle greater number of database around 200+. Performance\n> on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\nFreeBSD isn't Linux.\n\nI don't recommend that you run Fedora, it undergoes way too much churn.\n\nI don't find any real difference between CentOS and RedHat.\n\nI personally prefer openSUSE (or SLES/SLED if you want their\ncommerical offering). I find it faster, more up-to-date (but no\n\"churn\"), and in general higher quality - \"it just works\". 
I find\npostgresql *substantially* faster on openSUSE than CentOS, but that's\npurely anecdotal and I don't have any raw numbers to compare.\n\nopenSUSE 11.1 has 8.3.8 and 11.2 (not out yet - a few months) will have 8.4.X.\n\n--\nJon\n", "msg_date": "Fri, 2 Oct 2009 12:14:54 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Fri, 2 Oct 2009, Matthew Wakeling wrote:\n\n> The reason we switched that machine to Debian was due to the \n> postgresql-devel package being missing for Red Hat. We need that package \n> in order to install some of our more interesting extensions. A quick \n> look at http://yum.pgsqlrpms.org/8.4/fedora/fedora-9-x86_64/ indicates \n> that this package is still missing.\n\nSo you mention that on pgsql-performance, not even close to the most \npopular list here, and before I even read your e-mail the missing file is \nalready fixed because the maintainer reads this and noted a mistake (for \nthat old and and what is officially a quite unsupported Fedora version). \nThat seems like a pretty well supported PostgreSQL package set to me, no?\n\n> Besides, both I and our sysadmin are much more used to Debian. We were \n> dealing with an old install of RH from our old sysadmin and couldn't be \n> bothered to work out the Red Hat Way(tm). Much easier to un-switch OSes.\n\nAh, here we have your real reason. It's OK to say \"I don't like the Red \nHat Way and am more used to Debian\" and have reasons for why that is. \nYou don't need to sling FUD about the things you didn't even try seriously \nto support that. Packaging is hard work, and I've gotten one-off bad \npackages from everybody at some point, including Debian derived ones. \nThey don't have a magic wand that makes their packages immune to all \npossible human error for things the automated tests don't check.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 2 Oct 2009 13:17:48 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS " }, { "msg_contents": "On Thu, Oct 1, 2009 at 5:46 AM, S Arvind <[email protected]> wrote:\n> Hi everyone,\n>       What is the best Linux flavor for server which runs postgres alone.\n> The postgres must handle greater number of database around 200+. Performance\n> on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n\nI think for linux centos/redhat is probably the best bet. Debian\nwould be a good choice too especially if you have developers using\ndebian based system (like ubuntu) on the desktop. redhat gets the nod\nbecause of hardware support and they are the most serious about\nbackpatching fixes on enterprise releases.\n\nI know I'm in the minority here, but I _always_ compile postgresql\nmyself directly from official sources. It's easy enough and you never\nknow when you have to do an emergency patch or cassert build, etc.\n\nmerlin\n", "msg_date": "Fri, 2 Oct 2009 13:20:17 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Fri, 2 Oct 2009, Jon Nelson wrote:\n\n> I personally prefer openSUSE (or SLES/SLED if you want their\n> commerical offering). I find it faster, more up-to-date (but no\n> \"churn\"), and in general higher quality - \"it just works\". 
I find\n> postgresql *substantially* faster on openSUSE than CentOS, but that's\n> purely anecdotal and I don't have any raw numbers to compare.\n> openSUSE 11.1 has 8.3.8 and 11.2 (not out yet - a few months) will have 8.4.X.\n\nAs I was saying upthread, statements like this all need to be disclaimed \nwith the relative default kernel versions involved.\n\nopenSUSE 11.1=Kernel 2.6.27\nRHEL5=Kernel 2.6.18\n\nYou don't need to provide numbers; that kernel sure is faster. And I can \ngrab 2.6.27 or later for my CentOS systems, too, if I'm willing to live \noutside of the supported envelope.\n\nSUSE SLED also uses the 2.6.27 kernel, which wasn't too bleeding edge in \nMarch 2008 when SLED 11 came out. RHEL5 came out in March 2007. If RHEL6 \ncomes out before SUSE 12, they'll leapfrog ahead for a while. With \nversions aimed to live 5 years, you have to be aware of where in that \ncycle you are at any time, and right now RHEL5 is halfway through its \nlifetime already--and accordingly a bit behind something that came out a \nyear later as far as performance goes.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 2 Oct 2009 13:34:53 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Fri, 2 Oct 2009, Merlin Moncure wrote:\n\n> I know I'm in the minority here, but I _always_ compile postgresql\n> myself directly from official sources. It's easy enough and you never\n> know when you have to do an emergency patch or cassert build, etc.\n\nThat requires one take all of the security update responsibility yourself, \nwhich isn't a trade-off some people want. In a lot of situations, it's \njust plain easier to take minor point release updates through the main \npackage manager when there's a bug fix release, knowing that the project \npolicies will never slip something other than bug fixes into that. I \nthink many users will never do a patched build or even know what cassert \ntoggles, and really they'd shouldn't have to.\n\nThe trick I suggest people who use packaged builds get familiar with is \nknowing that if you run pg_config and look for the CONFIGURE line, you'll \nfind out exactly what options were used by the builder of the package you \nhave, when they compiled the server from source for that package. If \nyou've been using a packaged build, but now discovered you need to use a \nsource one instead, you should be able to grab the source, compile with \nthose same options, and get a compatible server out of it.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 2 Oct 2009 13:43:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Merlin Moncure <[email protected]> wrote: \n \n> I know I'm in the minority here, but I _always_ compile postgresql\n> myself directly from official sources. It's easy enough and you\n> never know when you have to do an emergency patch or cassert build,\n> etc.\n \nA minority, perhaps; but I'm there with you. ;-)\n \n-Kevin\n", "msg_date": "Fri, 02 Oct 2009 14:00:09 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On 10/02/2009 01:20 PM, Merlin Moncure wrote:\n> I know I'm in the minority here, but I _always_ compile postgresql\n> myself directly from official sources. 
It's easy enough and you never\n> know when you have to do an emergency patch or cassert build, etc.\n> \n\n+1\n\nI decided to do this as soon as I figured out that configuration options \n(such as how datetime is stored - float vs integer) could change from \nrelease to release. I much prefer to keep a stable PostgreSQL, and \nupgrade the OS underneath it. It's been several years with this model, \nand it's always been very simple to maintain. I recently documented the \ninstructions for another team and they fit within about 10 lines that \ncould be cut + pasted.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Fri, 02 Oct 2009 15:15:21 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> The trick I suggest people who use packaged builds get familiar with is \n> knowing that if you run pg_config and look for the CONFIGURE line, you'll \n> find out exactly what options were used by the builder of the package you \n> have, when they compiled the server from source for that package. If \n> you've been using a packaged build, but now discovered you need to use a \n> source one instead, you should be able to grab the source, compile with \n> those same options, and get a compatible server out of it.\n\nOne other point worth making here is that there are *no* open source\nOS distributions that intend to make package building difficult or\narcane, or that won't give you the full sources for what they built.\nIt's worth your time to learn how to do this on whatever system you\nprefer to use. Then, if you're ever in a situation where you really\nneed patch XYZ right now, you can easily add that patch to the package\nsources and rebuild a custom version that will play nicely within the\ndistro's package system --- right up to being automatically replaced\nwhen the next regular release does come out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Oct 2009 17:38:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> It's worth your time to learn how to do this on whatever system you\n> prefer to use. Then, if you're ever in a situation where you really\n> need patch XYZ right now, you can easily add that patch to the package\n> sources and rebuild a custom version that will play nicely within the\n> distro's package system --- right up to being automatically replaced\n> when the next regular release does come out.\n\nI recently had to do just that (local fix a contrib module failure,\npg_freespacemap). It looks like this when you're using debian:\n\n $ apt-get source postgresql-8.3\n $ cd postgresql-8.3-8.3.7\n $ cp /path/to/patch.diff debian/patches/13-pg_fsm.diff --- pick yourself next available id\n $ $EDITOR debian/changelog\n $ debuild\n\nNow you have a new .deb you want to distribute for upgrades (targeting\npreprod first, of course). 
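As an aside on the pg_config tip a couple of messages back: a minimal sketch of re-creating a packaged server from source with the same build options. The flags and prefix shown are only an example of what a packaged build might report; check your own pg_config output.

 $ pg_config --configure
 '--prefix=/usr' '--enable-integer-datetimes' '--with-openssl' '--with-perl'
 $ ./configure --prefix=/usr --enable-integer-datetimes --with-openssl --with-perl
 $ make && make install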
The changelog editing is best done knowing\nwhat is a NMU and how to represent it there, and how version sorting\nworks; that's the key to automatic overwrite at next official minor\nupgrade, and it allows to put the version under your name, so that you\ncan GnuPG sign the packages at the end of the debuild process.\n\n http://www.debian.org/doc/developers-reference/pkgs.html#nmu\n\nRegards,\n-- \ndim\n", "msg_date": "Fri, 02 Oct 2009 23:58:15 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Denis Lussier wrote:\n> I'm a BSD license fan, but, I don't know much about *BSD otherwise\n> (except that many advocates say it runs PG very nicely).\n>\n> On the Linux side, unless your a dweeb, go with a newer, popular &\n> well supported release for Production. IMHO, that's RHEL 5.x or\n> CentOS 5.x. Of course the latest SLES & UBuntu schtuff are also fine.\n>\n> In other words, unless you've got a really good reason for it, stay\n> away from Fedora & OpenSuse for production usage.\n>\n> On Thu, Oct 1, 2009 at 3:10 PM, <[email protected] <mailto:[email protected]>>\n> wrote:\n>\n> On Thu, 1 Oct 2009, S Arvind wrote:\n>\n> Hi everyone,\n> What is the best Linux flavor for server which runs\n> postgres alone.\n> The postgres must handle greater number of database around\n> 200+. Performance\n> on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n>\n>\n> as noted by others *BSD is not linux\n>\n> among the linux options, the best option is the one that you as a\n> company are most comfortable with (and have the support/upgrade\n> processes in place for)\n>\n> in general, the newer the kernel the better things will work, but\n> it's far better to have an 'old' system that your sysadmins\n> understand well and can support easily than a 'new' system that\n> they don't know well and therefor have trouble supporting.\n>\n> David Lang\n>\n>\nI am a particular fan of FreeBSD, and in some benchmarking I did between\nit and CentOS FreeBSD 7.x literally wiped the floor with the CentOS\nrelease I tried on IDENTICAL hardware. \nI also like the 3ware raid coprocessors - they work well, are fast, and\nI've had zero trouble with them.\n\n-- Karl\n\n\n\n\n\n\n\nDenis Lussier wrote:\nI'm a BSD license fan, but, I don't know much about *BSD\notherwise (except that many advocates say it runs PG very nicely).\n \n\nOn the Linux side, unless your a dweeb, go with a newer, popular\n& well supported release for Production.  IMHO, that's RHEL 5.x or\nCentOS 5.x.  
Of course the latest SLES & UBuntu schtuff are also\nfine.\n\n\nIn other words, unless you've got a really good reason for it,\nstay away from Fedora & OpenSuse for production usage.\n\nOn Thu, Oct 1, 2009 at 3:10 PM, <[email protected]>\nwrote:\nOn\nThu, 1 Oct 2009, S Arvind wrote:\n\n\nHi everyone,\n    What is the best Linux flavor for server which runs postgres alone.\nThe postgres must handle greater number of database around 200+.\nPerformance\non speed is the vital factor.\nIs it FreeBSD, CentOS, Fedora, Redhat xxx??\n\n\nas noted by others *BSD is not linux\n\namong the linux options, the best option is the one that you as a\ncompany are most comfortable with (and have the support/upgrade\nprocesses in place for)\n\nin general, the newer the kernel the better things will work, but it's\nfar better to have an 'old' system that your sysadmins understand well\nand can support easily than a 'new' system that they don't know well\nand therefor have trouble supporting.\n\nDavid Lang\n\n\n\n\n\nI am a particular fan of FreeBSD, and in some benchmarking I did\nbetween it and CentOS FreeBSD 7.x literally wiped the floor with the\nCentOS release I tried on IDENTICAL hardware.  \nI also like the 3ware raid coprocessors - they work well, are fast, and\nI've had zero trouble with them.\n\n-- Karl", "msg_date": "Sat, 03 Oct 2009 21:35:26 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On 10/01/2009 03:44 PM, Denis Lussier wrote:\n> I'm a BSD license fan, but, I don't know much about *BSD otherwise \n> (except that many advocates say it runs PG very nicely).\n>\n> On the Linux side, unless your a dweeb, go with a newer, popular & \n> well supported release for Production. IMHO, that's RHEL 5.x or \n> CentOS 5.x. Of course the latest SLES & UBuntu schtuff are also fine.\n>\n> In other words, unless you've got a really good reason for it, stay \n> away from Fedora & OpenSuse for production usage.\n\nLots of conflicting opinions and results in this thread. Also, a lot of \nhand waving and speculation. :-)\n\nRHEL and CentOS are particular bad *right now*. See here:\n http://en.wikipedia.org/wiki/RHEL\n http://en.wikipedia.org/wiki/CentOS\n\nFor RHEL, look down to \"Release History\" and RHEL 5.3 based on \nLinux-2.6.18, released March, 2007. On the CentOS page you'll see it is \ndated April, 2007. CentOS is identical to RHEL on purpose, but always 1 \nto 6 months after the RHEL, since they take the RHEL source, re-build \nit, and then re-test it.\n\nLinux is up to Linux-2.6.31.1 right now:\n http://www.kernel.org/\n\nSo any comparisons between operating system *distributions* should be \nfair. Comparing a 2007 release to a 2009 release, for example, is not \nfair. RHEL / CentOS are basically out of the running right now, because \nthey are so old. However, RHEL 6.0 should be out in January or February, \n2010, at which point it will be relevant again.\n\nPersonally, I use Fedora, and my servers have been quite stable. One of \nour main web servers running Fedora:\n\n[mark@bambi]~% uptime\n 09:45:41 up 236 days, 10:07, 1 user, load average: 0.02, 0.04, 0.08\n\nIt was last rebooted as a scheduled reboot, not a crash. This isn't to \nsay Fedora will be stable for you - however, having used both RHEL and \nFedora, in many different configurations, I find RHEL being 2+ years \nbehind in terms of being patch-current means that it is NOT as stable on \nmodern hardware. 
Most recently, I installed RHEL on a 10 machine cluster \nof HP nodes, and RHEL would not detect the SATA controller out of the \nbox and reverted to some base PIO mode yielding 2 Mbyte/s disk speed. \nFedora was able to achieve 112 Mbyte/s on the same disks. Some twiddling \nof grub.conf allowed RHEL to achieve the same speed, but the point is \nthat there are two ways to de-stabilize a kernel. One is to use the \nleading edge, the other is to use the trailing edge. Using an operating \nsystem designed for 2007 on hardware designed in 2009 is a bad idea. \nUsing an operating system designed for 2009 on 2007 hardware might also \nbe a bad idea. Using a modern operating system on modern hardware that \nthe operating system was designed for will give you the best performance \npotential. In this case, Fedora 11 with Linux 2.6.30.8 is almost \nguaranteed to out-perform RHEL 5.3 with Linux 2.6.18. Where Fedora is \nwithin 1 to 6 months of the leading edge, Ubuntu is within 3 to 9 months \nof the leading edge, so Ubuntu will perform more similar to Fedora than \nRHEL.\n\nI've given up on the OS war. People use what they are comfortable with. \nComfort lasts until the operating systems screws a person over, at which \npoint they \"hate\" it, and switch to something else. It's about passion - \nnot about actual metrics, capability, reliability, or any of these other \nsupposed criteria. In my case, I'm comfortable with RedHat, because I've \nused it since the '90s, and because I've seen how they hire some of the \nbest open source developers, and contribute quality releases back to the \ncommunity, specifically including our very own Tom Lane and until \nrecently Alan Cox. Many of their employees have high fame in the open \nsource / Linux arena. As a result, my passion is for RedHat-based \nreleases. As I describe earlier, RHEL is too old right now - and I am \nlooking forward to RHEL 6.0 catching up to the rest of the world again. \nI've found Fedora to be a great alternative when I do need to be on the \nleading edge.\n\nSo - use what you want - but try not to pretend this isn't about \npassion. Even between BSD and Linux, I understand they re-use drivers, \nor at least knowledge. It's software that runs your computer. If one OS \ncan give you +2% performance over another for a 3 month period - big \ndeal - the available hardware can get a lot better than that +2% by \nspending a tiny bit more money, or just waiting 3 months. In the case of \nRHEL, waiting about 4 months will give you RHEL 6.0 which will be within \n3 to 9 months of the leading edge. If you are planning a new deployment \n- this might be something to consider. I suggest not going with RHEL 5 \nat this time...\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Sun, 04 Oct 2009 10:05:02 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Sun, 2009-10-04 at 10:05 -0400, Mark Mielke wrote:\n> \n> RHEL and CentOS are particular bad *right now*. See here:\n> http://en.wikipedia.org/wiki/RHEL\n> http://en.wikipedia.org/wiki/CentOS\n> \n> For RHEL, look down to \"Release History\" and RHEL 5.3 based on \n> Linux-2.6.18, released March, 2007. On the CentOS page you'll see it\n> is \n> dated April, 2007. 
CentOS is identical to RHEL on purpose, but always\n> 1 \n> to 6 months after the RHEL, since they take the RHEL source, re-build \n> it, and then re-test it.\n> \n> Linux is up to Linux-2.6.31.1 right now:\n> http://www.kernel.org/\n> \n> So any comparisons between operating system *distributions* should be \n> fair. Comparing a 2007 release to a 2009 release, for example, is not \n> fair. RHEL / CentOS are basically out of the running right now,\n> because \n> they are so old.\n\nSome people call these \"stability\" .\n-- \nDevrim GÜNDÜZ, RHCE\nCommand Prompt - http://www.CommandPrompt.com \ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Sun, 04 Oct 2009 20:55:59 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On 10/04/2009 01:55 PM, Devrim GÜNDÜZ wrote:\n> On Sun, 2009-10-04 at 10:05 -0400, Mark Mielke wrote:\n> \n>> So any comparisons between operating system *distributions* should be\n>> fair. Comparing a 2007 release to a 2009 release, for example, is not\n>> fair. RHEL / CentOS are basically out of the running right now,\n>> because\n>> they are so old.\n>> \n> Some people call these \"stability\" .\n> \n\nNote that if a deployment is running well, and has been running well for \nyears, there is probably no reasonable justification to change it. My \ncomments are for *new* deployments. If somebody were to come to you with \na *new* deployment request, what would you recommend? Would you really \nrecommend RHEL 5 *today*?\n\nPairing 2009 hardware with a 2007 operating system is not what I would \ncall \"stable\". I can show you tickets where RedHat has specifically \nstate they *will not* update the kernel to better support new hardware, \nfor fear of breaking support for older hardware. These were for the \nhardware we have in our lab right now, installed in August, 2009. All 7 \nof the machines I installed RHEL 5.3 on *failed* to detect the SATA \ncontroller, and the install process completed at 2 Mbyte/s. After the \nmachines were up, I discovered the issue is a known issue, and that \nRedHat would not patch the problem, but instead suggested a change to \ngrub.conf. Is this stable? I don't think some people would call this \n\"stability\". It is the opposite - it is using an operating system on \nhardware that it was never tested or designed for.\n\nFor performance, another annoying thing - the RHEL 5.3 kernel doesn't \nsupport \"relatime\", so file reads are still scattering inode writes \nacross the file system for otherwise read-only loads. I think RHEL 5.4 \ndoesn't have this either. They finally back-ported FUSE - but did you \nknow their 2.6.18 kernel has something like 3000 patches that they \nmaintain against it? Does this not sound insane? How do you provide \neffective support for a kernel that has 3000 back ported patches against it?\n\nFor example - if I were to be engineering a new solution to be deployed \nin February, 2010, I would seriously consider RHEL 6.0, knowing that \nRHEL 6.0 will be \"stable\" on that hardware for several years. I would \nnot choose RHEL 5.3, because it would be three years old from the day it \nis deployed, and six years old three years from now.\n\nRHEL 5 was great for its time - 2007 and maybe the first half of 2008. \nRHEL 5 is now obsolete. 
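An aside on the relatime point above: on kernels that do support it, it is just a mount option, typically set in /etc/fstab. A sketch, where the device and mount point are placeholders and noatime is the cruder fallback for kernels without relatime:

 /dev/sda3  /var/lib/pgsql  ext3  defaults,relatime  1 2
 # or, where relatime is not available:
 /dev/sda3  /var/lib/pgsql  ext3  defaults,noatime   1 2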
For anybody who has tried to compile packages \nfor RHEL 5 to use leading software with RHEL 5, one will quickly find \nthat base libraries are missing from RHEL 5 that have existed in other \ndistributions for years. The one at the tip of my tongue right now is \nSubversion with GNOME keyring and/or KDE wallet. Those packages don't \nexist, or they are *very* old in RHEL 5. This means compiling my own \nGNOME keyring and dependent libraries, and possibly compiling the \nbinaries with static linkage to make sure they work properly. Why? In \nthis case, it's because Subversion 1.6.5 is out, but RHEL 5 comes with \nSubversion 1.4. Next on my tongue is PHP. Suffice it to say I've had \nenough problems trying to use recent software on a 3 year old operating \nsystem.\n\nThe point in all of the above is not \"don't use RHEL\". My point is that \nsoftware evolves, and \"what is best?\" is a moving target. RHEL will be \nbest 1 year out of 3. For the other 2 years? Something else may well be \nbetter. Anybody who tries to produce benchmarks to \"prove\" their choice \nof OS is better, will only be potentially right *today*. As little as 4 \nmonths from now, they might be wrong. RHEL 5 performance today is \nprobably among the worst of the distributions. RHEL 6 performance in \nFebruary, 2010 will probably start out life as one of the best - for a time.\n\nOk, that is probably enough ranting. I hope my point is valuable to \nsomebody.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Sun, 04 Oct 2009 15:51:37 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Sun, 4 Oct 2009, Mark Mielke wrote:\n\n> On 10/04/2009 01:55 PM, Devrim GÜNDÜZ wrote:\n>> On Sun, 2009-10-04 at 10:05 -0400, Mark Mielke wrote:\n>> \n>>> So any comparisons between operating system *distributions* should be\n>>> fair. Comparing a 2007 release to a 2009 release, for example, is not\n>>> fair. RHEL / CentOS are basically out of the running right now,\n>>> because\n>>> they are so old.\n>>> \n>> Some people call these \"stability\" .\n>> \n>\n> Note that if a deployment is running well, and has been running well for \n> years, there is probably no reasonable justification to change it. My \n> comments are for *new* deployments. If somebody were to come to you with a \n> *new* deployment request, what would you recommend? 
Would you really 
> recommend RHEL 5 *today*?
>

I use the following systems:
RHEL4
RHEL5
Fedora 11, latest updates and kernels.

Basically, all systems are stable but all of them have "problems":
RHEL4, RHEL5: Old, but proven systems, missing new features and still 
bugs that have already been fixed.
Fedora 11: Bleeding edge system, but with new bugs and systems are 
getting even slower with newer kernels:-(

Examples are major bugs in latest kernels in the CFQ scheduler:
http://bugzilla.kernel.org/show_bug.cgi?id=13401#c16

Linux kernel slows down:
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/
http://www.phoronix.com/scan.php?page=article&item=fedora_test_2008&num=4

So software will always have either less features or bugs :-) So it is 
always a tradeoff between stability and bleeding edge.

Ciao,
Gerhard

From: [email protected]
Date: Sun, 4 Oct 2009 15:22:55 -0700 (PDT)
Subject: Re: Best suiting OS

On Sun, 4 Oct 2009, Devrim GÜNDÜZ wrote:

> On Sun, 2009-10-04 at 10:05 -0400, Mark Mielke wrote:
>>
>> RHEL and CentOS are particular bad *right now*. See here:
>> http://en.wikipedia.org/wiki/RHEL
>> http://en.wikipedia.org/wiki/CentOS
>>
>> For RHEL, look down to "Release History" and RHEL 5.3 based on
>> Linux-2.6.18, released March, 2007. On the CentOS page you'll see it
>> is
>> dated April, 2007. CentOS is identical to RHEL on purpose, but always
>> 1
>> to 6 months after the RHEL, since they take the RHEL source, re-build
>> it, and then re-test it.
>>
>> Linux is up to Linux-2.6.31.1 right now:
>> http://www.kernel.org/
>>
>> So any comparisons between operating system *distributions* should be
>> fair. Comparing a 2007 release to a 2009 release, for example, is not
>> fair. 
RHEL / CentOS are basically out of the running right now,
>> because
>> they are so old.
>
> Some people call these "stability" .

"stability" can mean many things to people

in this case it does _not_ mean 'will it crash' type of stability.

if you do not have a corporate standard distro and your sysadmins are 
equally comfortable (or uncomfortable and will get training) with all 
distros, then the next question to decide is what your support 
requirements are.

some companies insist on having a commercial support contract for anything 
that they run. If that is the case then you will run RHEL, SuSE 
enterprise, Ubuntu (probably long term support version), or _possibly_ 
debian with a support contract from a consultant shop.

the next layer (and I believe the more common case) is to not require a 
commercial support contract, but do require that any software that you run 
be supported by the project/distro with security patches.

In this case you need to look at the support timeframe and the release 
cycle.

With Fedora, the support timeframe is 12 months with a release every 6 
months. Since you cannot (and should not) upgrade all your production 
systems the day a new release comes out, this means that to constantly run 
a supported version you must upgrade every 6 months.

With Ubuntu, the support timeframe is 18 months with a release every 6 
months. This requires that you upgrade every 12 months to stay supported.

With the enterprise/long-term-support versions, the support timeframe is 5 
years with a new release every 2-3 years. for these you will need to 
upgrade every new release, but can wait a year or two after a release has 
been made before doing the upgrade.

for Debian stable the support timeframe is 1 year after the next release 
(which has historically had an unpredictable schedule, they are trying to 
shift to a 2 year cycle, but they haven't actually done it yet). This 
allows you to wait several months after a release before having to do an 
upgrade.

another question you have to answer in terms of 'support' is what are the 
limitations on replacing software that came with the distro with a new 
version yourself. With the commercial support options the answer is 
usually 'if you upgrade something you void your support contract'. This 
can be a significant problem if the distro cycle is long and you need a 
new feature. note that for the kernel a 'new feature' may be support for 
new hardware, so this can limit what hardware you can buy. If you insist 
on buying your hardware from HP/Dell/IBM/etc this may not be too big a 
problem as those hardware vendors also take a significant amount of time 
to 'certify' new things. For example, I have been looking for a new 
hardware vendor and just discovered that I cannot buy a system from 
HP/Dell/IBM that includes an SSD yet.


In my case I run Debian Stable on my servers, but identify 'important' 
packages that I will compile myself and keep up to date with the upstream 
project rather than running what Debian ships. The kernel is one such 
package (I don't necessarily run the latest kernel, but I watch closely 
for the vulnerabilities discovered and if any are relevant to the 
configuration I have, I upgrade). 
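
For what it's worth, "compile myself" here usually just means the standard 
source build into its own prefix, so it never collides with what the distro 
ships -- a sketch, the version and paths are only examples:

    # build PostgreSQL from source alongside the distro packages
    tar xzf postgresql-8.4.1.tar.gz && cd postgresql-8.4.1
    ./configure --prefix=/opt/postgresql-8.4.1
    make && make install
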
For dedicated database boxes Postgres is \nanother such package (if a system is primarily used for something else and \nthat package just needs a SQL database I may stick with the distro default)\n\nDavid Lang\n", "msg_date": "Sun, 4 Oct 2009 22:55:23 +0200 (CEST)", "msg_from": "Gerhard Wiesinger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Sun, Oct 4, 2009 at 8:05 AM, Mark Mielke <[email protected]> wrote:\n> On 10/01/2009 03:44 PM, Denis Lussier wrote:\n>>\n>> I'm a BSD license fan, but, I don't know much about *BSD otherwise (except\n>> that many advocates say it runs PG very nicely).\n>>\n>> On the Linux side, unless your a dweeb, go with a newer, popular & well\n>> supported release for Production.  IMHO, that's RHEL 5.x or CentOS 5.x.  Of\n>> course the latest SLES & UBuntu schtuff are also fine.\n>>\n>> In other words, unless you've got a really good reason for it, stay away\n>> from Fedora & OpenSuse for production usage.\n>\n> Lots of conflicting opinions and results in this thread. Also, a lot of hand\n> waving and speculation. :-)\n>\n> RHEL and CentOS are particular bad *right now*. See here:\n>    http://en.wikipedia.org/wiki/RHEL\n>    http://en.wikipedia.org/wiki/CentOS\n>\n> For RHEL, look down to \"Release History\" and RHEL 5.3 based on Linux-2.6.18,\n> released March, 2007. On the CentOS page you'll see it is dated April, 2007.\n> CentOS is identical to RHEL on purpose, but always 1 to 6 months after the\n> RHEL, since they take the RHEL source, re-build it, and then re-test it.\n>\n> Linux is up to Linux-2.6.31.1 right now:\n>    http://www.kernel.org/\n>\n> So any comparisons between operating system *distributions* should be fair.\n> Comparing a 2007 release to a 2009 release, for example, is not fair. RHEL /\n> CentOS are basically out of the running right now, because they are so old.\n> However, RHEL 6.0 should be out in January or February, 2010, at which point\n> it will be relevant again.\n\nIt's completely fair. I have a Centos 5.2 machine with 430 or so days\nof uptime. I put it online, tested it and had it ready 430 days ago\nand it's crashed / hung exactly zero times since. You're right. It's\ncompletely unfair to compare some brand new kernel with unknown\nbugginess and stability issues to my 5.2 machine. Oh wait, you're\nsaying Centos is out of the running because it's old? That's 110%\nbackwards from the way a DBA should be thinking. First make it\nstable, THEN look for ways to make it performance. A DB server with\nstability issues is completely useless in a production environment.\n\n> Personally, I use Fedora, and my servers have been quite stable. One of our\n> main web servers running Fedora:\n\nIt's not that there can't be stable releases of FC, it's that it's not\nthe focus of that project. So, if you get lucky, great! I can't\nimagine running a production DB on FC, with it's short supported life\nspan and focus on development and not stability.\n\n> [mark@bambi]~% uptime\n>  09:45:41 up 236 days, 10:07,  1 user,  load average: 0.02, 0.04, 0.08\n>\n> It was last rebooted as a scheduled reboot, not a crash. This isn't to say\n> Fedora will be stable for you - however, having used both RHEL and Fedora,\n> in many different configurations, I find RHEL being 2+ years behind in terms\n> of being patch-current means that it is NOT as stable on modern hardware.\n\nIt is NOT 2+ years behind on patches. Any security issues or bugs\nthat show up get patched. 
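
The backports are easy enough to verify for yourself -- on a RHEL/CentOS box 
the package changelogs carry the CVE references for the fixes pulled into 
the "old" versions. A quick sketch (the package names are only examples):

    # recent fixes backported into the vendor kernel, CVE numbers included
    rpm -q --changelog kernel | head -n 40

    # same idea for any package you care about
    rpm -q --changelog openssl | grep -i cve | head
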
Performance enhancing architectural changes\nget to wait for the next version (RHEL6).\n\n> Most recently, I installed RHEL on a 10 machine cluster of HP nodes, and\n> RHEL would not detect the SATA controller out of the box and reverted to\n> some base PIO mode yielding 2 Mbyte/s disk speed.\n\nYes, again, RHEL is focused on making stable, already production\ncapable hardware stay up, and stay supported for 5 to 7 years. It's a\ndifferent focus.\n\n> Fedora was able to achieve\n> 112 Mbyte/s on the same disks. Some twiddling of grub.conf allowed RHEL to\n> achieve the same speed, but the point is that there are two ways to\n> de-stabilize a kernel. One is to use the leading edge, the other is to use\n> the trailing edge.\n\nSorry, but your argument does not support this point. RHEL took\ntwiddling to work with the SATA ports.\n\n> Using an operating system designed for 2007 on hardware\n> designed in 2009 is a bad idea.\n\nDepends on whether or not it's using the latest and greatest or if\nRHEL has back patched support for newer hardware. Using RHEL on\nhardware that isn't officially supported is a bad idea. I agree about\nhalfway here, but not really. I have brand new machines running\nCentos 5.2 with no problem.\n\n>Using an operating system designed for 2009\n> on 2007 hardware might also be a bad idea.\n\nI think you have to go further back. Unlike Vista, Linux kernels tend\nto support older hardware for a very long time.\n\n> Using a modern operating system\n> on modern hardware that the operating system was designed for will give you\n> the best performance potential.\n\nTrue. But you have to test it hard and prove it's reliable first,\ncause it really doesn't matter how fast it crashes.\n\n> In this case, Fedora 11 with Linux 2.6.30.8\n> is almost guaranteed to out-perform RHEL 5.3 with Linux 2.6.18. Where Fedora\n> is within 1 to 6 months of the leading edge, Ubuntu is within 3 to 9 months\n> of the leading edge, so Ubuntu will perform more similar to Fedora than\n> RHEL.\n\nAnd on more exotic hardware (think 8 sockets of 6 core CPUs 128G RAM\nand $1400 RAID controllers) it's usually much less well tested yet,\nand more likely to bite you in the but. Nothing like waking up to a\nproduction issue where 46 of your 48 cores are stuck spinning at 100%\nwith some wonderful new kernel bug that's gonna take 6 months to get a\nfix to. I'll stick to 15% slower but never crashing. I'll test the\nnew OS for sure, but I won't trust it enough to put it into production\nuntil it's had a month or more of testing to prove it works. And with\nFC, that's a large chunk of its support lifespan.\n\n> I've given up on the OS war. People use what they are comfortable with.\n\nNo, I'm not a huge fan of RHEL. I'd prefer to run debian or ubuntu on\nmy servers, but both had some strange bug with multicore systems last\nyear, and I was forced to run Centos to get a stable machine. I'll\ntake a look at Solaris / Open Solaris when I get a chance. And\nFreeBSD now too. What I'm comfortable with will only matter if it's\nstable and reliable and supportable.\n\n> Comfort lasts until the operating systems screws a person over, at which\n> point they \"hate\" it, and switch to something else. It's about passion - not\n> about actual metrics, capability, reliability, or any of these other\n> supposed criteria.\n\nSorry, but you obviously don't know me, and I'm guessing a lot of\npeople on this list, well enough to make that judgement. Maybe for\nsome folks comfort is what matters. 
For professionals it's not that\nat all. It's about stability, reliability, and performance. Comfort\nis something we just hope we can get in the deal too.\n", "msg_date": "Sun, 4 Oct 2009 18:42:50 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "This is kind of OT, unless somebody really is concerned with \nunderstanding the + and - of distributions, and is willing to believe \nthe content of this thread as being accurate and objective... :-)\n\nOn 10/04/2009 08:42 PM, Scott Marlowe wrote:\n> On Sun, Oct 4, 2009 at 8:05 AM, Mark Mielke<[email protected]> wrote:\n> \n>> So any comparisons between operating system *distributions* should be fair.\n>> Comparing a 2007 release to a 2009 release, for example, is not fair. RHEL /\n>> CentOS are basically out of the running right now, because they are so old.\n>> However, RHEL 6.0 should be out in January or February, 2010, at which point\n>> it will be relevant again.\n>> \n> It's completely fair. I have a Centos 5.2 machine with 430 or so days\n> of uptime. I put it online, tested it and had it ready 430 days ago\n> and it's crashed / hung exactly zero times since. You're right. It's\n> completely unfair to compare some brand new kernel with unknown\n> bugginess and stability issues to my 5.2 machine. Oh wait, you're\n> saying Centos is out of the running because it's old? That's 110%\n> backwards from the way a DBA should be thinking. First make it\n> stable, THEN look for ways to make it performance. A DB server with\n> stability issues is completely useless in a production environment.\n> \n\nMaybe - if the only thing the server is running is PostgreSQL. Show of \nhands - how many users who ONLY install PostgreSQL, and use a bare \nminimum OS install, choosing to not run any other software? Now, how \nmany people ALSO run things like PHP, and require software more \nup-to-date than 3 years?\n\n>> Personally, I use Fedora, and my servers have been quite stable. One of our\n>> main web servers running Fedora:\n>> \n> It's not that there can't be stable releases of FC, it's that it's not\n> the focus of that project. So, if you get lucky, great! I can't\n> imagine running a production DB on FC, with it's short supported life\n> span and focus on development and not stability.\n> \n\nDepends on requirements. If the application is frozen in time and \ndoesn't change - sure. If the application keeps evolving and benefits \nfrom new base software - to require an upgrade every 12 months or more \nout of *application* requirements (not even counting OS support \nrequirements), may not be unusual. In any case - I'm not telling you to \nuse Fedora. I'm telling you that I use Fedora, and that RHEL 5 is too \nold from an application software perspective for anybody with a \nrequirement on more than a handful of base OS packages.\n\n>> [mark@bambi]~% uptime\n>> 09:45:41 up 236 days, 10:07, 1 user, load average: 0.02, 0.04, 0.08\n>>\n>> It was last rebooted as a scheduled reboot, not a crash. This isn't to say\n>> Fedora will be stable for you - however, having used both RHEL and Fedora,\n>> in many different configurations, I find RHEL being 2+ years behind in terms\n>> of being patch-current means that it is NOT as stable on modern hardware.\n>> \n> It is NOT 2+ years behind on patches. Any security issues or bugs\n> that show up get patched. Performance enhancing architectural changes\n> get to wait for the next version (RHEL6).\n> \n\nNot true. 
They only backport things considered \"important\". BIND might \nbe updated. Firefox might be updated. Most packages see no updates. Some \npackages see many updates. As I said - the kernel has something like \n3000 patches applied against it (although that's a small subset of all \nof the changes made to the upstream kernel). It is not true that \"any \nsecurity issues or bugs that show up get patched.\" *Some* security \nissues or bugs that show up get patched. If they patched everything \nback, they wouldn't have a stable release. Also, they cross this line - \nperformance enhancing architectural changes *are* made, but only if \nthere is sufficient demand from the customer base. XFS, EXT4, and FUSE \nmade it into RHEL 5.4. Even so, plenty of open source software is \ndifficult or impossible to compile for RHEL 5 without re-compiling base \npackages or bringing them in from another source. Try compiling \nSubversion 1.6.5 with GNOME keyring support on RHEL 5.3 - that was the \nlast one that busted us. In fact, this one is still open for us.\n\n\n>> Most recently, I installed RHEL on a 10 machine cluster of HP nodes, and\n>> RHEL would not detect the SATA controller out of the box and reverted to\n>> some base PIO mode yielding 2 Mbyte/s disk speed.\n>> \n> Yes, again, RHEL is focused on making stable, already production\n> capable hardware stay up, and stay supported for 5 to 7 years. It's a\n> different focus.\n> \n\nYes, exactly. Which is why a new deployment should align against the \nRHEL release. Deploying RHEL 5 today, when RHEL 5 is 3 years old, and \nRHEL 6 is coming out in 4 months, means that your RHEL 5 install is not \ngoing to have 5 to 7 years of life starting today. Half the support life \nhas almost elapsed at this point.\n\n>> Fedora was able to achieve\n>> 112 Mbyte/s on the same disks. Some twiddling of grub.conf allowed RHEL to\n>> achieve the same speed, but the point is that there are two ways to\n>> de-stabilize a kernel. One is to use the leading edge, the other is to use\n>> the trailing edge.\n>> \n> Sorry, but your argument does not support this point. RHEL took\n> twiddling to work with the SATA ports.\n> \n\nYou don't even know what the twiddling is - so not sure what you mean by \n\"support\". Do you think it is a well tested configuration from an RHEL \nperspective? If an OS can be considered to \"support\" hardware, if \ntwiddling in grub.conf will get it to work - then we may as well \nconclude that all Linux distributions \"support\" most or all hardware, \nand that they are all \"stable\". I think it \"works\", I don't think it is \nstable. Every time I upgrade the kernel, I have to watch and make sure \nthat it still works on next boot, and that grubby has propagated my \ngrub.conf hack forwards to the next install. Great. Makes me feel REAL \ncomfortable that my configuration is \"supported\". (Sarcastic in case it \nwasn't obvious).\n\n\n>> Using an operating system designed for 2007 on hardware\n>> designed in 2009 is a bad idea.\n>> \n> Depends on whether or not it's using the latest and greatest or if\n> RHEL has back patched support for newer hardware. Using RHEL on\n> hardware that isn't officially supported is a bad idea. I agree about\n> halfway here, but not really. I have brand new machines running\n> Centos 5.2 with no problem.\n> \n\nWe also had some HP machines with fancy video cards (dual-headed HDMI \nsomething or other) which I can't even get X to work on with RHEL or \nwith Fedora, and the machines are probably from ~2006. 
It depends on if 
you stick to standard commodity hardware or fancy obscure stuff. In my 
case, I just switched to run level 3 as these were only going to be used 
to test some install processes, and were not going to be used as 
graphical desktops anyway.

Back in 2006 when I decided on Fedora 8 for one of my servers, it was 
because RHEL/CentOS 4.x and Ubuntu would not detect my RAID controller 
properly no matter what I tried, and Fedora worked out of the box.

Different people have different experiences. Obviously, some research 
and/or lucky choices can improve the odds of a success - but it's sort 
of difficult for an operating system to predict what sort of hardware 
enhancement will come along two years from now, and prepare for it. Some 
enhancements like DDR3 come for free. Other enhancements, such as hyper 
threading, turn out to result in disaster. (For those that didn't follow 
the HT problems - many operating systems treated the HT as an 
independent core, and it could result in two CPU-heavy processes being 
assigned to the same single core, leaving the other core idle)

SATA/SAS is one that affected lots of platforms, and affected me as I 
described. The BIOS usually has some sort of "legacy IDE" mode where it 
lets the SATA pretend to be IDE - but this loses many of the benefits 
of SATA. For example, my Fedora installs are benefitting from NCQ, 
whereas the RHEL 5.3 installs on the same hardware do not know how to 
use NCQ.

You might be fine with CentOS 5.2 on your modern hardware - but I 
suspect that your CentOS 5.2 is not making the absolute best use of your 
modern hardware. For busy servers, I find "relatime" to be essential. 
With CentOS 5.2 - you don't even have that option available to you, and 
it has nothing to do with hardware capabilities. Your server is busy 
scattering writes across your platters for no benefit to you.

>> Using a modern operating system
>> on modern hardware that the operating system was designed for will give you
>> the best performance potential.
>> 
> True. But you have to test it hard and prove it's reliable first,
> cause it really doesn't matter how fast it crashes.
> 

I think it's prudent to "test it hard" no matter what configuration you 
have selected - whether latest available hardware / software, or whether 
hardware and software that is 3 years old. It's your (and my) 
reputation on the line. I certainly wouldn't say "eh, the hardware and 
software is two years old - of course it will work!", and I bet you 
wouldn't either.

>> In this case, Fedora 11 with Linux 2.6.30.8
>> is almost guaranteed to out-perform RHEL 5.3 with Linux 2.6.18. Where Fedora
>> is within 1 to 6 months of the leading edge, Ubuntu is within 3 to 9 months
>> of the leading edge, so Ubuntu will perform more similar to Fedora than
>> RHEL.
>> 
> And on more exotic hardware (think 8 sockets of 6 core CPUs 128G RAM
> and $1400 RAID controllers) it's usually much less well tested yet,
> and more likely to bite you in the but. Nothing like waking up to a
> production issue where 46 of your 48 cores are stuck spinning at 100%
> with some wonderful new kernel bug that's gonna take 6 months to get a
> fix to. I'll stick to 15% slower but never crashing. I'll test the
> new OS for sure, but I won't trust it enough to put it into production
> until it's had a month or more of testing to prove it works. 
And with
> FC, that's a large chunk of its support lifespan.
> 

Yeah - I wouldn't use a "desktop OS" on server hardware without making 
sure it worked. I doubt Fedora or Ubuntu are heavily tested on exotic 
hardware such as you describe. For anybody not willing to do some 
testing, you are definitely right.

>> I've given up on the OS war. People use what they are comfortable with.
>> 
> No, I'm not a huge fan of RHEL. I'd prefer to run debian or ubuntu on
> my servers, but both had some strange bug with multicore systems last
> year, and I was forced to run Centos to get a stable machine. I'll
> take a look at Solaris / Open Solaris when I get a chance. And
> FreeBSD now too. What I'm comfortable with will only matter if it's
> stable and reliable and supportable.
> 

Unfortunately, my ventures into Debian/Ubuntu had the same results. The 
last time I tried, it wouldn't grok my desired RAID configuration. I 
gave it a real try too. Oh well.

Solaris is good - but I think it doesn't have the ability to sustain 
market share. I'm not investing any of my efforts into it. Of course, I 
put FreeBSD into that category as well. Maybe I'm prejudiced. :-)

>> Comfort lasts until the operating systems screws a person over, at which
>> point they "hate" it, and switch to something else. It's about passion - not
>> about actual metrics, capability, reliability, or any of these other
>> supposed criteria.
>> 
> Sorry, but you obviously don't know me, and I'm guessing a lot of
> people on this list, well enough to make that judgement. Maybe for
> some folks comfort is what matters. For professionals it's not that
> at all. It's about stability, reliability, and performance. Comfort
> is something we just hope we can get in the deal too.
> 

It might be venturing on insult, although it isn't meant to be - but I 
think if any of you or us is honest about it - you and I have no idea 
whether our installations are truly reliable. They work until they 
don't. We cannot see the future. We draw up our own conclusions on what 
criteria makes a "reliable" system. We read up on statistics and the 
reviews done by others and come to a conclusion. We do our best - but 
ultimately, we're guessing. We're using our judgement - we're choosing 
what conclusion we are comfortable with. We're comfortable until our 
hardware / software betrays us, at which point we feel hurt, and we 
decide between forgiving the betrayal or boycotting the configuration. :-)

The best advice of all is your advice earlier above. "Test it hard". Try 
and make it fail. Do a good enough job here, and it is more data / less 
faith in our conclusions. :-)

The configuration can still fail, though. I would even expect it. I 
prefer to assume failure and focus on a contingency plan. I know my 
disks and power supplies are going to fail. I know my database will be 
corrupted. How do I minimize the impact when such a thing does occur, as 
it eventually will?

Cheers,
mark

-- 
Mark Mielke<[email protected]>

", "msg_date": "Sun, 04 Oct 2009 22:22:18 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Maybe OT, not sure Re: Best suiting OS" }, { "msg_contents": "Scott Marlowe wrote:
>> Personally, I use Fedora, and my servers have been quite stable. One of our
>> main web servers running Fedora:
> 
> It's not that there can't be stable releases of FC, it's that it's not
> the focus of that project. So, if you get lucky, great! 
I can't\n> imagine running a production DB on FC, with it's short supported life\n> span and focus on development and not stability.\n\nI use Fedora, and it was a mistake. I am looking for a better solution. Fedora has been very stable (uptime of 430 days on one server), BUT...\n\nRealistically, the lifetime of a release is as low as SIX MONTHS. We bought servers just as a FC release was coming out, and thought we'd be safe by going with the older, tested release. But six months after that, the next FC release came out, and the version we'd installed fell off the support list.\n\nIt takes almost no time with Fedora to run into big problems. Maybe there's a security release of ssh, you try to compile it, but it needs the latest gcc, but that's not available on your unsupported version of FC that you installed less than a year ago.\n\nOr maybe you need a new version of PHP to pass audit with your credit-card processor, but again, your FC release isn't supported so you have to uninstall the FC PHP, get the source, and compile PHP from scratch ... on and on it goes.\n\nFedora is a very nice project, but it's not suitable for production database servers.\n\nThis discussion has been very helpful indeed, and we appreciate everyone's contributions. I'm leaning towards a stable Debian release for our next upgrade, but there are several other well-reasoned suggestions here.\n\nCraig\n\n", "msg_date": "Sun, 04 Oct 2009 23:00:48 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Sun, 2009-10-04 at 15:51 -0400, Mark Mielke wrote:\n> How do you provide effective support for a kernel that has 3000 back\n> ported patches against it?\n\nThis is again nonsense. Red Hat employs top kernel hackers. They do\nmaintain vanilla kernel. It is not hard for Red Hat to maintain their\n\"own version\" ;)\n-- \nDevrim GÜNDÜZ, RHCE\nCommand Prompt - http://www.CommandPrompt.com \ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Mon, 05 Oct 2009 12:52:30 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Thu, 2009-10-01 at 15:16 +0530, S Arvind wrote:\n> What is the best Linux flavor for server which runs postgres\n> alone. The postgres must handle greater number of database around 200\n> +. Performance on speed is the vital factor.\n> Is it FreeBSD, CentOS, Fedora, Redhat xxx?? \n\nGo for Debian:\n* It is a free community, very active.\n* It is guaranteed to be upgradable.\n* Very easy to administrate via apt-get.\n\nChoose Debian SID or testing, which will provide the latest fixes.\n\nKind regards,\nJMP\n\n", "msg_date": "Mon, 05 Oct 2009 12:07:09 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Sun, 2009-10-04 at 15:51 -0400, Mark Mielke wrote:\n> If somebody were to come to you with a *new* deployment request, what\n> would you recommend? Would you really recommend RHEL 5 *today*?\n\nWell, \"I\" would, and I do recommend people. RHEL5 is well-tested, and\nstable. Many hardware vendors support RHEL 5. The list goes on.\n\nIf I would want to live with bleeding edge, I'd use Fedora in my\nservers. Otherwise, linux 2.6.31 is not *that much* better than Red\nHat's 2.6.18. 
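
A quick way to see how far that "2.6.18" really is from the kernel.org 
release of the same name -- a sketch; the exact strings depend on which 
point release you are running:

    uname -r                              # e.g. 2.6.18-128.el5, not a stock 2.6.18
    rpm -q --changelog kernel | wc -l     # thousands of lines of backported changes
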
Actually the point is: Red Hat's 2.6.18 is not actually\n2.6.18.\n\nI also want to state that Red Hat is adding new features to each point\nrelease, as you know. It is not that old.\n\nWe have a customer that run ~ 1 hundred million transaction/hour , and\nthey run RHEL. We also have another one that runs about that one, and\nguess which OS they are running? \n\nIf I weren't using RHEL, I'd use Ubuntu. Nothing else.\n\n...and disclaimer: I don't work for Red Hat.\n-- \nDevrim GÜNDÜZ, RHCE\nCommand Prompt - http://www.CommandPrompt.com \ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Mon, 05 Oct 2009 13:19:04 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Mon, 2009-10-05 at 12:07 +0200, Jean-Michel Pouré wrote:\n> Go for Debian:\n> * It is a free community, very active.\n\nWell, we need to state that this is not a unique feature.\n\n> * It is guaranteed to be upgradable.\n\nDepends. I had lots of issues with upgrade process in the past -- but\nyeah, it is much better than most distros.\n\n> * Very easy to administrate via apt-get.\n\nRight. apt is better than yum (in terms of speed).\n\n> Choose Debian SID or testing, which will provide the latest fixes.\n\nOne thing that I don't like about Debian is their update policy.\n\nIf upstream is releasing a security update, I'd like to be able to find\nnew packages as upstream announces updated sets. Yes, I'm talking about\nPostgreSQL here.\n-- \nDevrim GÜNDÜZ, RHCE\nCommand Prompt - http://www.CommandPrompt.com \ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Mon, 05 Oct 2009 13:46:16 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Sun, 2009-10-04 at 10:05 -0400, Mark Mielke wrote:\n> On 10/01/2009 03:44 PM, Denis Lussier wrote:\n> > I'm a BSD license fan, but, I don't know much about *BSD otherwise \n> > (except that many advocates say it runs PG very nicely).\n> > On the Linux side, unless your a dweeb, go with a newer, popular & \n> > well supported release for Production. IMHO, that's RHEL 5.x or \n> > CentOS 5.x. Of course the latest SLES & UBuntu schtuff are also fine.\n> > In other words, unless you've got a really good reason for it, stay \n> > away from Fedora & OpenSuse for production usage.\n> Lots of conflicting opinions and results in this thread. Also, a lot of \n> hand waving and speculation. :-)\n> RHEL and CentOS are particular bad *right now*. See here:\n> http://en.wikipedia.org/wiki/RHEL\n> http://en.wikipedia.org/wiki/CentOS\n\nTalk about \"hand waving and speculation\" - you are citing Wikipedia as a\nsource?!\n\n> For RHEL, look down to \"Release History\" and RHEL 5.3 based on \n> Linux-2.6.18, released March, 2007. On the CentOS page you'll see it is \n> dated April, 2007. 
CentOS is identical to RHEL on purpose, but always 1 \n> to 6 months after the RHEL, since they take the RHEL source, re-build \n> it, and then re-test it.\n\nMaybe that is the kernel version - but it isn't a vanilla kernel.\nComparing kernel versions between distros is a dodgy business as they\nall have their own patch sets and backports of patches.\n\n> Linux is up to Linux-2.6.31.1 right now:\n> http://www.kernel.org/\n\nAnd I very much doubt kernel version is a significant factor in\nperformance unless you hit one of the lemon versions.\n\n> Personally, I use Fedora, and my servers have been quite stable. One of \n> our main web servers running Fedora:\n> [mark@bambi]~% uptime\n> 09:45:41 up 236 days, 10:07, 1 user, load average: 0.02, 0.04, 0.08\n\ngourd-amber:~ # uptime\n 8:28am up 867 days 12:30, 1 user, load average: 0.24, 0.18, 0.10\n\n\n", "msg_date": "Mon, 05 Oct 2009 08:28:30 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "> Maybe - if the only thing the server is running is PostgreSQL. Show of \n> hands - how many users who ONLY install PostgreSQL, and use a bare \n> minimum OS install, choosing to not run any other software? Now, how \n> many people ALSO run things like PHP, and require software more \n> up-to-date than 3 years?\n\nMe.\n\nNot everyone is running LA?P stack applications.\n\n\n", "msg_date": "Mon, 05 Oct 2009 08:35:25 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe OT, not sure Re: Best suiting OS" }, { "msg_contents": "On Mon, Oct 5, 2009 at 2:00 AM, Craig James <[email protected]> wrote:\n> Fedora is a very nice project, but it's not suitable for production database\n> servers.\n\nThe trick is to write such a kick-ass application that before the\nFedora support window ends, the load has increased enough that it's\ntime to upgrade the hardware anyway.\n\nAlso, I'd just like to mention that vi is a much better editor than emacs.\n\n...Robert\n", "msg_date": "Mon, 5 Oct 2009 09:25:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Robert Haas wrote (in part):\n\n> Also, I'd just like to mention that vi is a much better editor than\n> emacs.\n> \nThat is not my impression. I have used vi from when it first came out (I\nused ed before that) until about 1998 when I first installed Linux on one of\nmy machines and started using emacs. I find that for some tasks involving\nglobal editing, that vi is a lot easier to use. But for most of the things I\ndo on a regular basis, if find emacs better. So, for me, it is not which is\nthe better editor, but which is the better editor for the task at hand.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 09:30:01 up 4 days, 18:29, 3 users, load average: 4.09, 4.07, 4.09\n", "msg_date": "Mon, 05 Oct 2009 09:37:10 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Mon, 2009-10-05 at 09:37 -0400, Jean-David Beyer wrote:\n> Robert Haas wrote (in part):\n> > Also, I'd just like to mention that vi is a much better editor than\n> > emacs.\n> That is not my impression. 
I have used vi from when it first came out (I\n> used ed before that) until about 1998 when I first installed Linux on one of\n> my machines and started using emacs. I find that for some tasks involving\n> global editing, that vi is a lot easier to use. But for most of the things I\n> do on a regular basis, if find emacs better. So, for me, it is not which is\n> the better editor, but which is the better editor for the task at hand.\n\nBoth vi and emacs are obsolete. Bow before the glory of gedit!\n\n", "msg_date": "Mon, 05 Oct 2009 09:38:50 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Hi Jean-David,\n\nOn Mon, 2009-10-05 at 15:37 +0200, Jean-David Beyer wrote:\n> Robert Haas wrote (in part):\n> \n> > Also, I'd just like to mention that vi is a much better editor than\n> > emacs.\n> > \n> That is not my impression. I have used vi from when it first came out (I\n> used ed before that) until about 1998 when I first installed Linux on one of\n> my machines and started using emacs. I find that for some tasks involving\n> global editing, that vi is a lot easier to use. But for most of the things I\n> do on a regular basis, if find emacs better. So, for me, it is not which is\n> the better editor, but which is the better editor for the task at hand.\n\nYou are probably absolutely right, but Robert only wanted to point out\nthat this conversation gets in the flame-war direction, in his subtle\nway of doing this...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Mon, 5 Oct 2009 15:46:39 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [OT] Best suiting OS" }, { "msg_contents": "Devrim GÜNDÜZ wrote:\n> On Mon, 2009-10-05 at 12:07 +0200, Jean-Michel Pouré wrote:\n>> Go for Debian:\n>> * It is a free community, very active.\n> \n> Well, we need to state that this is not a unique feature.\n> \n>> * It is guaranteed to be upgradable.\n> \n> Depends. I had lots of issues with upgrade process in the past -- but\n> yeah, it is much better than most distros.\n> \n>> * Very easy to administrate via apt-get.\n> \n> Right. apt is better than yum (in terms of speed).\n> \n>> Choose Debian SID or testing, which will provide the latest fixes.\n> \n> One thing that I don't like about Debian is their update policy.\n> \n> If upstream is releasing a security update, I'd like to be able to find\n> new packages as upstream announces updated sets. Yes, I'm talking about\n> PostgreSQL here.\n\nThis is exactly what Debian does for a while now(at least for PostgreSQL)..\nIe.: Debian Etch aka has 8.1.18 and Debian Lenny has 8.3.8...\n\n\nStefan\n\n", "msg_date": "Mon, 05 Oct 2009 16:55:15 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Stefan Kaltenbrunner wrote:\n> Devrim GÜNDÜZ wrote:\n>> On Mon, 2009-10-05 at 12:07 +0200, Jean-Michel Pouré wrote:\n>>> Go for Debian:\n>>> * It is a free community, very active.\n>>\n>> Well, we need to state that this is not a unique feature.\n>>\n>>> * It is guaranteed to be upgradable.\n>>\n>> Depends. I had lots of issues with upgrade process in the past -- but\n>> yeah, it is much better than most distros.\n>>\n>>> * Very easy to administrate via apt-get.\n>>\n>> Right. 
apt is better than yum (in terms of speed).\n>>\n>>> Choose Debian SID or testing, which will provide the latest fixes.\n>>\n>> One thing that I don't like about Debian is their update policy.\n>>\n>> If upstream is releasing a security update, I'd like to be able to find\n>> new packages as upstream announces updated sets. Yes, I'm talking about\n>> PostgreSQL here.\n> \n> This is exactly what Debian does for a while now(at least for PostgreSQL)..\n> Ie.: Debian Etch aka has 8.1.18 and Debian Lenny has 8.3.8...\n\n\"Debian Etch aka oldstable\" and Debian Lenny (the current release)...\n\n\nStefan\n\n", "msg_date": "Mon, 05 Oct 2009 16:58:48 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "\n\n\nOn 10/3/09 7:35 PM, \"Karl Denninger\" <[email protected]> wrote:\n\n> Denis Lussier wrote:\n>> I'm a BSD license fan, but, I don't know much about *BSD otherwise (except\n>> that many advocates say it runs PG very nicely).\n>> \n>> \n>> \n>> On the Linux side, unless your a dweeb, go with a newer, popular & well\n>> supported release for Production. IMHO, that's RHEL 5.x or CentOS 5.x. Of\n>> course the latest SLES & UBuntu schtuff are also fine.\n>> \n>> \n>> \n>> \n>> In other words, unless you've got a really good reason for it, stay away from\n>> Fedora & OpenSuse for production usage.\n>> \n>> \n>> \n>> On Thu, Oct 1, 2009 at 3:10 PM, <[email protected]> wrote:\n>> \n>>> On Thu, 1 Oct 2009, S Arvind wrote:\n>>> \n>>> \n>>>> Hi everyone,\n>>>> What is the best Linux flavor for server which runs postgres alone.\n>>>> The postgres must handle greater number of database around 200+.\n>>>> Performance\n>>>> on speed is the vital factor.\n>>>> Is it FreeBSD, CentOS, Fedora, Redhat xxx??\n>>>> \n>>> \n>>> as noted by others *BSD is not linux\n>>> \n>>> among the linux options, the best option is the one that you as a company\n>>> are most comfortable with (and have the support/upgrade processes in place\n>>> for)\n>>> \n>>> in general, the newer the kernel the better things will work, but it's far\n>>> better to have an 'old' system that your sysadmins understand well and can\n>>> support easily than a 'new' system that they don't know well and therefor\n>>> have trouble supporting.\n>>> \n>>> David Lang\n>>> \n>> \n>> \n>> \n> I am a particular fan of FreeBSD, and in some benchmarking I did between it\n> and CentOS FreeBSD 7.x literally wiped the floor with the CentOS release I\n> tried on IDENTICAL hardware.\n> I also like the 3ware raid coprocessors - they work well, are fast, and I've\n> had zero trouble with them.\n> \n> -- Karl\n> \n\nWith CentOS 5.x, I have to do quite a bit of tuning to get it to perform\nwell. 
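
The kind of thing I mean, as a sketch only -- the device names and values 
here are examples rather than what I actually ran, and the right numbers 
depend on the controller and the workload:

    echo deadline > /sys/block/sda/queue/scheduler   # or boot with elevator=deadline
    blockdev --setra 8192 /dev/sda                   # larger block-device read-ahead
    # XFS mount option, e.g. in /etc/fstab:  ...,allocsize=64m   (larger preallocation)
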
I often get almost 2x the performance after tuning.\n\nFor I/O --\nDeadline scheduler + reasonably large block device read-ahead + XFS\nconfigured with large 'allocsize' settings (8MB to 80MB) make a huge\ndifference.\n\nFurthermore, the 3ware 35xx and 36xx (I think) I tried performed\nparticularly badly out of the box without tuning on CentOS.\n\nSo, Identical hardware or not, both have to be tuned well to really compare\nanyway.\n\nHowever, I have certainly seen some inefficiencies with Linux and large use\nof shared memory -- and I wouldn't be surprised if these problems don't\nexist on FreeBSD or OpenSolaris.\n\n\n", "msg_date": "Mon, 5 Oct 2009 09:30:56 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Scott Carey wrote:\n> On 10/3/09 7:35 PM, \"Karl Denninger\" <[email protected]> wrote:\n> \n> \n>> I am a particular fan of FreeBSD, and in some benchmarking I did between it\n>> and CentOS FreeBSD 7.x literally wiped the floor with the CentOS release I\n>> tried on IDENTICAL hardware.\n>> I also like the 3ware raid coprocessors - they work well, are fast, and I've\n>> had zero trouble with them.\n>>\n>> -- Karl\n>> \n>\n> With CentOS 5.x, I have to do quite a bit of tuning to get it to perform\n> well. I often get almost 2x the performance after tuning.\n>\n> For I/O --\n> Deadline scheduler + reasonably large block device read-ahead + XFS\n> configured with large 'allocsize' settings (8MB to 80MB) make a huge\n> difference.\n>\n> Furthermore, the 3ware 35xx and 36xx (I think) I tried performed\n> particularly badly out of the box without tuning on CentOS.\n>\n> So, Identical hardware or not, both have to be tuned well to really compare\n> anyway.\n>\n> However, I have certainly seen some inefficiencies with Linux and large use\n> of shared memory -- and I wouldn't be surprised if these problems don't\n> exist on FreeBSD or OpenSolaris.\n> \nI don't run the 3x series 3ware boards. If I recall correctly they're\nnot true coprocessor boards and rely on the host CPU. 
Those are always\ngoing to be a lose compared to a true coprocessor with dedicated cache\nmemory on the card.\n\nThe 9xxx series boards are, and are extremely fast (make sure you\ninstall the battery backup or run on a UPS, set the appropriate flags,\nand take your chances - writeback caching makes a HUGE difference.)\n\nOther than pinning shared memory on FreeBSD (and increasing a couple of\nboot-time tunables to permit large enough shared segments and semaphore\nlists) little is required to get excellent performance.\n\nThe LSI cards that DELL, Intel and a few others have used (these appear\nto be deprecated now as it looks like LSI bought 3ware) also work well\nbut their user interface is somewhat of a pain in the butt compared to\n3Ware's.\n\n-- Karl", "msg_date": "Mon, 05 Oct 2009 12:27:07 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "> However, I have certainly seen some inefficiencies with Linux and large use\n> of shared memory -- and I wouldn't be surprised if these problems don't\n> exist on FreeBSD or OpenSolaris.\n\nThis came on the freebsd-performance-list a few days ago.\n\nhttp://docs.freebsd.org/cgi/getmsg.cgi?fetch=13001+0+current/freebsd-performance\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Mon, 5 Oct 2009 19:29:23 +0200", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "\n\n\nOn 10/5/09 10:27 AM, \"Karl Denninger\" <[email protected]> wrote:\n\n> Scott Carey wrote:\n>> \n>> \n>> On 10/3/09 7:35 PM, \"Karl Denninger\" <[email protected]>\n>> <mailto:[email protected]> wrote:\n>> \n>> \n>> \n>>> \n>>> I am a particular fan of FreeBSD, and in some benchmarking I did between it\n>>> and CentOS FreeBSD 7.x literally wiped the floor with the CentOS release I\n>>> tried on IDENTICAL hardware.\n>>> I also like the 3ware raid coprocessors - they work well, are fast, and I've\n>>> had zero trouble with them.\n>>> \n>>> -- Karl\n>>> \n>>> \n>> \n>> \n>> With CentOS 5.x, I have to do quite a bit of tuning to get it to perform\n>> well. I often get almost 2x the performance after tuning.\n>> \n>> For I/O --\n>> Deadline scheduler + reasonably large block device read-ahead + XFS\n>> configured with large 'allocsize' settings (8MB to 80MB) make a huge\n>> difference.\n>> \n>> Furthermore, the 3ware 35xx and 36xx (I think) I tried performed\n>> particularly badly out of the box without tuning on CentOS.\n>> \n>> So, Identical hardware or not, both have to be tuned well to really compare\n>> anyway.\n>> \n>> However, I have certainly seen some inefficiencies with Linux and large use\n>> of shared memory -- and I wouldn't be surprised if these problems don't\n>> exist on FreeBSD or OpenSolaris.\n>> \n> I don't run the 3x series 3ware boards. If I recall correctly they're not\n> true coprocessor boards and rely on the host CPU. Those are always going to\n> be a lose compared to a true coprocessor with dedicated cache memory on the\n> card.\n\nI screwed up, it was the 95xx and 96xx that stink for me. 
(Adaptec 2x as
fast, PERC 6 25% faster) with 1TB SATA drives.

I thought the 96xx was a good chunk faster due to the faster interface.

> 
> The 9xxx series boards are, and are extremely fast (make sure you install the
> battery backup or run on a UPS, set the appropriate flags, and take your
> chances - writeback caching makes a HUGE difference.)

Not at all in my experience, 12 drives in raid 10, and 300MB/sec sequential
transfer rate = crap.  Heavily tweaked, 450MB/sec. (Adaptec 5805 =
600MB/sec).

> 
> Other than pinning shared memory on FreeBSD (and increasing a couple of
> boot-time tunables to permit large enough shared segments and semaphore lists)
> little is required to get excellent performance.
> 
> The LSI cards that DELL, Intel and a few others have used (these appear to be
> deprecated now as it looks like LSI bought 3ware) also work well but their
> user interface is somewhat of a pain in the butt compared to 3Ware's.
> 
> -- Karl 
> 

", "msg_date": "Mon, 5 Oct 2009 10:38:11 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Claus Guttesen wrote:
>> However, I have certainly seen some inefficiencies with Linux and large use
>> of shared memory -- and I wouldn't be surprised if these problems don't
>> exist on FreeBSD or OpenSolaris.
>> 
>
> This came on the freebsd-performance-list a few days ago.
>
> http://docs.freebsd.org/cgi/getmsg.cgi?fetch=13001+0+current/freebsd-performance
> 
Geezus - that's a BIG improvement.

I have not yet benchmarked FreeBSD 8.x - my production systems are all
on FreeBSD 7.x at present. The improvement going there from 6.x was
MASSIVE. 
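
As an aside, the "couple of boot-time tunables" I keep mentioning for Postgres 
on FreeBSD amount to only a few lines -- a sketch with example values, not my 
production numbers:

    # /boot/loader.conf -- semaphore limits have to be set at boot time
    kern.ipc.semmni=256
    kern.ipc.semmns=512

    # /etc/sysctl.conf -- shared memory sizing, plus wiring ("pinning") it in RAM
    kern.ipc.shmmax=2147483648
    kern.ipc.shmall=524288
    kern.ipc.shm_use_phys=1
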
8.x is on my plate to start playing with in the next \n> couple of months.\nDid you ever try gjournal or zfs as tablespace?\n\nAxel\n---\[email protected] PGP-Key:29E99DD6 +49 151 2300 9283 computing @ \nchaos claudius\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 5 Oct 2009 19:56:35 +0200", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Axel Rau wrote:\n> Am 05.10.2009 um 19:42 schrieb Karl Denninger:\n>\n>> I have not yet benchmarked FreeBSD 8.x - my production systems are\n>> all on FreeBSD 7.x at present. The improvement going there from 6.x\n>> was MASSIVE. 8.x is on my plate to start playing with in the next\n>> couple of months.\n> Did you ever try gjournal or zfs as tablespace?\n>\ngjournal, no. ZFS has potential stability issues - I am VERY interested\nin it when those are resolved. It looks good on a test platform but I'm\nunwilling to run it in production; there are both reports of crashes and\nI have been able to crash it under some (admittedly rather extreme)\nsynthetic loads.\n\n-- Karl", "msg_date": "Mon, 05 Oct 2009 13:06:18 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Scott Carey wrote:\n>\n> On 10/5/09 10:27 AM, \"Karl Denninger\" <[email protected]> wrote:\n>\n> \n>> I don't run the 3x series 3ware boards. If I recall correctly they're not\n>> true coprocessor boards and rely on the host CPU. Those are always going to\n>> be a lose compared to a true coprocessor with dedicated cache memory on the\n>> card.\n>> \n> I screwed up, it was the 95xx and 96xx that stink for me. (Adaptec 2x as\n> fast, PERC 6 25% faster) with 1TB SATA drives.\n>\n> Thought 96xx was a good chunk faster due to the faster interface.\n> \nI'm running the 9650s in most of my \"busier\" machines. Haven't tried a\nPERC card yet - its on my list. Most of my stuff is configured as RAID\n1 although I have a couple of RAID 10 arrays in service; depending on\nthe data set and how it splits up I prefer to have more control of how\nI/O is partitioned rather than let the controller pick through striping.\n\nI don't think I have any of the 95xx stuff out in the wild at present;\nit didn't do particularly well in my testing in terms of performance.\n\n-- Karl", "msg_date": "Mon, 05 Oct 2009 13:15:44 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Sun, 4 Oct 2009, Mark Mielke wrote:\n\n> I can show you tickets where RedHat has specifically state they *will \n> not* update the kernel to better support new hardware, for fear of \n> breaking support for older hardware.\n\nThere are two reasonable paths you'll find in the Open Source world, which \nmirror the larger industry at large:\n\n1) Branch a stable release rarely. Sit on it for a while without changing \nanything before release. Backport only critical stuff (important bug \nfixes) into the stable version once it's out there. Support that version \nfor years. Examples of this model include RedHat and PostgreSQL, albeit \nwith the latter having a much more regular release schedule than most \n\"long-term release\" pieces of software.\n\n2) Branch a stable release often. Push it out the door with fairly recent \ncomponents. Backport little, because a new release is coming out the door \nsoon enough anyway. 
It's impossible for this model to backport as much as \n(1), because they have so many more releases to handle, and there's no \npressure to do so because \"upgrade to the latest release to fix\" is \nusually an option. Examples of this model include the Linux kernel \nproper and Ubuntu.\n\nMy personal belief is that (2) never leads to stable software, and that \nfor the complexity level of the projects I follow you're lucky you can get \na stable version of one piece of software if you focus on it about every \nyear. Once every two years would be better, because as you correctly note \nit takes about that long for many hardware drivers to go from cutting-edge \nto old, and that would give less disruption to admins.\n\nThat is unfortunately both more aggressive than the \"long-term release\" \nstable versions provided by RedHat and less than the hyper-aggressive \nschedules you'll find in Ubuntu and Fedora. It does happen to be very \nclose to the PostgreSQL stable release frequency though:\n\n8.0 2005-01-19\n8.1 2005-11-08\n8.2 2006-12-05\n8.3 2008-02-04\n8.4 2009-07-01\n\nRedHat does a commendable job of backporting way more stuff than anybody \nelse I'm aware of. The SATA issues you mention are actually a worst-case \nfor their development model. The big SATA switch-over with \"Parallel PATA \nmerge\" happened in 2.6.19. My recollection is that this was such a mess \nat first it basically forced RedHat to release RHEL5 with 2.6.18, as there \nwasn't expected to be a stable ATA stack from the resulting chaos for a \nfew releases they could use; anecdotally, I didn't find Linux \nre-stabilized until between 2.6.20 and 2.6.22, depending on your hardware.\n\nI contrast this with Ubuntu, which I can't accept as a server because \nnothing I run into *ever* gets backported. I know they backport \nsomething, because I see the changelogs, but never what I run into. I \nencounter a bug or two in every new Ubuntu release that makes life \ndifficult for me, and in every case so far the \"resolution\" was \"fixed in \n<next letter>\". In two of those cases I recall I saw the same bug fix \n(from an upstream package) was backported into RHEL.\n\n> All 7 of the machines I installed RHEL 5.3 on *failed* to detect the \n> SATA controller, and the install process completed at 2 Mbyte/s. After \n> the machines were up, I discovered the issue is a known issue, and that \n> RedHat would not patch the problem, but instead suggested a change to \n> grub.conf. Is this stable?\n\nWith all due respect, this was operator error on your part. Buying the \nhardware and then guessing that everything will work out fine with the OS \ninstall isn't ever a path to stable either. I (and every other person who \ndeals regularly with RHEL on increasingly new hardware) could have told \nyou this was going to be a disaster, that you don't try to provision a \nserver using native SATA with unknown compatibility on that OS. I don't \nhave this problem (for the database servers at work at least--suffered \nthrough it plenty with random white boxes). I buy from a vendor who \nfigures this out and only sells me stuff that works on RHEL. You have a \nlarger process problem you can't blame on the software.\n\n> They finally back-ported FUSE - but did you know their 2.6.18 kernel has \n> something like 3000 patches that they maintain against it? Does this not \n> sound insane? 
How do you provide effective support for a kernel that has \n> 3000 back ported patches against it?\n\nHow exactly is this any different from \"effective support\" for the kernel \nat large, which integrates way more patches than that between releases? \nI see RedHat as having a much smaller set of patches to manage, which is \none reason their releases are more stable than \"pick a random kernel \nrelease\".\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 5 Oct 2009 14:20:07 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "\nOn 10/5/09 11:15 AM, \"Karl Denninger\" <[email protected]> wrote:\n\n> Scott Carey wrote:\n>> \n>> On 10/5/09 10:27 AM, \"Karl Denninger\" <[email protected]> wrote:\n>> \n>> \n>>> I don't run the 3x series 3ware boards. If I recall correctly they're not\n>>> true coprocessor boards and rely on the host CPU. Those are always going to\n>>> be a lose compared to a true coprocessor with dedicated cache memory on the\n>>> card.\n>>> \n>> I screwed up, it was the 95xx and 96xx that stink for me. (Adaptec 2x as\n>> fast, PERC 6 25% faster) with 1TB SATA drives.\n>> \n>> Thought 96xx was a good chunk faster due to the faster interface.\n>> \n> I'm running the 9650s in most of my \"busier\" machines. Haven't tried a\n> PERC card yet - its on my list. Most of my stuff is configured as RAID\n> 1 although I have a couple of RAID 10 arrays in service; depending on\n> the data set and how it splits up I prefer to have more control of how\n> I/O is partitioned rather than let the controller pick through striping.\n> \n> I don't think I have any of the 95xx stuff out in the wild at present;\n> it didn't do particularly well in my testing in terms of performance.\n> \n> -- Karl\n> \nLet me make sure I clarify here --\n\nThe 3ware 9[56]xx issues I have seen were with throughput on larger RAID\narray sizes -- 8+ disks total. On smaller arrays, I have not tested.\n\n\n", "msg_date": "Mon, 5 Oct 2009 11:55:24 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Scott Carey wrote:\n> On 10/5/09 11:15 AM, \"Karl Denninger\" <[email protected]> wrote:\n>\n> \n>> I'm running the 9650s in most of my \"busier\" machines. Haven't tried a\n>> PERC card yet - its on my list. Most of my stuff is configured as RAID\n>> 1 although I have a couple of RAID 10 arrays in service; depending on\n>> the data set and how it splits up I prefer to have more control of how\n>> I/O is partitioned rather than let the controller pick through striping.\n>>\n>> I don't think I have any of the 95xx stuff out in the wild at present;\n>> it didn't do particularly well in my testing in terms of performance.\n>>\n>> -- Karl\n>> \n> Let me make sure I clarify here --\n>\n> The 3ware 9[56]xx issues I have seen were with throughput on larger RAID\n> array sizes -- 8+ disks total. On smaller arrays, I have not tested.\n>\n> \nInteresting... I'm curious if that's why I haven't run into it - I get\ndamn close to N x rotational on sequential I/O out of these boards; you\ncan't really do better than the physics allow :)\n\nI'll have to play with some larger (> 8 unit) Raid 1 and Raid 10 arrays\nand compare to see if there's a \"knee\" point and whether its a function\nof the aggregation through the chipset or whether it's a card issue. 
I\nsuspect it's related to the aggregation as otherwise I'd have seen it on\nsome of my larger configurations, but I tend to run multiple adapters\nfor anything more than 8 spindles, which precludes the situation you've\nseen.\n\nOf course if you NEED 12 spindles in one logical device for capacity\nreasons........\n\n-- Karl", "msg_date": "Mon, 05 Oct 2009 14:39:54 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Claus Guttesen <[email protected]> wrote:\n \n>\nhttp://docs.freebsd.org/cgi/getmsg.cgi?fetch=13001+0+current/freebsd-performance\n \nNot being particularly passionate about any OS, I've been intrigued by\nthe FreeBSD benchmarks. However, management is reluctant to use boxes\nwhich don't have heavily-advertised decals on the front. At the\nmoment they're going with IBM X-series boxes, and FreeBSD isn't\nsupported, so we'd be on our own. Has anyone had any experience with\nthis combination? (In particular, our biggest machines are x3850 M2\nboxes.)\n \nOh, and of course I dispute the supremacy of vim as an editor -- why\nuse that when you've got \"ed\"? ;-)\n \n-Kevin\n", "msg_date": "Mon, 05 Oct 2009 15:26:01 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "\nAm 05.10.2009 um 20:06 schrieb Karl Denninger:\n\n> gjournal, no. ZFS has potential stability issues - I am VERY \n> interested\n> in it when those are resolved. It looks good on a test platform but \n> I'm\n> unwilling to run it in production; there are both reports of crashes \n> and\n> I have been able to crash it under some (admittedly rather extreme)\n> synthetic loads.\nHow do you prevent from long running fsck with TB size ufs partitions?\nI had some hope for zfs13 and fbsd 8.0.\n\nAxel\n---\[email protected] PGP-Key:29E99DD6 +49 151 2300 9283 computing @ \nchaos claudius\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 5 Oct 2009 22:27:46 +0200", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "> Claus Guttesen <[email protected]> wrote:\n>\n> http://docs.freebsd.org/cgi/getmsg.cgi?fetch=13001+0+current/freebsd-performance\n>\n> Not being particularly passionate about any OS, I've been intrigued by\n> the FreeBSD benchmarks.  However, management is reluctant to use boxes\n> which don't have heavily-advertised decals on the front.  At the\n> moment they're going with IBM X-series boxes, and FreeBSD isn't\n> supported, so we'd be on our own.  Has anyone had any experience with\n> this combination?  (In particular, our biggest machines are x3850 M2\n> boxes.)\n\nYou can download a live-cd and see if it recognizes disk-controller,\nnic etc. on HP bce and bge, em GB nics works fine.\n\n> Oh, and of course I dispute the supremacy of vim as an editor -- why\n> use that when you've got \"ed\"?  ;-)\n\nI have tried edlin on dos 3 or something like that. But don't recall\nthe commands! :-)\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Mon, 5 Oct 2009 22:51:50 +0200", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Axel Rau wrote:\n>\n> Am 05.10.2009 um 20:06 schrieb Karl Denninger:\n>\n>> gjournal, no. ZFS has potential stability issues - I am VERY interested\n>> in it when those are resolved. 
It looks good on a test platform but I'm\n>> unwilling to run it in production; there are both reports of crashes and\n>> I have been able to crash it under some (admittedly rather extreme)\n>> synthetic loads.\n> How do you prevent from long running fsck with TB size ufs partitions?\n> I had some hope for zfs13 and fbsd 8.0.\n>\n> Axel\nTurn on softupdates. Fsck is deferred and the system comes up almost\ninstantly even with TB-sized partitions; the fsck then cleans up the cruft.\n\n-- Karl", "msg_date": "Mon, 05 Oct 2009 16:44:18 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Mon, Oct 5, 2009 at 6:35 AM, Adam Tauno Williams\n<[email protected]> wrote:\n>> Maybe - if the only thing the server is running is PostgreSQL. Show of\n>> hands - how many users who ONLY install PostgreSQL, and use a bare\n>> minimum OS install, choosing to not run any other software? Now, how\n>> many people ALSO run things like PHP, and require software more\n>> up-to-date than 3 years?\n>\n> Me.\n>\n> Not everyone is running LA?P stack applications.\n\nMe too, even though we are running LAPP stack. Not all LAPP stacks\nare small intranet servers with a few hundred users. We service 1.5\nMillion users running 10k to 20k page views a minute. On a server\nfarm that fills 2/3 of a cabinet.\n", "msg_date": "Mon, 5 Oct 2009 16:05:44 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe OT, not sure Re: Best suiting OS" }, { "msg_contents": "\nAm 05.10.2009 um 23:44 schrieb Karl Denninger:\n\n> Axel Rau wrote:\n>>\n>> Am 05.10.2009 um 20:06 schrieb Karl Denninger:\n>>\n>>> gjournal, no. ZFS has potential stability issues - I am VERY \n>>> interested\n>>> in it when those are resolved. It looks good on a test platform \n>>> but I'm\n>>> unwilling to run it in production; there are both reports of \n>>> crashes and\n>>> I have been able to crash it under some (admittedly rather extreme)\n>>> synthetic loads.\n>> How do you prevent from long running fsck with TB size ufs \n>> partitions?\n>> I had some hope for zfs13 and fbsd 8.0.\n>>\n>> Axel\n> Turn on softupdates. Fsck is deferred and the system comes up almost\n> instantly even with TB-sized partitions; the fsck then cleans up the \n> cruft.\nLast time, I checked, there was a issue with background-fsck.\nI will give it a chance with my new 8.0 box.\nDo you have any experience with SSDs w/o BBUed Raidcontroller?\nAre they fast enough to ensure flash write out of drive cache at power \nfailure after fsync ack?\n\nAxel\n---\[email protected] PGP-Key:29E99DD6 +49 151 2300 9283 computing @ \nchaos claudius\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 6 Oct 2009 12:26:53 +0200", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Axel Rau wrote:\n>\n> Am 05.10.2009 um 23:44 schrieb Karl Denninger:\n>\n>> Turn on softupdates. 
Fsck is deferred and the system comes up almost\n>> instantly even with TB-sized partitions; the fsck then cleans up the\n>> cruft.\n> Last time, I checked, there was a issue with background-fsck.\n> I will give it a chance with my new 8.0 box.\n> Do you have any experience with SSDs w/o BBUed Raidcontroller?\n> Are they fast enough to ensure flash write out of drive cache at power\n> failure after fsync ack?\n>\n> Axel\n> ---\n> [email protected] PGP-Key:29E99DD6 +49 151 2300 9283 computing @\n> chaos claudius\nIMHO use the right tools for the job. In a DBMS environment where data\nintegrity is \"the deal\" this means a BBU'd RAID adapter.\n\nSSDs have their own set of issues, at least at present..... For data\nthat is read-only (or nearly-so) and of size where it can fit on a SSD\nthey can provide VERY significant performance benefits, in that there is\nno seek or latency delay. However, any write-significant application is\nIMHO still better-suited to rotating media at present. This will change\nI'm sure, but it is what it is as of this point in time.\n\nI have yet to run into a problem with background-fsck on a\nsoftupdate-set filesystem. In theory there is a potential issue with\ndrives that make their own decision on write-reordering; in practice on\na DBMS system you run with a BBU'd RAID controller and as such the\ncontroller and system UPS should prevent this from being an issue.\n\nOne of the potential issues that needs to be kept in mind with any\ncritical application is that disks that have \"intelligence\" may choose\nto re-order writes. This can bring trouble (data corruption) in any\napplication where a drive claims to have committed a block to stable\nstorage where in fact it only has it in its buffer RAM and has not\nwritten it to a platter yet. The only reasonable solution to this\nproblem is to run backed-up power so as to mitigate the risk of power\ndisappearing at an inopportune time. Backed-up power brings other\nadvantages as well (as a quality UPS usually comes with significant\nfiltering and power conditioning) which refuses the up front risk of\nfailures and is thus IMHO mandatory for any system that carries data you\ncare about.\n\n-- Karl", "msg_date": "Tue, 06 Oct 2009 08:39:11 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Tue, 6 Oct 2009, Karl Denninger wrote:\n\n> Axel Rau wrote:\n>>\n>> Am 05.10.2009 um 23:44 schrieb Karl Denninger:\n>>\n>>> Turn on softupdates. Fsck is deferred and the system comes up almost\n>>> instantly even with TB-sized partitions; the fsck then cleans up the\n>>> cruft.\n>> Last time, I checked, there was a issue with background-fsck.\n>> I will give it a chance with my new 8.0 box.\n>> Do you have any experience with SSDs w/o BBUed Raidcontroller?\n>> Are they fast enough to ensure flash write out of drive cache at power\n>> failure after fsync ack?\n>>\n>> Axel\n>> ---\n>> [email protected] PGP-Key:29E99DD6 +49 151 2300 9283 computing @\n>> chaos claudius\n> IMHO use the right tools for the job. In a DBMS environment where data\n> integrity is \"the deal\" this means a BBU'd RAID adapter.\n>\n> SSDs have their own set of issues, at least at present..... For data\n> that is read-only (or nearly-so) and of size where it can fit on a SSD\n> they can provide VERY significant performance benefits, in that there is\n> no seek or latency delay. 
However, any write-significant application is\n> IMHO still better-suited to rotating media at present. This will change\n> I'm sure, but it is what it is as of this point in time.\n\nthis depends on what SSD you use. for most of them you are correct, but \nthere are some that have very good write performance.\n\nDavid Lang\n
", "msg_date": "Tue, 6 Oct 2009 11:15:40 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": ">\n> If you run Redhat, I would advise the most recent; i.e., Red Hat Enterprise\n> Linux 5, since they do not add any new features and only correct errors.\n> CentOS is the same as Red Hat, but you probably get better support from Red\n> Hat if you need it -- though you pay for it.\n>\n\n The other thing to take into consideration is the number of vendors who\nrelease drivers for linux distros. Typically, RHEL and SLES get the\nquickest release of drivers for things like network cards, storage cards,\netc...\n\n--Scott\n\n
", "msg_date": "Tue, 6 Oct 2009 16:20:26 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "On Mon, 5 Oct 2009, Jean-Michel Pouré wrote:\n> Go for Debian:\n> * It is a free community, very active.\n> * It is guaranteed to be upgradable.\n> * Very easy to administrate via apt-get.\n\nhttp://www.debian.org/News/2009/20091007\n\nIf you like Debian, but want to use FreeBSD, now you can have both.\n\n> Choose Debian SID or testing, which will provide the latest fixes.\n\nI disagree. If you want a stable server, choose Debian stable, which was \nlast released in February. It gets all the relevant fixes, just like any \nother supported stable distribution - just no new major versions of \nsoftware. The next stable release is scheduled for the new year.\n\nIf you want the latest and greatest, then you can use Debian testing.\n\nMatthew\n\n-- \n The surest protection against temptation is cowardice.\n -- Mark Twain", "msg_date": "Thu, 8 Oct 2009 14:40:53 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Le jeudi 08 octobre 2009 15:40:53, Matthew Wakeling a écrit :\n> On Mon, 5 Oct 2009, Jean-Michel Pouré wrote:\n> > Go for Debian:\n> > * It is a free community, very active.\n> > * It is guaranteed to be upgradable.\n> > * Very easy to administrate via apt-get.\n> \n> http://www.debian.org/News/2009/20091007\n> \n> If you like Debian, but want to use FreeBSD, now you can have both.\n> \n> > Choose Debian SID or testing, which will provide the latest fixes.\n> \n> I disagree. If you want a stable server, choose Debian stable, which was\n> last released in February. It gets all the relevant fixes, just like any\n> other supported stable distribution - just no new major versions of\n> software. The next stable release is scheduled for the new year.\n> \n> If you want the latest and greatest, then you can use Debian testing.\n\ntesting and sid are usually the same with a 15 days delay.\n\nI strongly suggest to have a debian lenny and to backport newer packages if \nreally required (like postgres 8.4). Debian comes with good tools to achieve \nthat (and there is debian-backport repository, sure)\n\n\n-- \nCédric Villemain\nAdministrateur de Base de Données\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org", "msg_date": "Mon, 12 Oct 2009 16:18:18 +0200", "msg_from": "=?utf-8?q?C=C3=A9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" }, { "msg_contents": "Cédric Villemain <[email protected]> writes:\n>> If you want the latest and greatest, then you can use Debian testing.\n>\n> testing and sid are usually the same with a 15 days delay.\n\nAnd receive no out-of-band security updates, so you keep the holes for\n3 days when lucky, and 10 to 15 days otherwise, when choosing\ntesting. So consider stable first, and if you like to be in danger every\ntime you dist-upgrade while *having* to do it each and every day, sid is\nfor your production servers.\n\n> I strongly suggest to have a debian lenny and to backport newer packages if \n> really required (like postgres 8.4). 
Debian comes with good tools to achieve \n> that (and there is debian-backport repository, sure)\n\nstable + backports + volatile (when it makes sense) is a perfect choice :)\n-- \ndim\n", "msg_date": "Mon, 12 Oct 2009 17:26:44 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best suiting OS" } ]
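A brief aside on the write-back caching and fsync points raised in the thread above: whichever OS ends up hosting the database, it is worth confirming what the server itself is configured to guarantee before trusting any cache. The query below is only a sketch, plain SQL against the pg_settings view present in the 8.3/8.4 releases mentioned here:

SELECT name, setting, short_desc
FROM pg_settings
WHERE name IN ('fsync', 'synchronous_commit', 'wal_sync_method', 'full_page_writes');

If fsync is on but a drive or controller acknowledges writes from a volatile cache, only the battery-backed cache and UPS precautions described above actually protect the data.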
[ { "msg_contents": "Hello all,\n\nI'm looking for your general thoughts on CPU brand and HP disk controllers for a PostgreSQL server running Linux. The workload is all over the place sometimes OLTP, sometimes huge/long report transactions, sometimes tons of inserts and warehouse so I'm looking for overall good performance but not necessarily tuned to a specific task. I'm basically looking at something in the ProLiant DL380 series which boils down to Intel Xeon 5500 or AMD Opteron 2600. Are there any notable performance concerns regarding Postgres on either of these cpus?\n\nRAM will likely be in the 16GB range. Any comments on bus speeds or other issues related to the RAM?\n\nWhat is the opinion on HP disk controllers? The standard controller on this server line is the Smart Array P400 (512MB BB cache) although the option is available to go up to P600 or P800. I plan to need about 500GB (8 146GB disks, raid 10). Are the HP controllers worth my time or should I be looking elsewhere?\n\nFinally, I'm thinking 10k RPM SAS drives are appropriate. Does the substantial price increase for 15k RPM drives really show in the overall performance of the storage array?\n\nThanks for your insights.\n\n\n-- \nBenjamin Minshall <[email protected]>\n\n", "msg_date": "Thu, 01 Oct 2009 12:13:38 -0400", "msg_from": "Benjamin Minshall <[email protected]>", "msg_from_op": true, "msg_subject": "AMD, Intel and RAID controllers" }, { "msg_contents": "On Thu, 1 Oct 2009, Benjamin Minshall wrote:\n\n> I'm basically looking at something in the ProLiant DL380 series which \n> boils down to Intel Xeon 5500 or AMD Opteron 2600. Are there any \n> notable performance concerns regarding Postgres on either of these cpus?\n\nThe Xeon 5500 series are very impressive performers. The only reason I \ncould think of for why someone might want the older AMD design is if it \nsaved enouch money to buy more disks for an app limited by those.\n\n> RAM will likely be in the 16GB range. Any comments on bus speeds or other \n> issues related to the RAM?\n\nThe Xeon 5500 models I've tested didn't seem to vary all that much based \non the memory speed itself. It is important to pay attention to where the \nmajor breaks in bus speed on the processor are though, because those bumps \nreally mean something.\n\n> What is the opinion on HP disk controllers? The standard controller on this \n> server line is the Smart Array P400 (512MB BB cache) although the option is \n> available to go up to P600 or P800. I plan to need about 500GB (8 146GB \n> disks, raid 10). Are the HP controllers worth my time or should I be looking \n> elsewhere?\n\nThose are reasonable controllers, and the list archives here are filled \nwith a bias toward the P800. \nhttp://www.nabble.com/Experience-with-HP-Smart-Array-P400-and-SATA-drives--td20788664.html \nis a good sample, there are more. The important thing to realize is that \nRAID5 performance on the card is going to be awful no matter what you do, \nsince you're using RAID10 you should be fine.\n\n> Finally, I'm thinking 10k RPM SAS drives are appropriate. Does the \n> substantial price increase for 15k RPM drives really show in the overall \n> performance of the storage array?\n\nIf your app is limited by disk seeking and general latency, those can make \nsense. 
Ideally, you'd get enough RAM for caching that you're not hitting \nthe disks hard enough for the difference between them to matter so much.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 1 Oct 2009 14:43:41 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD, Intel and RAID controllers" }, { "msg_contents": "On Thu, Oct 1, 2009 at 9:43 PM, Benjamin Minshall\n<[email protected]>wrote:\n\n> Hello all,\n>\n> I'm looking for your general thoughts on CPU brand and HP disk controllers\n> for a PostgreSQL server running Linux. The workload is all over the place\n> sometimes OLTP, sometimes huge/long report transactions, sometimes tons of\n> inserts and warehouse so I'm looking for overall good performance but not\n> necessarily tuned to a specific task. I'm basically looking at something in\n> the ProLiant DL380 series which boils down to Intel Xeon 5500 or AMD Opteron\n> 2600. Are there any notable performance concerns regarding Postgres on\n> either of these cpus?\n>\n\nWe have a Proliant DL585 G5 with 16 cores and 32 GB Ram in the terms of\nprocessors we found that buying amd makes much more sense because in the\nsame price we could put more processors\non the machine and utilize the multiple cores effectively with PG\n\n\n> RAM will likely be in the 16GB range. Any comments on bus speeds or other\n> issues related to the RAM?\n>\n\n> What is the opinion on HP disk controllers? The standard controller on\n> this server line is the Smart Array P400 (512MB BB cache) although the\n> option is available to go up to P600 or P800. I plan to need about 500GB (8\n> 146GB disks, raid 10). Are the HP controllers worth my time or should I be\n> looking elsewhere?\n>\n\nIn my experience P400 is good enough that is if you don't plan to go for\nseparate storage boxes.\n\n\n>\n> Finally, I'm thinking 10k RPM SAS drives are appropriate. Does the\n> substantial price increase for 15k RPM drives really show in the overall\n> performance of the storage array?\n>\n\nNot really\n\n\n>\n> Thanks for your insights.\n>\n>\n> --\n> Benjamin Minshall <[email protected]>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nWith Regards\nAlpesh Gajbe\n\nwww.gnowledge.org\n\nOn Thu, Oct 1, 2009 at 9:43 PM, Benjamin Minshall <[email protected]> wrote:\nHello all,\n\nI'm looking for your general thoughts on CPU brand and HP disk controllers for a PostgreSQL server running Linux.  The workload is all over the place sometimes OLTP, sometimes huge/long report transactions, sometimes tons of inserts and warehouse so I'm looking for overall good performance but not necessarily tuned to a specific task.  I'm basically looking at something in the ProLiant DL380 series which boils down to Intel Xeon 5500 or AMD Opteron 2600.  Are there any notable performance concerns regarding Postgres on either of these cpus?\nWe have a Proliant DL585 G5 with 16 cores and 32 GB Ram in the terms of processors we found that buying amd makes much more sense because in the same price we could put more processors  on the machine and utilize the multiple cores effectively with PG \n \nRAM will likely be in the 16GB range.  Any comments on bus speeds or other issues related to the RAM?\n\nWhat is the opinion on HP disk controllers?  
The standard controller on this server line is the Smart Array P400 (512MB BB cache) although the option is available to go up to P600 or P800.  I plan to need about 500GB (8 146GB disks, raid 10).  Are the HP controllers worth my time or should I be looking elsewhere?\n In my experience P400 is  good enough that is if you don't plan to go for separate storage boxes.   \n\nFinally, I'm thinking 10k RPM SAS drives are appropriate.  Does the substantial price increase for 15k RPM drives really show in the overall performance of the storage array?Not really  \n\n\nThanks for your insights.\n\n\n-- \nBenjamin Minshall <[email protected]>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- With RegardsAlpesh Gajbewww.gnowledge.org", "msg_date": "Fri, 2 Oct 2009 12:06:16 +0530", "msg_from": "Alpesh Gajbe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD, Intel and RAID controllers" }, { "msg_contents": "On Thu, Oct 1, 2009 at 9:43 PM, Benjamin Minshall\n<[email protected]>wrote:\n\n> Hello all,\n>\n> I'm looking for your general thoughts on CPU brand and HP disk controllers\n> for a PostgreSQL server running Linux. The workload is all over the place\n> sometimes OLTP, sometimes huge/long report transactions, sometimes tons of\n> inserts and warehouse so I'm looking for overall good performance but not\n> necessarily tuned to a specific task. I'm basically looking at something in\n> the ProLiant DL380 series which boils down to Intel Xeon 5500 or AMD Opteron\n> 2600. Are there any notable performance concerns regarding Postgres on\n> either of these cpus?\n>\n\nWe have a Proliant DL585 G5 with 16 cores and 32 GB Ram in the terms of\nprocessors we found that buying amd makes much more sense because in the\nsame price we could put more processors\non the machine and utilize the multiple cores effectively with PG\n\n\n>\n> RAM will likely be in the 16GB range. Any comments on bus speeds or other\n> issues related to the RAM?\n>\n> What is the opinion on HP disk controllers? The standard controller on\n> this server line is the Smart Array P400 (512MB BB cache) although the\n> option is available to go up to P600 or P800. I plan to need about 500GB (8\n> 146GB disks, raid 10). Are the HP controllers worth my time or should I be\n> looking elsewhere?\n>\n\nIn my experience P400 is good enough that is if you don't plan to go for\nseparate storage boxes.\n\n\n> Finally, I'm thinking 10k RPM SAS drives are appropriate. Does the\n> substantial price increase for 15k RPM drives really show in the overall\n> performance of the storage array?\n>\n\nNot really\n\n>\n> Thanks for your insights.\n>\n>\n> --\n> Benjamin Minshall <[email protected]>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nWith Regards\nAlpesh Gajbe\n\nOn Thu, Oct 1, 2009 at 9:43 PM, Benjamin Minshall <[email protected]> wrote:\nHello all,\n\nI'm looking for your general thoughts on CPU brand and HP disk controllers for a PostgreSQL server running Linux.  The workload is all over the place sometimes OLTP, sometimes huge/long report transactions, sometimes tons of inserts and warehouse so I'm looking for overall good performance but not necessarily tuned to a specific task.  
I'm basically looking at something in the ProLiant DL380 series which boils down to Intel Xeon 5500 or AMD Opteron 2600.  Are there any notable performance concerns regarding Postgres on either of these cpus?\nWe have a Proliant DL585 G5 with 16 cores and 32 GB Ram in the\nterms of processors we found that buying amd makes much more sense\nbecause in the same price we could put more processors  on the machine and utilize the multiple cores effectively with PG \n\nRAM will likely be in the 16GB range.  Any comments on bus speeds or other issues related to the RAM?\n\nWhat is the opinion on HP disk controllers?  The standard controller on this server line is the Smart Array P400 (512MB BB cache) although the option is available to go up to P600 or P800.  I plan to need about 500GB (8 146GB disks, raid 10).  Are the HP controllers worth my time or should I be looking elsewhere?\n In my experience P400 is  good enough that is if you don't plan to go for separate storage boxes. \n\nFinally, I'm thinking 10k RPM SAS drives are appropriate.  Does the substantial price increase for 15k RPM drives really show in the overall performance of the storage array?Not really \n\n\nThanks for your insights.\n\n\n-- \nBenjamin Minshall <[email protected]>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- With RegardsAlpesh Gajbe", "msg_date": "Fri, 2 Oct 2009 12:17:01 +0530", "msg_from": "alpesh gajbe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD, Intel and RAID controllers" }, { "msg_contents": "On Fri, Oct 2, 2009 at 12:47 AM, alpesh gajbe <[email protected]> wrote:\n\n> We have a Proliant DL585 G5 with 16 cores and 32 GB Ram in the terms of\n> processors we found that buying amd makes much more sense because in the\n> same price we could put more processors\n> on the machine and utilize the multiple cores effectively with PG\n\nI just bought a machine with 12 2.2GHz AMD cores for less than it\nwould have cost me for 8 2.26GHz Nehelem cores, so yeah, I've found\nthe same thing. And as you go up the price difference keeps getting\nlarger.\n", "msg_date": "Fri, 2 Oct 2009 06:50:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD, Intel and RAID controllers" }, { "msg_contents": "On Fri, 2 Oct 2009, Scott Marlowe wrote:\n\n> On Fri, Oct 2, 2009 at 12:47 AM, alpesh gajbe <[email protected]> wrote:\n>\n>> We have a Proliant DL585 G5 with 16 cores and 32 GB Ram in the terms of\n>> processors we found that buying amd makes much more sense because in the\n>> same price we could put more processors\n>> on the machine and utilize the multiple cores effectively with PG\n>\n> I just bought a machine with 12 2.2GHz AMD cores for less than it\n> would have cost me for 8 2.26GHz Nehelem cores, so yeah, I've found\n> the same thing. And as you go up the price difference keeps getting\n> larger.\n\nHuh, apparently I never sent this response...the whole Intel/AMD \ncomparison at this point really depends on how fast you need any \nindividual core to be. The Intel i7 systems I was suggesting I like are \nexpensive, but they are the fastest cores around right now by a good \nmargin too. 
If your demands are for lots of cores and you don't care how \nmuch any one of them executes, then sure the AMD systems will save you \nquite a bit of cash as suggested above.\n\nBut it's not difficult to run into situations with a PostgreSQL server \nwhere you're bottlenecked waiting for something that can only run on one \ncore at a time. Big reports and COPY are common examples. If that's your \nsituation, there's no substitute for making the individual cores as fast \nas feasible, and there the price premium Intel charges can easily be \nworthwhile.\n\nAnd as I already suggested a while ago, if you're disk bound, you \nshouldn't be worrying about optimizing your processor choice very much at \nall. Get something cheaper and throw money at spindles and caching \ninstead.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 29 Oct 2009 21:39:26 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AMD, Intel and RAID controllers" } ]
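As a rough companion to the closing advice in this thread (fast individual cores when single sessions such as COPY or big reports are the bottleneck, more spindles and cache when the disks are), the hit/read counters in pg_stat_database give a quick view of how often PostgreSQL actually goes to disk. This is only a sketch, and OS-level numbers such as iowait still tell the fuller story:

SELECT datname, blks_hit, blks_read,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
ORDER BY blks_read DESC;

A consistently low hit percentage under production load suggests the money is better spent on the I/O subsystem and RAM than on faster cores.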
[ { "msg_contents": " Hello everyone,\n\n I'm using PostgreSQL 8.3.8 running on a server with 2 Xeon CPUs, 4GB\n RAM, 4+2 disks in RAID 5 and CentOS 5.3. There's only one database\n which dumped with pgdump takes ~0.5GB.\n\n There are ~100 tables in the database and one of them (tableOne) always\n contains only a single row. There's one index on it. However performing\n update on the single row (which occurs every 60 secs) takes a\n considerably long time -- around 200ms. The system is not loaded in any\n way.\n \n The table definition is:\n\n CREATE TABLE tableOne (\n value1 BIGINT NOT NULL,\n value2 INTEGER NOT NULL,\n value3 INTEGER NOT NULL,\n value4 INTEGER NOT NULL,\n value5 INTEGER NOT NULL,\n );\n CREATE INDEX tableOne_index1 ON tableOne (value5);\n\n And the SQL query to update the _only_ row in the above table is:\n ('value5' can't be used to identify the row as I don't know it at the\n time)\n\n UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n\n And this is what EXPLAIN says on the above SQL query:\n\n DB=> EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n LOG: duration: 235.948 ms statement: EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n QUERY PLAN\n --------------------------------------------------------\n Seq Scan on jackpot (cost=0.00..1.01 rows=1 width=14)\n (1 row)\n\n What takes PostgreSQL so long? I guess I could add a fake 'id' column,\n create an index on it to identify the single row, but still -- the time\n seems quite ridiculous to me.\n\n Thanks,\n-- \n\t\tMichal\t\t([email protected])\n", "msg_date": "Fri, 2 Oct 2009 10:18:05 +0200", "msg_from": "Michal Vitecek <[email protected]>", "msg_from_op": true, "msg_subject": "updating a row in a table with only one row" }, { "msg_contents": "In response to Michal Vitecek :\n> There are ~100 tables in the database and one of them (tableOne) always\n> contains only a single row. There's one index on it. However performing\n\nIn this case, only one row, you don't need an index. Really.\n\n> update on the single row (which occurs every 60 secs) takes a\n> considerably long time -- around 200ms. The system is not loaded in any\n> way.\n> \n> UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n> \n> And this is what EXPLAIN says on the above SQL query:\n> \n> DB=> EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n> LOG: duration: 235.948 ms statement: EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n> QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on jackpot (cost=0.00..1.01 rows=1 width=14)\n> (1 row)\n\ntableOne or jackpot?\n\n\n> \n> What takes PostgreSQL so long? I guess I could add a fake 'id' column,\n> create an index on it to identify the single row, but still -- the time\n> seems quite ridiculous to me.\n\nMaybe a lot of dead tuples, can you show us the output generated from\nexplain analyse?\n\nI would suggest you to do a 'vacuum full' on this table.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n", "msg_date": "Fri, 2 Oct 2009 11:14:35 +0200", "msg_from": "\"A. 
Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "> There are ~100 tables in the database and one of them (tableOne) always\n> contains only a single row. There's one index on it. However performing\n> update on the single row (which occurs every 60 secs) takes a\n> considerably long time -- around 200ms. The system is not loaded in any\n> way.\n\n\nHow often is the the table VACUUMed ?\nAt the mentioned update rate, the table has a daily growth of 1440 dead row \nversions.\n\nHelder M. Vieira\n\n\n\n", "msg_date": "Fri, 2 Oct 2009 10:25:20 +0100", "msg_from": "=?iso-8859-1?Q?H=E9lder_M._Vieira?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "On Fri, Oct 2, 2009 at 4:18 AM, Michal Vitecek <[email protected]> wrote:\n>  Hello everyone,\n>\n>  I'm using PostgreSQL 8.3.8 running on a server with 2 Xeon CPUs, 4GB\n>  RAM, 4+2 disks in RAID 5 and CentOS 5.3. There's only one database\n>  which dumped with pgdump takes ~0.5GB.\n>\n>  There are ~100 tables in the database and one of them (tableOne) always\n>  contains only a single row. There's one index on it. However performing\n>  update on the single row (which occurs every 60 secs) takes a\n>  considerably long time -- around 200ms. The system is not loaded in any\n>  way.\n>\n>  The table definition is:\n>\n>  CREATE TABLE tableOne (\n>    value1      BIGINT NOT NULL,\n>    value2      INTEGER NOT NULL,\n>    value3      INTEGER NOT NULL,\n>    value4      INTEGER NOT NULL,\n>    value5      INTEGER NOT NULL,\n>  );\n>  CREATE INDEX tableOne_index1 ON tableOne (value5);\n>\n>  And the SQL query to update the _only_ row in the above table is:\n>  ('value5' can't be used to identify the row as I don't know it at the\n>  time)\n>\n>  UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>\n>  And this is what EXPLAIN says on the above SQL query:\n>\n>  DB=> EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>  LOG:  duration: 235.948 ms  statement: EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>                        QUERY PLAN\n>  --------------------------------------------------------\n>  Seq Scan on jackpot  (cost=0.00..1.01 rows=1 width=14)\n>  (1 row)\n>\n>  What takes PostgreSQL so long? I guess I could add a fake 'id' column,\n>  create an index on it to identify the single row, but still -- the time\n>  seems quite ridiculous to me.\n\nit is ridiculous. your problem is almost definitely dead rows. I\ncan't recall (and I can't find the info anywhere) if the 'hot' feature\nrequires an index to be active -- I think it does. If so, creating a\ndummy field and indexing it should resolve the problem. Can you\nconfirm the dead row issue by doing vacuum verbose and create the\nindex? please respond with your results, I'm curious. Also, is\nautovacuum on? 
Have you measured iowait?\n\nmerlin\n", "msg_date": "Fri, 2 Oct 2009 09:54:33 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "On Fri, Oct 2, 2009 at 9:54 AM, Merlin Moncure <[email protected]> wrote:\n> On Fri, Oct 2, 2009 at 4:18 AM, Michal Vitecek <[email protected]> wrote:\n>>  Hello everyone,\n>>\n>>  I'm using PostgreSQL 8.3.8 running on a server with 2 Xeon CPUs, 4GB\n>>  RAM, 4+2 disks in RAID 5 and CentOS 5.3. There's only one database\n>>  which dumped with pgdump takes ~0.5GB.\n>>\n>>  There are ~100 tables in the database and one of them (tableOne) always\n>>  contains only a single row. There's one index on it. However performing\n>>  update on the single row (which occurs every 60 secs) takes a\n>>  considerably long time -- around 200ms. The system is not loaded in any\n>>  way.\n>>\n>>  The table definition is:\n>>\n>>  CREATE TABLE tableOne (\n>>    value1      BIGINT NOT NULL,\n>>    value2      INTEGER NOT NULL,\n>>    value3      INTEGER NOT NULL,\n>>    value4      INTEGER NOT NULL,\n>>    value5      INTEGER NOT NULL,\n>>  );\n>>  CREATE INDEX tableOne_index1 ON tableOne (value5);\n>>\n>>  And the SQL query to update the _only_ row in the above table is:\n>>  ('value5' can't be used to identify the row as I don't know it at the\n>>  time)\n>>\n>>  UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>\n>>  And this is what EXPLAIN says on the above SQL query:\n>>\n>>  DB=> EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>  LOG:  duration: 235.948 ms  statement: EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>                        QUERY PLAN\n>>  --------------------------------------------------------\n>>  Seq Scan on jackpot  (cost=0.00..1.01 rows=1 width=14)\n>>  (1 row)\n>>\n>>  What takes PostgreSQL so long? I guess I could add a fake 'id' column,\n>>  create an index on it to identify the single row, but still -- the time\n>>  seems quite ridiculous to me.\n>\n> it is ridiculous.  your problem is almost definitely dead rows.  I\n> can't recall (and I can't find the info anywhere) if the 'hot' feature\n> requires an index to be active -- I think it does.  If so, creating a\n> dummy field and indexing it should resolve the problem.   Can you\n> confirm the dead row issue by doing vacuum verbose and create the\n> index?  please respond with your results, I'm curious.  Also, is\n> autovacuum on?  Have you measured iowait?\n\nSince he's updating all the fields in the table, an index will\ncertainly ensure that HOT does not apply, no?\n\n...Robert\n", "msg_date": "Fri, 2 Oct 2009 13:39:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "On Fri, Oct 2, 2009 at 1:39 PM, Robert Haas <[email protected]> wrote:\n> On Fri, Oct 2, 2009 at 9:54 AM, Merlin Moncure <[email protected]> wrote:\n>> it is ridiculous.  your problem is almost definitely dead rows.  I\n>> can't recall (and I can't find the info anywhere) if the 'hot' feature\n>> requires an index to be active -- I think it does.  If so, creating a\n>> dummy field and indexing it should resolve the problem.   Can you\n>> confirm the dead row issue by doing vacuum verbose and create the\n>> index?  please respond with your results, I'm curious.  Also, is\n>> autovacuum on?  
Have you measured iowait?\n>\n> Since he's updating all the fields in the table, an index will\n> certainly ensure that HOT does not apply, no?\n\nyou're right...I missed that he put an index on value5 (why?). That's\nwhat's killing him.\n\nmerlin\n", "msg_date": "Fri, 2 Oct 2009 14:18:38 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "Robert Haas wrote:\n> On Fri, Oct 2, 2009 at 9:54 AM, Merlin Moncure <[email protected]> wrote:\n>> On Fri, Oct 2, 2009 at 4:18 AM, Michal Vitecek <[email protected]> wrote:\n>>> Hello everyone,\n>>>\n>>> I'm using PostgreSQL 8.3.8 running on a server with 2 Xeon CPUs, 4GB\n>>> RAM, 4+2 disks in RAID 5 and CentOS 5.3. There's only one database\n>>> which dumped with pgdump takes ~0.5GB.\n>>>\n>>> There are ~100 tables in the database and one of them (tableOne) always\n>>> contains only a single row. There's one index on it. However performing\n>>> update on the single row (which occurs every 60 secs) takes a\n>>> considerably long time -- around 200ms. The system is not loaded in any\n>>> way.\n>>>\n>>> The table definition is:\n>>>\n>>> CREATE TABLE tableOne (\n>>> value1 BIGINT NOT NULL,\n>>> value2 INTEGER NOT NULL,\n>>> value3 INTEGER NOT NULL,\n>>> value4 INTEGER NOT NULL,\n>>> value5 INTEGER NOT NULL,\n>>> );\n>>> CREATE INDEX tableOne_index1 ON tableOne (value5);\n>>>\n>>> And the SQL query to update the _only_ row in the above table is:\n>>> ('value5' can't be used to identify the row as I don't know it at the\n>>> time)\n>>>\n>>> UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>>\n>>> And this is what EXPLAIN says on the above SQL query:\n>>>\n>>> DB=> EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>> LOG: duration: 235.948 ms statement: EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>> QUERY PLAN\n>>> --------------------------------------------------------\n>>> Seq Scan on jackpot (cost=0.00..1.01 rows=1 width=14)\n>>> (1 row)\n>>>\n>>> What takes PostgreSQL so long? I guess I could add a fake 'id' column,\n>>> create an index on it to identify the single row, but still -- the time\n>>> seems quite ridiculous to me.\n>> it is ridiculous. your problem is almost definitely dead rows. I\n>> can't recall (and I can't find the info anywhere) if the 'hot' feature\n>> requires an index to be active -- I think it does. If so, creating a\n>> dummy field and indexing it should resolve the problem. Can you\n>> confirm the dead row issue by doing vacuum verbose and create the\n>> index? please respond with your results, I'm curious. Also, is\n>> autovacuum on? Have you measured iowait?\n> \n> Since he's updating all the fields in the table, an index will\n> certainly ensure that HOT does not apply, no?\n\nAn extra index shouldn't hurt if you don't update the indexed dummy\ncolumn. But the existing tableOne_index1 will cause HOT to not apply, if\nvalue5 is updated. I'd suggest dropping it (and not creating any other\nindexes either), it won't do any good on a table with only one row anyway.\n\nIf the table is indeed bloated, VACUUM FULL should shrink it back. I\nwonder how it got to be that way, though. 
Autovacuum should keep a table\nlike that in check.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 02 Oct 2009 21:38:50 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "Robert Haas wrote:\n>On Fri, Oct 2, 2009 at 9:54 AM, Merlin Moncure <[email protected]> wrote:\n>> On Fri, Oct 2, 2009 at 4:18 AM, Michal Vitecek <[email protected]> wrote:\n>>>  Hello everyone,\n>>>\n>>>  I'm using PostgreSQL 8.3.8 running on a server with 2 Xeon CPUs, 4GB\n>>>  RAM, 4+2 disks in RAID 5 and CentOS 5.3. There's only one database\n>>>  which dumped with pgdump takes ~0.5GB.\n>>>\n>>>  There are ~100 tables in the database and one of them (tableOne) always\n>>>  contains only a single row. There's one index on it. However performing\n>>>  update on the single row (which occurs every 60 secs) takes a\n>>>  considerably long time -- around 200ms. The system is not loaded in any\n>>>  way.\n>>>\n>>>  The table definition is:\n>>>\n>>>  CREATE TABLE tableOne (\n>>>    value1      BIGINT NOT NULL,\n>>>    value2      INTEGER NOT NULL,\n>>>    value3      INTEGER NOT NULL,\n>>>    value4      INTEGER NOT NULL,\n>>>    value5      INTEGER NOT NULL,\n>>>  );\n>>>  CREATE INDEX tableOne_index1 ON tableOne (value5);\n>>>\n>>>  And the SQL query to update the _only_ row in the above table is:\n>>>  ('value5' can't be used to identify the row as I don't know it at the\n>>>  time)\n>>>\n>>>  UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>>\n>>>  And this is what EXPLAIN says on the above SQL query:\n>>>\n>>>  DB=> EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>>  LOG:  duration: 235.948 ms  statement: EXPLAIN UPDATE tableOne SET value1 = newValue1, value2 = newValue2, value5 = newValue5;\n>>>                        QUERY PLAN\n>>>  --------------------------------------------------------\n>>>  Seq Scan on tableOne  (cost=0.00..1.01 rows=1 width=14)\n>>>  (1 row)\n>>>\n>>>  What takes PostgreSQL so long? I guess I could add a fake 'id' column,\n>>>  create an index on it to identify the single row, but still -- the time\n>>>  seems quite ridiculous to me.\n>>\n>> it is ridiculous.  your problem is almost definitely dead rows.  I\n>> can't recall (and I can't find the info anywhere) if the 'hot' feature\n>> requires an index to be active -- I think it does.  If so, creating a\n>> dummy field and indexing it should resolve the problem.   Can you\n>> confirm the dead row issue by doing vacuum verbose and create the\n>> index?  please respond with your results, I'm curious.  Also, is\n>> autovacuum on?  Have you measured iowait?\n\n Autovacuum is on. 
I have dropped the superfluous index on value5.\n \n The following is a result of running vacuum verbose analyze on the\n table after the database has been running for 3 days (it was restored\n from pgdump 3 days ago).\n\n DB=> vacuum verbose analyze tableOne;\n INFO: vacuuming \"public.tableOne\"\n INFO: \"tableOne\": found 82 removable, 1 nonremovable row versions in 1 pages\n DETAIL: 0 dead row versions cannot be removed yet.\n There were 141 unused item pointers.\n 1 pages contain useful free space.\n 0 pages are entirely empty.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\n INFO: analyzing \"public.tableOne\"\n INFO: \"tableOne\": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rows\n LOG: duration: 23.833 ms statement: vacuum verbose analyze tableOne;\n VACUUM\n\n The problem occurs also on different tables but on tableOne this is\n most striking as it is very simple. Also I should mention that the\n problem doesn't occur every time -- but in ~1/6 cases.\n\n Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB\n with write-back enabled. Could it be that its internal cache becomes\n full and all disk I/O operations are delayed until it writes all\n changes to hard drives?\n \n Thanks,\n-- \n\t\tMichal Vitecek\t\t([email protected])\n", "msg_date": "Mon, 5 Oct 2009 11:17:06 +0200", "msg_from": "Michal Vitecek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "On Mon, Oct 5, 2009 at 5:17 AM, Michal Vitecek <[email protected]> wrote:\n\n>  Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB\n>  with write-back enabled. Could it be that its internal cache becomes\n>  full and all disk I/O operations are delayed until it writes all\n>  changes to hard drives?\n\nthat's possible...the red flag is going to be iowait. if your server\ncan't keep up with the sync demands for example, you will eventually\noutrun the write cache and you can start to see slow queries. With\nyour server though it would take in the hundreds of (write)\ntransactions per second to do that minimum.\n\nmerlin\n", "msg_date": "Mon, 5 Oct 2009 07:50:40 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "Merlin Moncure wrote:\n>On Mon, Oct 5, 2009 at 5:17 AM, Michal Vitecek <[email protected]> wrote:\n>\n>>  Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB\n>>  with write-back enabled. Could it be that its internal cache becomes\n>>  full and all disk I/O operations are delayed until it writes all\n>>  changes to hard drives?\n>\n>that's possible...the red flag is going to be iowait. if your server\n>can't keep up with the sync demands for example, you will eventually\n>outrun the write cache and you can start to see slow queries. With\n>your server though it would take in the hundreds of (write)\n>transactions per second to do that minimum.\n\n The problem is that the server is not loaded in any way. The iowait is\n 0.62%, there's only 72 sectors written/s, but the maximum await that I\n saw was 28ms (!). Any attempts to reduce the time (I/O schedulers,\n disabling bgwriter, increasing number of checkpoints, decreasing shared\n buffers, disabling read cache on the card etc.) didn't help. After some\n 3-5m there occurs a COMMIT which takes 100-10000x longer time than\n usual. 
Setting fsynch to off Temporarily improved the COMMIT times\n considerably but I fear to have this option off all the time.\n\n Is anybody else using the same RAID card? I suspect the problem lies\n somewhere between the aacraid module and the card. The aacraid module\n ignores setting of the 'cache' parameter to 3 -- this should completely\n disable the SYNCHRONIZE_CACHE command.\n\n Any hints?\n\n Thanks,\n-- \n\t\tMichal Vitecek\t\t([email protected])\n", "msg_date": "Tue, 6 Oct 2009 16:59:20 +0200", "msg_from": "Michal Vitecek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "On Tue, Oct 6, 2009 at 10:59 AM, Michal Vitecek <[email protected]> wrote:\n> Merlin Moncure wrote:\n>>On Mon, Oct 5, 2009 at 5:17 AM, Michal Vitecek <[email protected]> wrote:\n>>\n>>>  Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB\n>>>  with write-back enabled. Could it be that its internal cache becomes\n>>>  full and all disk I/O operations are delayed until it writes all\n>>>  changes to hard drives?\n>>\n>>that's possible...the red flag is going to be iowait. if your server\n>>can't keep up with the sync demands for example, you will eventually\n>>outrun the write cache and you can start to see slow queries.  With\n>>your server though it would take in the hundreds of (write)\n>>transactions per second to do that minimum.\n>\n>  The problem is that the server is not loaded in any way. The iowait is\n>  0.62%, there's only 72 sectors written/s, but the maximum await that I\n>  saw was 28ms (!). Any attempts to reduce the time (I/O schedulers,\n>  disabling bgwriter, increasing number of checkpoints, decreasing shared\n>  buffers, disabling read cache on the card etc.) didn't help. After some\n>  3-5m there occurs a COMMIT which takes 100-10000x longer time than\n>  usual. Setting fsynch to off Temporarily improved the COMMIT times\n>  considerably but I fear to have this option off all the time.\n>\n>  Is anybody else using the same RAID card? I suspect the problem lies\n>  somewhere between the aacraid module and the card. The aacraid module\n>  ignores setting of the 'cache' parameter to 3 -- this should completely\n>  disable the SYNCHRONIZE_CACHE command.\n\nI think you're right. One thing you can do is leave fsync on but\ndisable synchronous_commit. This is compromise between fsync on/off\n(data consistent following crash, but you may lose some transactions).\n\nWe need to know what iowait is at the precise moment you get the long\ncommit time. Throw a top, give it short update interval (like .25\nseconds), and watch.\n\nmerlin\n", "msg_date": "Tue, 6 Oct 2009 11:25:14 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "Merlin Moncure wrote:\n> On Tue, Oct 6, 2009 at 10:59 AM, Michal Vitecek <[email protected]> wrote:\n>> Merlin Moncure wrote:\n>>> On Mon, Oct 5, 2009 at 5:17 AM, Michal Vitecek <[email protected]> wrote:\n>>>\n>>>> Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB\n>>>> with write-back enabled. Could it be that its internal cache becomes\n>>>> full and all disk I/O operations are delayed until it writes all\n>>>> changes to hard drives?\n>>> that's possible...the red flag is going to be iowait. if your server\n>>> can't keep up with the sync demands for example, you will eventually\n>>> outrun the write cache and you can start to see slow queries. 
With\n>>> your server though it would take in the hundreds of (write)\n>>> transactions per second to do that minimum.\n>> The problem is that the server is not loaded in any way. The iowait is\n>> 0.62%, there's only 72 sectors written/s, but the maximum await that I\n>> saw was 28ms (!). Any attempts to reduce the time (I/O schedulers,\n>> disabling bgwriter, increasing number of checkpoints, decreasing shared\n>> buffers, disabling read cache on the card etc.) didn't help. After some\n>> 3-5m there occurs a COMMIT which takes 100-10000x longer time than\n>> usual. Setting fsynch to off Temporarily improved the COMMIT times\n>> considerably but I fear to have this option off all the time.\n>>\n>> Is anybody else using the same RAID card? I suspect the problem lies\n>> somewhere between the aacraid module and the card. The aacraid module\n>> ignores setting of the 'cache' parameter to 3 -- this should completely\n>> disable the SYNCHRONIZE_CACHE command.\n> \n> I think you're right. One thing you can do is leave fsync on but\n> disable synchronous_commit. This is compromise between fsync on/off\n> (data consistent following crash, but you may lose some transactions).\n> \n> We need to know what iowait is at the precise moment you get the long\n> commit time. Throw a top, give it short update interval (like .25\n> seconds), and watch.\n\ntop(1) has a batch mode (-b) that's useful for sending results to a file.\n\nCraig\n", "msg_date": "Tue, 06 Oct 2009 08:57:51 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "Merlin Moncure wrote:\n>On Tue, Oct 6, 2009 at 10:59 AM, Michal Vitecek <[email protected]> wrote:\n>> Merlin Moncure wrote:\n>>>On Mon, Oct 5, 2009 at 5:17 AM, Michal Vitecek <[email protected]> wrote:\n>>>\n>>>>  Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB\n>>>>  with write-back enabled. Could it be that its internal cache becomes\n>>>>  full and all disk I/O operations are delayed until it writes all\n>>>>  changes to hard drives?\n>>>\n>>>that's possible...the red flag is going to be iowait. if your server\n>>>can't keep up with the sync demands for example, you will eventually\n>>>outrun the write cache and you can start to see slow queries.  With\n>>>your server though it would take in the hundreds of (write)\n>>>transactions per second to do that minimum.\n>>\n>>  The problem is that the server is not loaded in any way. The iowait is\n>>  0.62%, there's only 72 sectors written/s, but the maximum await that I\n>>  saw was 28ms (!). Any attempts to reduce the time (I/O schedulers,\n>>  disabling bgwriter, increasing number of checkpoints, decreasing shared\n>>  buffers, disabling read cache on the card etc.) didn't help. After some\n>>  3-5m there occurs a COMMIT which takes 100-10000x longer time than\n>>  usual. Setting fsynch to off Temporarily improved the COMMIT times\n>>  considerably but I fear to have this option off all the time.\n>>\n>>  Is anybody else using the same RAID card? I suspect the problem lies\n>>  somewhere between the aacraid module and the card. The aacraid module\n>>  ignores setting of the 'cache' parameter to 3 -- this should completely\n>>  disable the SYNCHRONIZE_CACHE command.\n>\n>I think you're right. One thing you can do is leave fsync on but\n>disable synchronous_commit. 
This is compromise between fsync on/off\n>(data consistent following crash, but you may lose some transactions).\n>\n>We need to know what iowait is at the precise moment you get the long\n>commit time. Throw a top, give it short update interval (like .25\n>seconds), and watch.\n\n I'm writing with resolution to the problem: It was indeed caused by the\n IBM ServerRAID 8k SAS RAID card. Putting the WAL logs onto a separate\n (not in RAID 5) hard drive helped tremendously with the COMMIT times\n and the occurrence of the very long SQL query times dropped from 3-5min\n to ~45min where only INSERT or UPDATE queries were slow. Flashing\n firmware of the RAID card and of all hard drives fixed the problem\n altogether. To explain why we waited for so long with the firmware\n updates was because of the fact that IBM frequently puts a new version\n on their servers and then, after a day or two, replaces it with a newer\n version which fixes a critical bug introduced in the previous one.\n\n Thanks,\n-- \n\t\tMichal\t\t([email protected])\n", "msg_date": "Mon, 12 Oct 2009 11:23:39 +0200", "msg_from": "Michal Vitecek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: updating a row in a table with only one row" }, { "msg_contents": "On Mon, Oct 12, 2009 at 5:23 AM, Michal Vitecek <[email protected]> wrote:\n> Merlin Moncure wrote:\n>>On Tue, Oct 6, 2009 at 10:59 AM, Michal Vitecek <[email protected]> wrote:\n>>> Merlin Moncure wrote:\n>>>>On Mon, Oct 5, 2009 at 5:17 AM, Michal Vitecek <[email protected]> wrote:\n>>>>\n>>>>>  Could the problem be the HW RAID card? There's ServerRAID 8k with 256MB\n>>>>>  with write-back enabled. Could it be that its internal cache becomes\n>>>>>  full and all disk I/O operations are delayed until it writes all\n>>>>>  changes to hard drives?\n>>>>\n>>>>that's possible...the red flag is going to be iowait. if your server\n>>>>can't keep up with the sync demands for example, you will eventually\n>>>>outrun the write cache and you can start to see slow queries.  With\n>>>>your server though it would take in the hundreds of (write)\n>>>>transactions per second to do that minimum.\n>>>\n>>>  The problem is that the server is not loaded in any way. The iowait is\n>>>  0.62%, there's only 72 sectors written/s, but the maximum await that I\n>>>  saw was 28ms (!). Any attempts to reduce the time (I/O schedulers,\n>>>  disabling bgwriter, increasing number of checkpoints, decreasing shared\n>>>  buffers, disabling read cache on the card etc.) didn't help. After some\n>>>  3-5m there occurs a COMMIT which takes 100-10000x longer time than\n>>>  usual. Setting fsynch to off Temporarily improved the COMMIT times\n>>>  considerably but I fear to have this option off all the time.\n>>>\n>>>  Is anybody else using the same RAID card? I suspect the problem lies\n>>>  somewhere between the aacraid module and the card. The aacraid module\n>>>  ignores setting of the 'cache' parameter to 3 -- this should completely\n>>>  disable the SYNCHRONIZE_CACHE command.\n>>\n>>I think you're right.  One thing you can do is leave fsync on but\n>>disable synchronous_commit.  This is compromise between fsync on/off\n>>(data consistent following crash, but you may lose some transactions).\n>>\n>>We need to know what iowait is at the precise moment you get the long\n>>commit time.  Throw a top, give it short update interval (like .25\n>>seconds), and watch.\n>\n>  I'm writing with resolution to the problem: It was indeed caused by the\n>  IBM ServerRAID 8k SAS RAID card. 
Putting the WAL logs onto a separate\n>  (not in RAID 5) hard drive helped tremendously with the COMMIT times\n>  and the occurrence of the very long SQL query times dropped from 3-5min\n>  to ~45min where only INSERT or UPDATE queries were slow. Flashing\n>  firmware of the RAID card and of all hard drives fixed the problem\n>  altogether. To explain why we waited for so long with the firmware\n>  updates was because of the fact that IBM frequently puts a new version\n>  on their servers and then, after a day or two, replaces it with a newer\n>  version which fixes a critical bug introduced in the previous one.\n\nI noticed similar behavior on a different raid controller (LSI based\nDell Perc 5).  Things ran ok most of the time, but during periods of\nmoderate load and up sometimes the write back cache on the card will\nfill up and flush.  During this operation the system would become\ncompletely unresponsive for 2-20 seconds if fsync was on.  Needless to\nsay, on an OLTP system this is completely unacceptable.  A patch by\nthe vendor later reduced but did not completely fix the problem.  One\nthing about raid controllers I really don't like is that they have a\ntendency to cause the o/s to lie about iowait...really hurts you from\na diagnostic point of view.\n\nThis is why I've soured a bit on hardware raid as a concept.  While\nthe tools/features/bios configuration is all nice, the raid controller\nis a black box that completely defines the performance of i/o bound\nsystems...that's a little scary.  Note I'm not advising to run out and\ngo install software raid everywhere, but these are certainly\ncautionary tales.\n\nmerlin\n", "msg_date": "Mon, 12 Oct 2009 10:25:18 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: updating a row in a table with only one row" } ]
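A minimal sketch of the two checks discussed in the thread above, assuming PostgreSQL 8.3 and the poster's one-row table tableOne; the literal values in the UPDATE are placeholders and the queries are ordinary 8.3-era statistics views and settings, not output taken from the thread itself:

-- dead-tuple and HOT-update counters for the one-row table
-- (unquoted identifiers fold to lower case, hence 'tableone')
SELECT n_live_tup, n_dead_tup, n_tup_upd, n_tup_hot_upd, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'tableone';

-- the compromise suggested above: keep fsync on, skip only the commit
-- wait, and only for this transaction (8.3+)
BEGIN;
SET LOCAL synchronous_commit TO off;
UPDATE tableOne SET value1 = 1, value2 = 2, value5 = 5;  -- placeholder values
COMMIT;

If n_dead_tup stays low and n_tup_hot_upd tracks n_tup_upd once the index on value5 is dropped, table bloat is ruled out and attention can move to the I/O path, which is where this thread eventually ended up.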
[ { "msg_contents": "[I got no response on -general for a few days so I'm trying here]\n\nWhen we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump\nduration increased by 20% from 5 hours to 6. My first attempt at\nresolution was to boot with elevator=deadline. However that's\nactually the default IO scheduler in both kernels.\n\nThe two dmesg's are at:\nhttps://www.norchemlab.com/tmp/linux-2.6.24-22.45-server\nhttps://www.norchemlab.com/tmp/linux-2.6.27-14.41-server\n\nThe database partition is: xfs / lvm / aic79xx / scsi.\n\nBooting back into the .24 kernel brings the pg_dump back down to 5\nhours (for daily 20GB output compressed by pg_dump -Fc).\n\nDoes anyone know what might be different which could cause such a\ndrastic change?\n\nThanks,\nJustin\n", "msg_date": "Fri, 2 Oct 2009 12:58:12 -0700", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "dump time increase by 1h with new kernel" }, { "msg_contents": "[I got no response on -general for a few days so I'm trying here]\n\nWhen we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump\nduration increased by 20% from 5 hours to 6. My first attempt at\nresolution was to boot with elevator=deadline. However that's\nactually the default IO scheduler in both kernels.\n\nThe two dmesg's are at:\nhttps://www.norchemlab.com/tmp/linux-2.6.24-22.45-server\nhttps://www.norchemlab.com/tmp/linux-2.6.27-14.41-server\n\nThe database partition is: xfs / lvm / aic79xx / scsi.\n\nBooting back into the .24 kernel brings the pg_dump back down to 5\nhours (for daily 20GB output compressed by pg_dump -Fc).\n\nDoes anyone know what might be different which could cause such a\ndrastic change?\n\nThanks,\nJustin\n", "msg_date": "Fri, 2 Oct 2009 18:48:08 -0700", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "dump time increase by 1h with new kernel" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> When we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump\n> duration increased by 20% from 5 hours to 6.\n\nWouldn't be the first time the kernel guys broke something :-(\nI think a complaint to your kernel supplier is in order.\n\nIn a coincidence, the first item in the changelog for this week's\nFedora kernel update is:\n\n* Fri Sep 25 2009 Chuck Ebbert <[email protected]> 2.6.30.8-64\n- Fix serious CFQ performance regression.\n\nThis is surely not the exact same issue you are seeing, but it does\nillustrate that performance regressions in the kernel aren't\nunheard-of.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Oct 2009 22:35:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel " }, { "msg_contents": "On Fri, Oct 2, 2009 at 7:48 PM, Justin Pryzby <[email protected]> wrote:\n> [I got no response on -general for a few days so I'm trying here]\n>\n> When we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump\n> duration increased by 20% from 5 hours to 6.  My first attempt at\n> resolution was to boot with elevator=deadline.  However that's\n> actually the default IO scheduler in both kernels.\n\nTo add to what tom said, when you post this to something like kernel\nhackers, it would really help if you could test the two other kernels\nbetween these two to tell them exactly which one(s) causes the\nregression(s). 
That and how you compiled them or where they came from\notherwise (fc, Ubuntu dev, yada)\n", "msg_date": "Sat, 3 Oct 2009 23:31:11 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel" }, { "msg_contents": "On Fri, 2 Oct 2009, Justin Pryzby wrote:\n\n> When we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump\n> duration increased by 20% from 5 hours to 6.\n\nWhy 2.6.27 of all versions? It's one of the versions I skipped altogether \nas looking like a mess, after CFS broke everything in 2.6.23 I went right \nfrom 2.6.22 to 2.6.28 before I found things usable again. The first thing \nyou're going to hear if you try to report this in kernel land is \"is it \nstill slow on 2.6.[last stable|head]?\".\n\nIf you can try both kernel versions, the other thing you really should do \nis collect data from \"vmstat 1\" during the pg_dump period. It would help \nnarrow what area is slower.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 8 Oct 2009 01:14:52 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel" }, { "msg_contents": "Hi Everyone\n\nOn Fri, Oct 02, 2009 at 12:58:12PM -0700, Justin Pryzby wrote:\n> When we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump\n> duration increased by 20% from 5 hours to 6. My first attempt at\n\nOn Sat, Oct 03, 2009 at 11:31:11PM -0600, Scott Marlowe wrote:\n> between these two to tell them exactly which one(s) causes the\n> regression(s). That and how you compiled them or where they came from\nThese are both ubuntu kernels, and there's none in-between available\nfrom their repository for testing. I could compile myself, but it\nwould have to include all the ubuntu patches (apparmor in\nparticular)..\n\nOn Thu, Oct 08, 2009 at 01:14:52AM -0400, Greg Smith wrote:\n> report this in kernel land is \"is it still slow on 2.6.[last\n> stable|head]?\".\nI could try *newer* kernels, but not from newer ubuntu releases. This\nmachine is running ubuntu 8.04 with select packages from 8.10.\nHowever it's running postgres 8.2, which isn't included with either of\nthose releases.. I tried dumping with 8.3 pg_dump, which had only\nminimal effect.\n\n> If you can try both kernel versions, the other thing you really\n> should do is collect data from \"vmstat 1\" during the pg_dump period.\n> It would help narrow what area is slower.\nI have sar running on the machine, and came up with this:\n\n07 and 30 are days of the month, 7 is last night, 30 is from\nSeptember. pg_dump starts around 9pm. tail -15 gets us to about\n10pm. On sep 30, the machine was running 2.6.24 and the dump ran\nuntil after 2am. Since we rebooted, it runs .27 and the dump runs\nuntil after 4am. So the last column shows higher rate values for\nevery metric on the 30th (under .24) except for intr.\n\nfor a in user system iowait cswch tps rtps wtps intr; do for b in 07 30; do eval t$b='`sadf /var/log/sysstat/sa$b -- -A |grep -wi \"$a\" |tail -15 |awk \"{sum+=\\\\$NF}END{print sum/NR}\"`'; done; printf \"%-6s %4.4s %4.4s %5.5s\\n\" $a $t30 $t07 `calc $t30/$t07`; done\n s30 o07 s30/o07\nuser 13.9 6.85 ~2.03\nsystem 0.56 0.37 ~1.52\niowait 0.61 0.52 ~1.16\ncswch 873. 672. ~1.29\nintr 121. 396. ~0.30\ntps 412. 346. ~1.19\nrtps 147. 143. ~1.02\nwtps 264. 202. 
~1.30\n\nNot sure if sar can provide other data included by vmstat: IO merged\nin/out, {,soft}irq ticks?\n\nThanks,\nJustin\n", "msg_date": "Thu, 8 Oct 2009 10:44:05 -0700", "msg_from": "Justin T Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel" }, { "msg_contents": "On Thu, 2009-10-08 at 10:44 -0700, Justin T Pryzby wrote:\n> Hi Everyone\n\n\nDid your scheduler change between the kernel versions? \n\n> Not sure if sar can provide other data included by vmstat: IO merged\n> in/out, {,soft}irq ticks?\n> \n> Thanks,\n> Justin\n> \n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nIf the world pushes look it in the eye and GRR. Then push back harder. - Salamander\n\n", "msg_date": "Thu, 08 Oct 2009 10:49:37 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel" }, { "msg_contents": "On Thu, Oct 08, 2009 at 10:49:37AM -0700, Joshua D. Drake wrote:\n> On Thu, 2009-10-08 at 10:44 -0700, Justin T Pryzby wrote:\n> > Hi Everyone\n> Did your scheduler change between the kernel versions? \nNo, it's deadline for both.\n\nJustin\n", "msg_date": "Thu, 8 Oct 2009 13:16:03 -0700", "msg_from": "Justin T Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel" }, { "msg_contents": "Justin T Pryzby <[email protected]> wrote:\n> On Thu, Oct 08, 2009 at 10:49:37AM -0700, Joshua D. Drake wrote:\n>> Did your scheduler change between the kernel versions? \n> No, it's deadline for both.\n \nHow about write barriers? I had a kernel upgrade which turned them on\nfor xfs, with unfortunate performance impacts. The xfs docs\nexplicitly recommend disabling it if you have a battery backed cache\nin your RAID controller.\n \n-Kevin\n", "msg_date": "Thu, 08 Oct 2009 15:37:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel" }, { "msg_contents": "On Thu, Oct 08, 2009 at 03:37:39PM -0500, Kevin Grittner wrote:\n> Justin T Pryzby <[email protected]> wrote:\n> > On Thu, Oct 08, 2009 at 10:49:37AM -0700, Joshua D. Drake wrote:\n> >> Did your scheduler change between the kernel versions? \n> > No, it's deadline for both.\n> \n> How about write barriers? I had a kernel upgrade which turned them on\nDoesn't seem to be that either :(\n\n[ 55.120073] Filesystem \"dm-0\": Disabling barriers, trial barrier write failed\ncrb2-db2 (254, 0)\n/dev/mapper/crb2-db2 on /media/database\n\nJustin\n", "msg_date": "Thu, 8 Oct 2009 14:15:36 -0700", "msg_from": "Justin T Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dump time increase by 1h with new kernel" } ]
[ { "msg_contents": "All:\n\nWe have a web-application which is growing ... fast. We're currently\nrunning on (1) quad-core Xeon 2.0Ghz with a RAID-1 setup, and 8GB of RAM.\n\nOur application collects a lot of sensor data, which means that we have 1\ntable which has about 8 million rows, and we're adding about 2.5 million\nrows per month.\n\nThe problem is, this next year we're anticipating significant growth,\nwhere we may be adding more like 20 million rows per month (roughly 15GB\nof data).\n\nA row of data might have:\n The system identifier (int)\n Date/Time read (timestamp)\n Sensor identifier (int)\n Data Type (int)\n Data Value (double)\n\nThe nasty part of this problem is that the data needs to be \"readily\"\navailable for reports, and we cannot consolidate the data for reporting\npurposes.\n\nWe generate real time graphs from this data, usually running reports\nacross multiple date/time ranges for any given system. Reports and graphs\ndo not span more than 1 system, and we have indexes on the applicable\ncolumns.\n\nI know we need a LOT of RAM (as much as we can afford), and we're looking\nat a couple of Nehalem systems w/ a large, and fast, RAID-10 disk set up.\n\nSo far, we're seeing some slowness in reading from our table - queries are\nin the \"seconds\" range. No issues, yet, with inserting volumes of data.\n\nTwo questions:\n\n1. Other than partitioning (by system, and/or date), and splitting up the\ndata into multiple tables (by data type), what could be done within\nPostgresql to help with this type of set up (1 large table)?\n\n2. Before going out and buying a beast of a system, we'd like to get some\nidea of performance on a \"high-end\" system. We may need to split this up,\nor move into some other type of architecture. Do you know of anyone who\nwould let us \"play\" with a couple of systems to see what would be an\napplicable purchase?\n\nThanks!\n\n\n--\nAnthony\n\n", "msg_date": "Sun, 4 Oct 2009 17:45:59 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Speed / Server" }, { "msg_contents": "On Sun, Oct 4, 2009 at 4:45 PM, <[email protected]> wrote:\n> All:\n>\n> We have a web-application which is growing ... fast.  We're currently\n> running on (1) quad-core Xeon 2.0Ghz with a RAID-1 setup, and 8GB of RAM.\n>\n> Our application collects a lot of sensor data, which means that we have 1\n> table which has about 8 million rows, and we're adding about 2.5 million\n> rows per month.\n>\n> The problem is, this next year we're anticipating significant growth,\n> where we may be adding more like 20 million rows per month (roughly 15GB\n> of data).\n>\n> A row of data might have:\n>  The system identifier (int)\n>  Date/Time read (timestamp)\n>  Sensor identifier (int)\n>  Data Type (int)\n>  Data Value (double)\n>\n> The nasty part of this problem is that the data needs to be \"readily\"\n> available for reports, and we cannot consolidate the data for reporting\n> purposes.\n>\n> We generate real time graphs from this data, usually running reports\n> across multiple date/time ranges for any given system.  Reports and graphs\n> do not span more than 1 system, and we have indexes on the applicable\n> columns.\n>\n> I know we need a LOT of RAM (as much as we can afford), and we're looking\n> at a couple of Nehalem systems w/ a large, and fast, RAID-10 disk set up.\n>\n> So far, we're seeing some slowness in reading from our table - queries are\n> in the \"seconds\" range.  
No issues, yet, with inserting volumes of data.\n>\n> Two questions:\n>\n> 1.  Other than partitioning (by system, and/or date), and splitting up the\n> data into multiple tables (by data type), what could be done within\n> Postgresql to help with this type of set up (1 large table)?\n>\n> 2.  Before going out and buying a beast of a system, we'd like to get some\n> idea of performance on a \"high-end\" system.  We may need to split this up,\n> or move into some other type of architecture.  Do you know of anyone who\n> would let us \"play\" with a couple of systems to see what would be an\n> applicable purchase?\n\nMost of the producers of big bad database servers have a trial period\nyou can try stuff out for.  My supplier has something like a 30 day\ntrial.  I'm sure the bigger the system the more they'd need to charge\nyou for playing on it then returning it.\n\nBut you should plan on partitioning to multiple db servers up front\nand save pain of conversion later on.  A dual socket motherboard with\n16 to 32 SAS drives and a fast RAID controller is WAY cheaper than a\nsimilar machine with 4 to 8 sockets is gonna be.  And if you gotta go\nthere anyway, might as well spend your money on other stuff.\n", "msg_date": "Sun, 4 Oct 2009 18:20:11 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "> But you should plan on partitioning to multiple db servers up front\n> and save pain of conversion later on. A dual socket motherboard with\n> 16 to 32 SAS drives and a fast RAID controller is WAY cheaper than a\n> similar machine with 4 to 8 sockets is gonna be. And if you gotta go\n> there anyway, might as well spend your money on other stuff.\n>\n>\nI agree.  If you can partition that sensor data across multiple DBs and have\nyour application do the knitting you might be better off.  If I may be so\nbold, you might want to look at splaying the systems out across your\nbackends.  I'm just trying to think of a dimension that you won't want to\naggregate across frequently.\n\nOn the other hand, one of these 16 to 32 SAS drive systems with a raid card\nwill likely get you a long way.\n", "msg_date": "Mon, 5 Oct 2009 09:30:55 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "On Mon, Oct 5, 2009 at 7:30 AM, Nikolas Everett <[email protected]> wrote:\n>\n>> But you should plan on partitioning to multiple db servers up front\n>> and save pain of conversion later on.  A dual socket motherboard with\n>> 16 to 32 SAS drives and a fast RAID controller is WAY cheaper than a\n>> similar machine with 4 to 8 sockets is gonna be.  
And if you gotta go\n>> there anyway, might as well spend your money on other stuff.\n>>\n>\n> I agree.  If you can partition that sensor data across multiple DBs and have\n> your application do the knitting you might be better off.  If I may be so\n> bold, you might want to look at splaying the systems out across your\n> backends.  I'm just trying to think of a dimension that you won't want to\n> aggregate across frequently.\n\nAgreed back. If there's a logical dimension to split data on, it\nbecomes much easier to throw x machines at it than to try and build\none ubermachine to handle it all.\n\n> On the other hand, one of these 16 to 32 SAS drive systems with a raid card\n> will likely get you a long way.\n\nYes they can. We're about to have to add a third db server, cause\nthis is the load on our main slave db:\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n22 0 220 633228 229556 28432976 0 0 638 304 0 0 21\n 3 73 3 0\n19 1 220 571980 229584 28435180 0 0 96 1111 7091 9796 90\n 6 4 0 0\n20 0 220 532208 229644 28440244 0 0 140 3357 7110 9175 90\n 6 3 0 0\n19 1 220 568440 229664 28443688 0 0 146 1527 7765 10481\n90 7 3 0 0\n 9 1 220 806668 229688 28445240 0 0 99 326 6661 10326\n89 6 5 0 0\n 9 0 220 814016 229712 28446144 0 0 54 1544 7456 10283\n90 6 4 0 0\n11 0 220 782876 229744 28447628 0 0 96 406 6619 9354 90\n 5 5 0 0\n29 1 220 632624 229784 28449964 0 0 113 994 7109 9958 90\n 7 3 0 0\n\nIt's working fine. This has a 16 15k5 SAS disks. A 12 Disk RAID-10,\na 2 disk mirror for pg_xlog / OS, and two spares. It has 8 opteron\ncores and 32Gig ram. We're completely CPU bound because of the type of\napp we're running. So time for slave number 2...\n", "msg_date": "Mon, 5 Oct 2009 15:32:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "If my un-word wrapping is correct your running ~90% user cpu. Yikes. Could\nyou get away with fewer disks for this kind of thing?\n\nOn Mon, Oct 5, 2009 at 5:32 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Oct 5, 2009 at 7:30 AM, Nikolas Everett <[email protected]> wrote:\n> >\n> >> But you should plan on partitioning to multiple db servers up front\n> >> and save pain of conversion later on. A dual socket motherboard with\n> >> 16 to 32 SAS drives and a fast RAID controller is WAY cheaper than a\n> >> similar machine with 4 to 8 sockets is gonna be. And if you gotta go\n> >> there anyway, might as well spend your money on other stuff.\n> >>\n> >\n> > I agree. If you can partition that sensor data across multiple DBs and\n> have\n> > your application do the knitting you might be better off. If I may be so\n> > bold, you might want to look at splaying the systems out across your\n> > backends. I'm just trying to think of a dimension that you won't want to\n> > aggregate across frequently.\n>\n> Agreed back. If there's a logical dimension to split data on, it\n> becomes much easier to throw x machines at it than to try and build\n> one ubermachine to handle it all.\n>\n> > On the other hand, one of these 16 to 32 SAS drive systems with a raid\n> card\n> > will likely get you a long way.\n>\n> Yes they can. 
We're about to have to add a third db server, cause\n> this is the load on our main slave db:\n>\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 22 0 220 633228 229556 28432976 0 0 638 304 0 0 21\n> 3 73 3 0\n> 19 1 220 571980 229584 28435180 0 0 96 1111 7091 9796 90\n> 6 4 0 0\n> 20 0 220 532208 229644 28440244 0 0 140 3357 7110 9175 90\n> 6 3 0 0\n> 19 1 220 568440 229664 28443688 0 0 146 1527 7765 10481\n> 90 7 3 0 0\n> 9 1 220 806668 229688 28445240 0 0 99 326 6661 10326\n> 89 6 5 0 0\n> 9 0 220 814016 229712 28446144 0 0 54 1544 7456 10283\n> 90 6 4 0 0\n> 11 0 220 782876 229744 28447628 0 0 96 406 6619 9354 90\n> 5 5 0 0\n> 29 1 220 632624 229784 28449964 0 0 113 994 7109 9958 90\n> 7 3 0 0\n>\n> It's working fine. This has a 16 15k5 SAS disks. A 12 Disk RAID-10,\n> a 2 disk mirror for pg_xlog / OS, and two spares. It has 8 opteron\n> cores and 32Gig ram. We're completely CPU bound because of the type of\n> app we're running. So time for slave number 2...\n>\n\nIf my un-word wrapping is correct your running ~90% user cpu.  Yikes.  Could you get away with fewer disks for this kind of thing?On Mon, Oct 5, 2009 at 5:32 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Oct 5, 2009 at 7:30 AM, Nikolas Everett <[email protected]> wrote:\n\n\n>\n>> But you should plan on partitioning to multiple db servers up front\n>> and save pain of conversion later on.  A dual socket motherboard with\n>> 16 to 32 SAS drives and a fast RAID controller is WAY cheaper than a\n>> similar machine with 4 to 8 sockets is gonna be.  And if you gotta go\n>> there anyway, might as well spend your money on other stuff.\n>>\n>\n> I agree.  If you can partition that sensor data across multiple DBs and have\n> your application do the knitting you might be better off.  If I may be so\n> bold, you might want to look at splaying the systems out across your\n> backends.  I'm just trying to think of a dimension that you won't want to\n> aggregate across frequently.\n\nAgreed back.  If there's a logical dimension to split data on, it\nbecomes much easier to throw x machines at it than to try and build\none ubermachine to handle it all.\n\n> On the other hand, one of these 16 to 32 SAS drive systems with a raid card\n> will likely get you a long way.\n\nYes they can.  We're about to have to add a third db server, cause\nthis is the load on our main slave db:\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st\n22  0    220 633228 229556 28432976    0    0   638   304    0    0 21\n 3 73  3  0\n19  1    220 571980 229584 28435180    0    0    96  1111 7091 9796 90\n 6  4  0  0\n20  0    220 532208 229644 28440244    0    0   140  3357 7110 9175 90\n 6  3  0  0\n19  1    220 568440 229664 28443688    0    0   146  1527 7765 10481\n90  7  3  0  0\n 9  1    220 806668 229688 28445240    0    0    99   326 6661 10326\n89  6  5  0  0\n 9  0    220 814016 229712 28446144    0    0    54  1544 7456 10283\n90  6  4  0  0\n11  0    220 782876 229744 28447628    0    0    96   406 6619 9354 90\n 5  5  0  0\n29  1    220 632624 229784 28449964    0    0   113   994 7109 9958 90\n 7  3  0  0\n\nIt's working fine.  This has a 16 15k5 SAS disks.  A 12 Disk RAID-10,\na 2 disk mirror for pg_xlog / OS, and two spares. It has 8 opteron\ncores and 32Gig ram. 
We're completely CPU bound because of the type of\napp we're running.  So time for slave number 2...", "msg_date": "Tue, 6 Oct 2009 09:21:08 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "On Tue, Oct 6, 2009 at 7:21 AM, Nikolas Everett <[email protected]> wrote:\n> If my un-word wrapping is correct your running ~90% user cpu.  Yikes.  Could\n> you get away with fewer disks for this kind of thing?\n\nProbably, but the same workload on a 6 disk RAID-10 is 20% or so\nIOWAIT. So somewhere between 6 and 12 disks we go from significant\nIOWAIT to nearly none. Given that CPU bound workloads deteriorate\nmore gracefully than IO Bound, I'm pretty happy having enough extra IO\nbandwidth on this machine.\n", "msg_date": "Tue, 6 Oct 2009 08:26:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "On Tue, Oct 6, 2009 at 8:26 AM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Oct 6, 2009 at 7:21 AM, Nikolas Everett <[email protected]> wrote:\n>> If my un-word wrapping is correct your running ~90% user cpu.  Yikes.  Could\n>> you get away with fewer disks for this kind of thing?\n>\n> Probably, but the same workload on a 6 disk RAID-10 is 20% or so\n> IOWAIT.  So somewhere between 6 and 12 disks we go from significant\n> IOWAIT to nearly none.  Given that CPU bound workloads deteriorate\n> more gracefully than IO Bound, I'm pretty happy having enough extra IO\n> bandwidth on this machine.\n\nnote that spare IO also means we can subscribe a slony slave midday or\nrun a query on a large data set midday and not overload our servers.\nSpare CPU capacity is nice, spare IO is a necessity.\n", "msg_date": "Tue, 6 Oct 2009 13:28:59 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, Oct 6, 2009 at 8:26 AM, Scott Marlowe <[email protected]> wrote:\n> \n>> On Tue, Oct 6, 2009 at 7:21 AM, Nikolas Everett <[email protected]> wrote:\n>> \n>>> If my un-word wrapping is correct your running ~90% user cpu. Yikes. Could\n>>> you get away with fewer disks for this kind of thing?\n>>> \n>> Probably, but the same workload on a 6 disk RAID-10 is 20% or so\n>> IOWAIT. So somewhere between 6 and 12 disks we go from significant\n>> IOWAIT to nearly none. 
Given that CPU bound workloads deteriorate\n>> more gracefully than IO Bound, I'm pretty happy having enough extra IO\n>> bandwidth on this machine.\n>> \n>\n> note that spare IO also means we can subscribe a slony slave midday or\n> run a query on a large data set midday and not overload our servers.\n> Spare CPU capacity is nice, spare IO is a necessity.\n>\n> \nMore importantly when you run out of I/O bandwidth \"bad things\" tend to\nhappen very quickly; the degradation of performance when you hit the IO\nwall is extreme to the point of being essentially a \"zeropoint event.\"\n\n-- Karl", "msg_date": "Tue, 06 Oct 2009 14:59:17 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "On Tue, Oct 6, 2009 at 1:59 PM, Karl Denninger <[email protected]> wrote:\n>\n> More importantly when you run out of I/O bandwidth \"bad things\" tend to\n> happen very quickly; the degradation of performance when you hit the IO wall\n> is extreme to the point of being essentially a \"zeropoint event.\"\n\nOr as I like to put it IO bandwidth has sharp knees.\n", "msg_date": "Tue, 6 Oct 2009 14:35:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "On Sun, Oct 4, 2009 at 6:45 PM, <[email protected]> wrote:\n> All:\n>\n> We have a web-application which is growing ... fast.  We're currently\n> running on (1) quad-core Xeon 2.0Ghz with a RAID-1 setup, and 8GB of RAM.\n>\n> Our application collects a lot of sensor data, which means that we have 1\n> table which has about 8 million rows, and we're adding about 2.5 million\n> rows per month.\n>\n> The problem is, this next year we're anticipating significant growth,\n> where we may be adding more like 20 million rows per month (roughly 15GB\n> of data).\n>\n> A row of data might have:\n>  The system identifier (int)\n>  Date/Time read (timestamp)\n>  Sensor identifier (int)\n>  Data Type (int)\n>  Data Value (double)\n\nOne approach that can sometimes help is to use arrays to pack data.\nArrays may or may not work for the data you are collecting: they work\nbest when you always pull the entire array for analysis and not a\nparticular element of the array. Arrays work well because they pack\nmore data into index fetches and you get to skip the 20 byte tuple\nheader. That said, they are an 'optimization trade off'...you are\nmaking one type of query fast at the expense of others.\n\nIn terms of hardware, bulking up memory will only get you so\nfar...sooner or later you have to come to terms with the fact that you\nare dealing with 'big' data and need to make sure your storage can cut\nthe mustard. Your focus on hardware upgrades should probably be size\nand quantity of disk drives in a big raid 10.\n\nSingle user or 'small number of user' big data queries tend to\nbenefit more from fewer core, fast cpus.\n\nAlso, with big data, you want to make sure your table design and\nindexing strategy is as tight as possible.\n\nmerlin\n", "msg_date": "Tue, 6 Oct 2009 17:16:02 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "On Tue, 2009-10-06 at 17:16 -0400, Merlin Moncure wrote:\n> On Sun, Oct 4, 2009 at 6:45 PM, <[email protected]> wrote:\n> > All:\n> >\n> > We have a web-application which is growing ... fast. 
We're currently\n> > running on (1) quad-core Xeon 2.0Ghz with a RAID-1 setup, and 8GB of RAM.\n> >\n> > Our application collects a lot of sensor data, which means that we have 1\n> > table which has about 8 million rows, and we're adding about 2.5 million\n> > rows per month.\n> >\n> > The problem is, this next year we're anticipating significant growth,\n> > where we may be adding more like 20 million rows per month (roughly 15GB\n> > of data).\n> >\n> > A row of data might have:\n> > The system identifier (int)\n> > Date/Time read (timestamp)\n> > Sensor identifier (int)\n> > Data Type (int)\n> > Data Value (double)\n> \n> One approach that can sometimes help is to use arrays to pack data.\n> Arrays may or may not work for the data you are collecting: they work\n> best when you always pull the entire array for analysis and not a\n> particular element of the array. Arrays work well because they pack\n> more data into index fetches and you get to skip the 20 byte tuple\n> header. That said, they are an 'optimization trade off'...you are\n> making one type of query fast at the expense of others.\n> \n> In terms of hardware, bulking up memory will only get you so\n> far...sooner or later you have to come to terms with the fact that you\n> are dealing with 'big' data and need to make sure your storage can cut\n> the mustard. Your focus on hardware upgrades should probably be size\n> and quantity of disk drives in a big raid 10.\n> \n> Single user or 'small number of user' big data queries tend to\n> benefit more from fewer core, fast cpus.\n> \n> Also, with big data, you want to make sure your table design and\n> indexing strategy is as tight as possible.\n\nThanks for all of the input. One thing we're going to try is to slice\nup the data based on the data type ... so that we can spread the data\nrows into about 15 different tables. This should produce 15 tables, the\nlargest which will have about 50% of the data, with the rest having an\nuneven distribution of the remaining data.\n\nMost of the graphs / reports that we're doing need to only use one type\nof data at a time, but several will need to stitch / combine data from\nmultiple data tables.\n\nThese combined with some new processors, and a fast RAID-10 system\nshould give us what we need going forward.\n\nThanks again!\n\n\n--\nAnthony\n\n", "msg_date": "Tue, 06 Oct 2009 17:17:36 -0500", "msg_from": "Anthony Presley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "On Sun, 4 Oct 2009, [email protected] wrote:\n\n> The nasty part of this problem is that the data needs to be \"readily\"\n> available for reports, and we cannot consolidate the data for reporting\n> purposes.\n\nJust because you have to store the detailed data doesn't mean you can't \nstore a conslidated view on it too. Have you considered driving the \nprimary reporting off of materialized views, so you only compute those \nonce?\n\n> I know we need a LOT of RAM (as much as we can afford), and we're looking\n> at a couple of Nehalem systems w/ a large, and fast, RAID-10 disk set up.\n\nThere is a lot of variation in RAID-10 setups that depends on the \ncontroller used. Make sure you're careful to consider the controller card \nand performance of its battery-backed cache a critical component here; \nperformance does not scale well with additional drives if your controller \nisn't good.\n\nWhat card are you using now for your RAID-1 implementation?\n\n> 1. 
Other than partitioning (by system, and/or date), and splitting up the\n> data into multiple tables (by data type), what could be done within\n> Postgresql to help with this type of set up (1 large table)?\n\nThis seems like a perfect fit for partitioning by date.\n\n> 2. Before going out and buying a beast of a system, we'd like to get some\n> idea of performance on a \"high-end\" system. We may need to split this up,\n> or move into some other type of architecture. Do you know of anyone who\n> would let us \"play\" with a couple of systems to see what would be an\n> applicable purchase?\n\nFind vendors who sell things you like and ask if they have an eval system \navailable. As prices move up, those become more common.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 6 Oct 2009 20:48:30 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" }, { "msg_contents": "> -----Original Message-----\n<snip>\n> >\n> > The problem is, this next year we're anticipating significant growth,\n> > where we may be adding more like 20 million rows per month (roughly\n> 15GB\n> > of data).\n> >\n> > A row of data might have:\n> >  The system identifier (int)\n> >  Date/Time read (timestamp)\n> >  Sensor identifier (int)\n> >  Data Type (int)\n> >  Data Value (double)\n> \n> One approach that can sometimes help is to use arrays to pack data.\n> Arrays may or may not work for the data you are collecting: they work\n> best when you always pull the entire array for analysis and not a\n> particular element of the array. Arrays work well because they pack\n> more data into index fetches and you get to skip the 20 byte tuple\n> header. That said, they are an 'optimization trade off'...you are\n> making one type of query fast at the expense of others.\n> \n\nI recently used arrays for a 'long and thin' table very like those\ndescribed here. The tuple header became increasingly significant in our\ncase. There are some details in my post:\n\nhttp://www.nabble.com/optimizing-for-temporal-data-behind-a-view-td25490818.html\n\nAs Merlin points out: one considerable side-effect of using arrays \nis that it reduces the sort of queries which we could perform - \ni.e. querying data is was in an array becomes costly. \nSo, we needed to make sure our user scenarios were (requirements) \nwere well understood.\n\nrichard\n\n-- \nScanned by iCritical.\n", "msg_date": "Wed, 7 Oct 2009 09:40:39 +0100", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed / Server" } ]
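A minimal sketch of the date-based partitioning suggested in the thread above, using the inheritance-style partitioning available in the 8.x releases; the table and column names only mirror the poster's description of a sensor row (system id, timestamp, sensor id, data type, value) and, like the monthly granularity and the index choice, are assumptions for illustration rather than anything agreed in the thread:

CREATE TABLE sensor_data (
    system_id   integer          NOT NULL,
    read_at     timestamp        NOT NULL,
    sensor_id   integer          NOT NULL,
    data_type   integer          NOT NULL,
    data_value  double precision
);

-- one child table per month; with constraint_exclusion = on the planner
-- skips partitions whose CHECK range lies outside the queried date range
CREATE TABLE sensor_data_2009_10 (
    CHECK (read_at >= DATE '2009-10-01' AND read_at < DATE '2009-11-01')
) INHERITS (sensor_data);

CREATE INDEX sensor_data_2009_10_system_read_at_idx
    ON sensor_data_2009_10 (system_id, read_at);

New rows would be routed to the correct child either by the application or by a trigger on the parent, and old months can later be dropped or moved to cheaper storage without touching the partitions that current reports hit.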
[ { "msg_contents": "\nmnw21-modmine-r13features-copy=# select count(*) from project;\n count\n-------\n 10\n(1 row)\n\nmnw21-modmine-r13features-copy=# select count(*) from intermineobject;\n count\n----------\n 26344616\n(1 row)\n\nmnw21-modmine-r13features-copy=# \\d intermineobject;\nTable \"public.intermineobject\"\n Column | Type | Modifiers\n--------+---------+-----------\n object | text |\n id | integer | not null\n class | text |\nIndexes:\n \"intermineobject_pkey\" UNIQUE, btree (id)\n\nmnw21-modmine-r13features-copy=# explain select * from project where id \nNOT IN (SELECT id FROM intermineobject);\n QUERY PLAN\n------------------------------------------------------------------------------------\n Seq Scan on project (cost=1476573.93..1476575.05 rows=5 width=183)\n Filter: (NOT (hashed SubPlan 1))\n SubPlan 1\n -> Seq Scan on intermineobject (cost=0.00..1410720.74 rows=26341274 width=4)\n(4 rows)\n\nThis query plan seems to me to be a little slow. Surely it could iterate \nthrough the ten project rows and perform ten index lookups in the big \ntable?\n\nMatthew\n\n-- \n Riker: Our memory pathways have become accustomed to your sensory input.\n Data: I understand - I'm fond of you too, Commander. And you too Counsellor\n", "msg_date": "Mon, 5 Oct 2009 14:52:13 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Query plan for NOT IN" }, { "msg_contents": "On Mon, Oct 5, 2009 at 2:52 PM, Matthew Wakeling <[email protected]>wrote:\n\n>\n> mnw21-modmine-r13features-copy=# select count(*) from project;\n> count\n> -------\n> 10\n> (1 row)\n>\n> mnw21-modmine-r13features-copy=# select count(*) from intermineobject;\n> count\n> ----------\n> 26344616\n> (1 row)\n>\n> mnw21-modmine-r13features-copy=# \\d intermineobject;\n> Table \"public.intermineobject\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> object | text |\n> id | integer | not null\n> class | text |\n> Indexes:\n> \"intermineobject_pkey\" UNIQUE, btree (id)\n>\n> mnw21-modmine-r13features-copy=# explain select * from project where id NOT\n> IN (SELECT id FROM intermineobject);\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------\n> Seq Scan on project (cost=1476573.93..1476575.05 rows=5 width=183)\n> Filter: (NOT (hashed SubPlan 1))\n> SubPlan 1\n> -> Seq Scan on intermineobject (cost=0.00..1410720.74 rows=26341274\n> width=4)\n> (4 rows)\n>\n> This query plan seems to me to be a little slow. 
Surely it could iterate\n> through the ten project rows and perform ten index lookups in the big table?\n>\n>\ntry using join instead of 'not in'..\n\n\nselect p.* from project p left join intermineobject i on i.id=p.id where\ni.id is null;\n\n\n-- \nGJ\n\nOn Mon, Oct 5, 2009 at 2:52 PM, Matthew Wakeling <[email protected]> wrote:\n\nmnw21-modmine-r13features-copy=# select count(*) from project;\n count\n-------\n    10\n(1 row)\n\nmnw21-modmine-r13features-copy=# select count(*) from intermineobject;\n  count\n----------\n 26344616\n(1 row)\n\nmnw21-modmine-r13features-copy=# \\d intermineobject;\nTable \"public.intermineobject\"\n Column |  Type   | Modifiers\n--------+---------+-----------\n object | text    |\n id     | integer | not null\n class  | text    |\nIndexes:\n    \"intermineobject_pkey\" UNIQUE, btree (id)\n\nmnw21-modmine-r13features-copy=# explain select * from project where id NOT IN (SELECT id FROM intermineobject);\n                                     QUERY PLAN\n------------------------------------------------------------------------------------\n Seq Scan on project  (cost=1476573.93..1476575.05 rows=5 width=183)\n   Filter: (NOT (hashed SubPlan 1))\n   SubPlan 1\n     ->  Seq Scan on intermineobject  (cost=0.00..1410720.74 rows=26341274 width=4)\n(4 rows)\n\nThis query plan seems to me to be a little slow. Surely it could iterate through the ten project rows and perform ten index lookups in the big table?\n try using join instead of 'not in'..select p.* from project p left join intermineobject i on i.id=p.id where i.id is null;\n-- GJ", "msg_date": "Mon, 5 Oct 2009 14:56:05 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "On Mon, 5 Oct 2009, Grzegorz Jaśkiewicz wrote:\n> On Mon, Oct 5, 2009 at 2:52 PM, Matthew Wakeling <[email protected]> wrote:\n> Table \"public.intermineobject\"\n>  Column |  Type   | Modifiers\n> --------+---------+-----------\n>  object | text    |\n>  id     | integer | not null\n>  class  | text    |\n> Indexes:\n>    \"intermineobject_pkey\" UNIQUE, btree (id)\n>\n> mnw21-modmine-r13features-copy=# explain select * from project where id NOT\n> IN (SELECT id FROM intermineobject);\n>  \n> try using join instead of 'not in'..\n> \n> select p.* from project p left join intermineobject i on i.id=p.id where i.id is null;\n\nYes, that does work, but only because id is NOT NULL. I thought Postgres \n8.4 had had a load of these join types unified to make it less important \nhow the query is written?\n\nMatthew\n\n-- \n I'm always interested when [cold callers] try to flog conservatories.\n Anyone who can actually attach a conservatory to a fourth floor flat\n stands a marginally better than average chance of winning my custom.\n (Seen on Usenet)", "msg_date": "Mon, 5 Oct 2009 14:59:59 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "2009/10/5 Matthew Wakeling <[email protected]>\n\n>\n> Yes, that does work, but only because id is NOT NULL. I thought Postgres\n> 8.4 had had a load of these join types unified to make it less important how\n> the query is written?\n>\n\nwell, as a rule of thumb - unless you can't think of a default value of\ncolumn - don't use nulls. 
So using nulls as default 'idunno' - is a bad\npractice, but everybody's opinion on that differ.\n\nBut back on a subject, postgresql is very very poor performance wise with\n[NOT] IN () type of constructs. So if you can, avoid them, and learn to use\njoins.\n\n\n\n-- \nGJ\n\n2009/10/5 Matthew Wakeling <[email protected]>\n\nYes, that does work, but only because id is NOT NULL. I thought Postgres 8.4 had had a load of these join types unified to make it less important how the query is written?\nwell, as a rule of thumb - unless you can't think of a default value of column - don't use nulls. So using nulls as default 'idunno' - is a bad practice, but everybody's opinion on that differ.\nBut back on a subject, postgresql is very very poor performance wise with [NOT] IN () type of constructs. So if you can, avoid them, and learn to use joins. -- GJ", "msg_date": "Mon, 5 Oct 2009 15:06:41 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Yes, that does work, but only because id is NOT NULL. I thought Postgres \n> 8.4 had had a load of these join types unified to make it less important \n> how the query is written?\n\nNOT IN is not easily optimizable because of its odd behavior in the\npresence of nulls. Use NOT EXISTS instead, or that left join hack.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Oct 2009 10:30:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN " }, { "msg_contents": "Grzegorz Jaśkiewicz wrote:\n> \n> well, as a rule of thumb - unless you can't think of a default value of \n> column - don't use nulls. So using nulls as default 'idunno' - is a bad \n> practice, but everybody's opinion on that differ.\n\nI don't understand this point of view. The concept of null was \nintroduced into the SQL vernacular by Codd and Date expressly to \nrepresent unknown values.\n\n-- \nGuy Rouillier\n", "msg_date": "Mon, 05 Oct 2009 15:35:05 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "On Mon, Oct 5, 2009 at 8:35 PM, Guy Rouillier <[email protected]>wrote:\n\n> Grzegorz Jaśkiewicz wrote:\n>\n>>\n>> well, as a rule of thumb - unless you can't think of a default value of\n>> column - don't use nulls. So using nulls as default 'idunno' - is a bad\n>> practice, but everybody's opinion on that differ.\n>>\n>\n> I don't understand this point of view. The concept of null was introduced\n> into the SQL vernacular by Codd and Date expressly to represent unknown\n> values.\n>\n> Yes, unknown. So as long as you know the default value of field, you should\nset it to such.\n\nFor instance, if by default your account balance is 0, you should set it to\n0, not leave it as null, etc. Other example, if client doesn't have\ndescription - leave it as blank '' string, instead of null.\n\nOn the other hand, if you want to denote that the value wasn't set - use\nnull, but use it wisely. Hence, I personally think that DEFAULT value (in\ncreate table) should be compulsory, and 'DEFAULT NULL' an option, that you\nwould have to choose.\n\nNot to mention other (valid in this case) argument, that you would mostly\nuse IN/EXISTS, and/or join keys on fields that are either PK, or at least\nNOT NULL. 
Hence, using JOIN instead of IN/EXISTS most of the times.\nOne of My few personal wishes, ever since I started to use postgresql - is\nthat it could rewrite IN/EXISTS into JOIN - when possible (that is, when\ncolumns are NOT NULL).\n\n\n-- \nGJ\n\nOn Mon, Oct 5, 2009 at 8:35 PM, Guy Rouillier <[email protected]> wrote:\nGrzegorz Jaśkiewicz wrote:\n\n\nwell, as a rule of thumb - unless you can't think of a default value of column - don't use nulls. So using nulls as default 'idunno' - is a bad practice, but everybody's opinion on that differ.\n\n\nI don't understand this point of view.  The concept of null was introduced into the SQL vernacular by Codd and Date expressly to represent unknown values.Yes, unknown. So as long as you know the default value of field, you should set it to such. \nFor instance, if by default your account balance is 0, you should set it to 0, not leave it as null, etc. Other example, if client doesn't have description - leave it as blank '' string, instead of null. \nOn the other hand, if you want to denote that the value wasn't set - use null, but use it wisely. Hence, I personally think that DEFAULT value (in create table) should be compulsory, and 'DEFAULT NULL' an option, that you would have to choose. \nNot to mention other (valid in this case) argument, that you would mostly use IN/EXISTS, and/or join keys on fields that are either PK, or at least NOT NULL. Hence, using JOIN instead of IN/EXISTS most of the times. \nOne of My few personal wishes, ever since I started to use postgresql - is that it could rewrite IN/EXISTS into JOIN - when possible (that is, when columns are NOT NULL).  -- \nGJ", "msg_date": "Wed, 7 Oct 2009 14:27:03 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "Grzegorz Jaᅵkiewicz<[email protected]> wrote:\n> Guy Rouillier <[email protected]>wrote:\n>> Grzegorz Jaᅵkiewicz wrote:\n \n>>> using nulls as default 'idunno' - is a bad practice\n \n>> I don't understand this point of view. The concept of null was\n>> introduced into the SQL vernacular by Codd and Date expressly to\n>> represent unknown values.\n \n> if by default your account balance is 0, you should set it to 0, not\n> leave it as null\n \nIf your business rules are that a new account is created with a zero\nbalance and then deposits are made, sure -- insert the account row\nwith a zero balance, *because you know it to be zero*. It's been rare\nthat I've seen anyone err on the side of using NULL in place of a\ndefault for such cases. Much more common is using, for example, 'NMI'\nin the middle name column to denote \"No Middle Initial\". Such \"magic\nvalues\" can cause no end of trouble.\n \nA failing of the SQL standard is that it uses the same mark (NULL) to\nshow the absence of a value because it is unknown as for the case\nwhere it is known that no value exists (not applicable). Codd argued\nfor a distinction there, but it hasn't come to pass, at least in the\nstandard. If anyone could suggest a way to support standard syntax\nand semantics and add extensions to support this distinction, it might\nbe another advance that would distinguish PostgreSQL from \"less\nevolved\" products. :-)\n \nNone of that changes the requirement that NOT IN must result in\nUNKNOWN if any of the values involved are NULL. You can't say that my\nbirthday is not in the set of birthdays for other subscribers to this\nlist without knowing the birthdays of all subscribers. 
This\ndefinition of the operator makes it hard to optimize, but setting\nunknown birthdays to some date far in the past or future, to avoid\nusing NULL, would just result in bogus results for this query as well\nas, for example, queries attempting to calculate aggregates on age.\n \n-Kevin\n", "msg_date": "Wed, 07 Oct 2009 09:39:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "Kevin Grittner wrote:\n> Grzegorz Jaᅵkiewicz<[email protected]> wrote:\n\n> A failing of the SQL standard is that it uses the same mark (NULL) to\n> show the absence of a value because it is unknown as for the case\n> where it is known that no value exists (not applicable). Codd argued\n> for a distinction there, but it hasn't come to pass, at least in the\n> standard. If anyone could suggest a way to support standard syntax\n> and semantics and add extensions to support this distinction, it might\n> be another advance that would distinguish PostgreSQL from \"less\n> evolved\" products. :-)\n\nTheoretically, the distinction already exists. If you don't know a \nperson's middle initial, then set it to null; if you know the person \ndoesn't have one, set it to the empty string.\n\nBut from a practical point of view, that wouldn't go very far. Most \n*people* equate an empty string to mean the same as null. When I wrote \nmy own data access layer years ago, I expressly checked for empty \nstrings on input and changed them to null. I did this because empty \nstrings had a nasty way of creeping into our databases; writing queries \nto produce predictable results got to be very messy.\n\n-- \nGuy Rouillier\n", "msg_date": "Wed, 07 Oct 2009 13:33:16 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "Guy Rouillier <[email protected]> wrote: \n> Kevin Grittner wrote:\n \n>> A failing of the SQL standard is that it uses the same mark (NULL)\n>> to show the absence of a value because it is unknown as for the\n>> case where it is known that no value exists (not applicable). Codd\n>> argued for a distinction there, but it hasn't come to pass, at\n>> least in the standard. If anyone could suggest a way to support\n>> standard syntax and semantics and add extensions to support this\n>> distinction, it might be another advance that would distinguish\n>> PostgreSQL from \"less evolved\" products. :-)\n> \n> Theoretically, the distinction already exists. If you don't know a \n> person's middle initial, then set it to null; if you know the\n> person doesn't have one, set it to the empty string.\n \nWell, it is arguable whether an empty string is the proper way to\nindicate that a character string based column is not applicable to a\ngiven row, but it certainly falls flat for any other types, such as\ndates or numbers; and I think there's value in having a consistent way\nto handle this. \n \n> But from a practical point of view, that wouldn't go very far. \n> Most *people* equate an empty string to mean the same as null. When\n> I wrote my own data access layer years ago, I expressly checked for\n> empty strings on input and changed them to null. 
I did this because\n> empty strings had a nasty way of creeping into our databases;\n> writing queries to produce predictable results got to be very messy.\n \nYeah, there's that, too.\n \nWhich leaves the issue open -- a flexible way to flag the *reason* (or\n*reasons*) for the absence of a value could be a nice enhancement, if\nsomeone could invent a good implementation. Of course, one could\nalways add a column to indicate the reason for a NULL; and perhaps\nthat would be as good as any scheme to attach reason flags to NULL. \nYou'd just have to make sure the reason column was null capable for\nthose rows where there *was* a value, which would make the reason \"not\napplicable\"....\n \n-Kevin\n", "msg_date": "Wed, 07 Oct 2009 13:17:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "Kevin Grittner wrote:\n> Which leaves the issue open -- a flexible way to flag the *reason* (or\n> *reasons*) for the absence of a value could be a nice enhancement, if\n> someone could invent a good implementation. Of course, one could\n> always add a column to indicate the reason for a NULL; and perhaps\n> that would be as good as any scheme to attach reason flags to NULL. \n> You'd just have to make sure the reason column was null capable for\n> those rows where there *was* a value, which would make the reason \"not\n> applicable\"....\n\nI'd argue that this is just a special case of a broader problem of metadata: Data about the data. For example, I could have a temperature, 40 degrees, and an error bounds, +/- 0.25 degrees. Nobody would think twice about making these separate columns. I don't see how this is any different from a person's middle initial of NULL, plus a separate column indicating \"not known\" versus \"doesn't have one\" if that distinction is important. There are many examples like this, where a simple value in one column isn't sufficient, so another column contains metadata that qualifies or clarifies the information. NULL is just one such case.\n\nBut, this should probably be on an SQL discussion board, not PG performance...\n\nCraig\n", "msg_date": "Wed, 07 Oct 2009 13:27:53 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" }, { "msg_contents": "Craig James wrote:\n> Kevin Grittner wrote:\n>> Which leaves the issue open -- a flexible way to flag the *reason* (or\n>> *reasons*) for the absence of a value could be a nice enhancement, if\n>> someone could invent a good implementation. Of course, one could\n>> always add a column to indicate the reason for a NULL; and perhaps\n>> that would be as good as any scheme to attach reason flags to NULL. \n>> You'd just have to make sure the reason column was null capable for\n>> those rows where there *was* a value, which would make the reason \"not\n>> applicable\"....\n> \n> I'd argue that this is just a special case of a broader problem of \n> metadata: Data about the data. For example, I could have a temperature, \n> 40 degrees, and an error bounds, +/- 0.25 degrees. Nobody would think \n> twice about making these separate columns. I don't see how this is any \n> different from a person's middle initial of NULL, plus a separate column \n> indicating \"not known\" versus \"doesn't have one\" if that distinction is \n> important. 
There are many examples like this, where a simple value in \n> one column isn't sufficient, so another column contains metadata that \n> qualifies or clarifies the information. NULL is just one such case.\n> \n> But, this should probably be on an SQL discussion board, not PG \n> performance...\n\nMost DBMSs I'm aware of use a null *byte* attached to a nullable column \nto indicate whether the column is null or not. yes/no takes one *bit*. \n That leaves 255 other possible values to describe the state of the \ncolumn. That seems preferable to adding an additional column to every \nnullable column.\n\nBut as you say, that would have to be taken up with the SQL \nstandardization bodies, and not PostgreSQL.\n\n-- \nGuy Rouillier\n", "msg_date": "Wed, 07 Oct 2009 18:47:48 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query plan for NOT IN" } ]
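Against the schema posted at the top of this thread, the two rewrites suggested above come out roughly as follows; both sidestep the NULL handling that keeps NOT IN from being planned as an anti-join (the left-join form additionally relies on intermineobject.id being NOT NULL, which it is here):

-- NOT EXISTS form (Tom Lane's suggestion)
SELECT p.*
FROM project p
WHERE NOT EXISTS (SELECT 1 FROM intermineobject i WHERE i.id = p.id);

-- LEFT JOIN / IS NULL form
SELECT p.*
FROM project p
LEFT JOIN intermineobject i ON i.id = p.id
WHERE i.id IS NULL;

With only ten rows in project, either form should reduce to roughly ten index probes against intermineobject_pkey instead of hashing the whole 26-million-row subselect.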
[ { "msg_contents": "Hi Team,\n\nThis question may have asked many times previously also, but I could not\nfind a solution for this in any post. any help on the following will be\ngreatly appreciated.\n\nWe have a PG DB with PostGIS functions. There are around 100 tables in the\nDB and almost all the tables contains 1 million records, around 5 table\ncontains more than 20 million records. The total DB size is 40GB running on\na 16GB, 2 x XEON 5420, RAID6, RHEL5 64bit machines, the questions is\n\n1. The geometry calculations which we does are very complex and it is taking\na very long time to complete. We have optimised PG config to the best, now\nwe need a mechanism to distribute these queries to multiple boxes. What is\nbest recommended way for this distributed/parallel deployment. We have tried\nPGPOOL II, but the performance is not satisfactory. Going for a try with\nGridSQL\n\n2. How we can distribute/split these large tables to multiple disks of\ndifferent nodes?\n\nThanks in advance\n\nViji\n\nHi Team,This question may have asked many times previously also, but I could not find a solution for this in any post. any help on the following will be greatly appreciated.We have a PG DB with PostGIS functions. There are around 100 tables in the DB and almost all the tables contains 1 million records, around 5 table contains more than 20 million records. The total DB size is 40GB running on a 16GB, 2 x XEON 5420, RAID6, RHEL5 64bit machines, the questions is\n    1. The geometry calculations which we does are very complex and it is taking a very long time to complete. We have optimised PG config to the best, now we need a mechanism to distribute these queries to multiple boxes. What is best recommended way for this distributed/parallel deployment. We have tried PGPOOL II, but the performance is not satisfactory. Going for a try with GridSQL\n2. How we can distribute/split these large tables to multiple disks of different nodes?Thanks in advanceViji", "msg_date": "Tue, 6 Oct 2009 00:41:07 +0530", "msg_from": "Viji V Nair <[email protected]>", "msg_from_op": true, "msg_subject": "Distributed/Parallel Computing" }, { "msg_contents": "On Mon, Oct 5, 2009 at 12:11 PM, Viji V Nair <[email protected]> wrote:\n> Hi Team,\n>\n> This question may have asked many times previously also, but I could not\n> find a solution for this in any post. any help on the following will be\n> greatly appreciated.\n>\n> We have a PG DB with PostGIS functions. There are around 100 tables in the\n> DB and almost all the tables contains 1 million records, around 5 table\n> contains more than 20 million records. The total DB size is 40GB running on\n> a 16GB, 2 x XEON 5420, RAID6, RHEL5 64bit machines, the questions is\n>\n> 1. The geometry calculations which we does are very complex and it is taking\n> a very long time to complete. We have optimised PG config to the best, now\n> we need a mechanism to distribute these queries to multiple boxes. What is\n> best recommended way for this distributed/parallel deployment. We have tried\n> PGPOOL II, but the performance is not satisfactory. Going for a try with\n> GridSQL\n\nWhat is the nature of the transactions being run? 
Are they primarily\nread-only other than bulk updates to the GIS data, are they OLTP in\nregards to the GIS data, or are they transactional with regards to\nother tables but read-only with respect to the GIS?\n\nJeff\n", "msg_date": "Mon, 5 Oct 2009 19:40:50 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distributed/Parallel Computing" }, { "msg_contents": "Hi Jeff,\n\nThese are bulk updates of GIS data and OLTP. For example, we are running\nsome sqls to remove specific POIs those are intersecting with others, for\nsuch exercise we need to compare and remove the data form diffrent tables\nincluding the 20M data tables.\n\nApart form these there are bulk selects (read only) which are coming form\nthe client systems also.\n\nThanks\nViji\n\nOn Tue, Oct 6, 2009 at 8:10 AM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Oct 5, 2009 at 12:11 PM, Viji V Nair <[email protected]>\n> wrote:\n> > Hi Team,\n> >\n> > This question may have asked many times previously also, but I could not\n> > find a solution for this in any post. any help on the following will be\n> > greatly appreciated.\n> >\n> > We have a PG DB with PostGIS functions. There are around 100 tables in\n> the\n> > DB and almost all the tables contains 1 million records, around 5 table\n> > contains more than 20 million records. The total DB size is 40GB running\n> on\n> > a 16GB, 2 x XEON 5420, RAID6, RHEL5 64bit machines, the questions is\n> >\n> > 1. The geometry calculations which we does are very complex and it is\n> taking\n> > a very long time to complete. We have optimised PG config to the best,\n> now\n> > we need a mechanism to distribute these queries to multiple boxes. What\n> is\n> > best recommended way for this distributed/parallel deployment. We have\n> tried\n> > PGPOOL II, but the performance is not satisfactory. Going for a try with\n> > GridSQL\n>\n> What is the nature of the transactions being run? Are they primarily\n> read-only other than bulk updates to the GIS data, are they OLTP in\n> regards to the GIS data, or are they transactional with regards to\n> other tables but read-only with respect to the GIS?\n>\n> Jeff\n>\n\nHi Jeff,These are bulk updates of GIS data and OLTP. For example, we are running some sqls to remove specific POIs those are intersecting with others, for such exercise we need to compare and remove the data form diffrent tables including the 20M data tables.\nApart form these there are bulk selects (read only) which are coming form the client systems also.ThanksVijiOn Tue, Oct 6, 2009 at 8:10 AM, Jeff Janes <[email protected]> wrote:\nOn Mon, Oct 5, 2009 at 12:11 PM, Viji V Nair <[email protected]> wrote:\n\n> Hi Team,\n>\n> This question may have asked many times previously also, but I could not\n> find a solution for this in any post. any help on the following will be\n> greatly appreciated.\n>\n> We have a PG DB with PostGIS functions. There are around 100 tables in the\n> DB and almost all the tables contains 1 million records, around 5 table\n> contains more than 20 million records. The total DB size is 40GB running on\n> a 16GB, 2 x XEON 5420, RAID6, RHEL5 64bit machines, the questions is\n>\n> 1. The geometry calculations which we does are very complex and it is taking\n> a very long time to complete. We have optimised PG config to the best, now\n> we need a mechanism to distribute these queries to multiple boxes. What is\n> best recommended way for this distributed/parallel deployment. 
We have tried\n> PGPOOL II, but the performance is not satisfactory. Going for a try with\n> GridSQL\n\nWhat is the nature of the transactions being run?  Are they primarily\nread-only other than bulk updates to the GIS data, are they OLTP in\nregards to the GIS data, or are they transactional with regards to\nother tables but read-only with respect to the GIS?\n\nJeff", "msg_date": "Tue, 6 Oct 2009 12:21:06 +0530", "msg_from": "Viji V Nair <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Distributed/Parallel Computing" } ]
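On the second question — splitting the large tables across disks — the built-in answer within a single server is tablespaces combined with partitioning; spreading a table across separate nodes needs an external layer such as the GridSQL setup being evaluated above. A rough sketch of the single-server part, with mount points that are placeholders and a hypothetical parent table poi (assumed to have a region column) standing in for the real GIS tables:

-- one tablespace per physical volume
CREATE TABLESPACE gis_disk1 LOCATION '/mnt/disk1/pgdata';
CREATE TABLESPACE gis_disk2 LOCATION '/mnt/disk2/pgdata';

-- put different partitions of a big table on different volumes
CREATE TABLE poi_north (CHECK (region = 'north')) INHERITS (poi) TABLESPACE gis_disk1;
CREATE TABLE poi_south (CHECK (region = 'south')) INHERITS (poi) TABLESPACE gis_disk2;

-- or relocate an existing partition
ALTER TABLE poi_north SET TABLESPACE gis_disk2;

CREATE INDEX accepts a TABLESPACE clause as well, so index and heap I/O can be separated onto different volumes the same way.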
[ { "msg_contents": "Hi ,\nI want to imporve the performance for inserting of huge data in my table .\nI have only one idex in table .\n\nFirst question - i want to know the role played by\n\n #fsync = on and\n #synchronous_commit = on\n\nThey are commented by default in 8.4 .\nWhen made like this :-\nfsync = off\nsynchronous_commit = off\n\n\nIt improve the performance :)\nand query took less time .\n\nI want to understand more in details what exactly had happened one is made\nthem \"off\" , is it dangerous to do this ? as it will not sync the data in\neach commit .\n\nPls help me out .\n\n-- \nThanks,\nKeshav Upadhyaya\n\nHi , I want to imporve  the performance for inserting of huge data in my table . I have only one idex in table . First question - i want to know the role played by  #fsync   = on    and  #synchronous_commit = on\nThey are commented by default in 8.4 . When made like this :-fsync = off              synchronous_commit = off      It improve the performance :)and query took less time . I want to understand more in details what exactly had happened  one is made them \"off\" , is it dangerous to do this ?  as it will not sync the data in each commit . \nPls help me out . -- Thanks,Keshav Upadhyaya", "msg_date": "Tue, 6 Oct 2009 12:58:01 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "What is the role of #fsync and #synchronous_commit in configuration\n\tfile ." }, { "msg_contents": ">From: keshav upadhyaya\n>Subject: [PERFORM] What is the role of #fsync and #synchronous_commit in\nconfiguration file .\n>\n>Hi , \n>I want to imporve the performance for inserting of huge data in my table .\n\n>I have only one idex in table . \n>\t\n>First question - i want to know the role played by \n>\t\n> #fsync = on and \n> #synchronous_commit = on\n>\t\n>I want to understand more in details what exactly had happened one is made\nthem \"off\" , \n>is it dangerous to do this ? as it will not sync the data in each commit .\n\n\nThe settings are described in the docs:\n\nhttp://www.postgresql.org/docs/8.4/interactive/runtime-config-wal.html\n\nIf you turn fsync off, you risk data loss in case of power or hardware\nfailure.\n\nDave\n\n\t\n\n", "msg_date": "Tue, 6 Oct 2009 07:49:37 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the role of #fsync and #synchronous_commit in\n\tconfiguration file ." }, { "msg_contents": "On Tue, Oct 6, 2009 at 3:28 AM, keshav upadhyaya <[email protected]> wrote:\n\n> First question - i want to know the role played by\n>\n>  #fsync   = on    and\n>  #synchronous_commit = on\n\nThese configurations are discussed here:\nhttp://developer.postgresql.org/pgdocs/postgres/runtime-config-wal.html\n\n> I want to understand more in details what exactly had happened  one is made\n> them \"off\" , is it dangerous to do this ?  as it will not sync the data in\n> each commit .\n>\n\nThere's plenty of discussion in the documentation and the list\narchives about the\nrisks of disabling fsync and/or synchronous commit. 
Here's one:\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00631.php\n\nIf you're struggling with performance issues, I'd post a detailed\ndescription of\nyour problem and what you've tried so far to the -performance list, instead of\nturning fsync off.\n\nJosh\n", "msg_date": "Tue, 6 Oct 2009 11:38:25 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the role of #fsync and #synchronous_commit in\n\tconfiguration file ." } ]
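A middle ground worth noting for the original bulk-insert problem: synchronous_commit can be disabled for just the loading session or transaction while fsync stays on. A crash can then lose the last few transactions that were reported as committed, but it cannot corrupt the data files the way running with fsync = off can. A small sketch (the table name is hypothetical):

BEGIN;
SET LOCAL synchronous_commit TO off;   -- affects only this transaction
INSERT INTO bulk_target (col1, col2) VALUES ('a', 1), ('b', 2), ('c', 3);
COMMIT;

-- or for a whole loading session:
SET synchronous_commit TO off;

Grouping many rows per transaction (or using COPY) usually matters at least as much as either setting.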
[ { "msg_contents": "Hi everyone,\n\nI am looking for a way to dump+restore a subset of a database (on another\nserver), using both selection and projection of the source tables (for\nsimplicity assume a single table).\nI understand that pg_dump will not let me do this. One way I considered is\ncreating a view with the subset definition and dumping it instead of the\noriginal table. In that case how do I restore the target table from the\ndumped view (what does pg_dump generate for a view?)? Can I still use\npg_dump to create SQL commands (vs the binary file option), and will these\nstill use COPY instead of INSERT statements?\n\nIs there another way to do this? Maybe replication? I care mostly about the\ntime needed to replicate the DB (subset), less so about temp space needed.\n\nThanks.\n\n-- Shaul\n\nHi everyone,I am looking for a way to dump+restore a subset of a database (on another server), using both selection and projection of the source tables (for simplicity assume a single table).I understand that pg_dump will not let me do this. One way I considered is creating a view with the subset definition and dumping it instead of the original table. In that case how do I restore the target table from the dumped view (what does pg_dump generate for a view?)? Can I still use pg_dump to create SQL commands (vs the binary file option), and will these still use COPY instead of INSERT statements?\nIs there another way to do this? Maybe replication? I care mostly about the time needed to replicate the DB (subset), less so about temp space needed.\nThanks.-- Shaul", "msg_date": "Tue, 6 Oct 2009 15:16:27 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Dumping + restoring a subset of a table?" }, { "msg_contents": "On Tue, Oct 06, 2009 at 03:16:27PM +0200, Shaul Dar wrote:\n> Hi everyone,\n> \n> I am looking for a way to dump+restore a subset of a database (on another\n> server), using both selection and projection of the source tables (for\n> simplicity assume a single table).\n> I understand that pg_dump will not let me do this. One way I considered is\n> creating a view with the subset definition and dumping it instead of the\n> original table. In that case how do I restore the target table from the\n> dumped view (what does pg_dump generate for a view?)? Can I still use\n> pg_dump to create SQL commands (vs the binary file option), and will these\n> still use COPY instead of INSERT statements?\n\nWhen pg_dump dumps a view, it simply creates a \"CREATE VIEW AS...\" statement;\nit doesn't copy the contents of the view as though it were a table.\n\n> Is there another way to do this? Maybe replication? 
I care mostly about\n> the time needed to replicate the DB (subset), less so about temp space\n> needed.\n\nIf you're doing this repeatedly with the same table, you might set up a\nreplication system to do it, but the easiest way for a one-time thing,\nprovided you're running something newer than 8.1, is to copy the results of a\nquery to a file, e.g.:\n\nCOPY (SELECT foo, bar FROM baz WHERE some_condition) TO 'some_file';\n\nYou should probably also use pg_dump to dump the schema of the table, so it's\neasy to create identically on your destination database:\n\npg_dump -s -t baz > baz.schema\n\nHaving recreated the table on the destination database, using COPY to restore\nthe selected data is straightforward:\n\nCOPY baz FROM 'some_file';\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com", "msg_date": "Tue, 6 Oct 2009 07:33:04 -0600", "msg_from": "Joshua Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dumping + restoring a subset of a table?" } ]
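One practical note on the recipe above: COPY ... TO 'some_file' writes the file on the database server and requires superuser rights. psql's \copy does the same work from the client side, which is often more convenient when the dump has to travel to another machine — a sketch using the same placeholder names as above (each \copy must be a single line in psql):

-- on the source database
\copy (SELECT foo, bar FROM baz WHERE some_condition) TO 'baz_subset.csv' CSV

-- on the destination, after restoring the schema with pg_dump -s -t baz
\copy baz (foo, bar) FROM 'baz_subset.csv' CSV

The CSV option is optional; the default text format round-trips just as well.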
[ { "msg_contents": "Hi ,\n\nI want to insert multiple Rows in one shot to improve my performance .\n\n From C# code I am using ADO .net to connect to postgres .\nCurrently i am pasting the code which is not of postgres but in my dev\nenvironment similar things i am doing with Postgres.\n\nMySqlConnection mySql = new MySqlConnection();\n mySql.CreateConn();\n mySql.Command = mySql.Connection.CreateCommand();\n* mySql.Command.CommandText = \"INSERT INTO dbo.table1 (textBox1,\ntextBox2) VALUES (@textBox1, @textBox2)\";\n\n mySql.Command.Parameters.Add(\"@textBox1\", SqlDbType.VarChar);\n mySql.Command.Parameters[\"@textBox1\"].Value = TextBox1.Text;\n mySql.Command.Parameters.Add(\"@textBox2\", SqlDbType.VarChar);\n mySql.Command.Parameters[\"@textBox2\"].Value = TextBox2.Text;\n\n mySql.Command.ExecuteNonQuery();\n\n* mySql.Command.Dispose();\n mySql.Connection.Close();\n mySql.CloseConn();\n\n\nHi i have hilighted the line in which I wanted to ask doubts .\n\nCurrently i am inserting one row in one time and then executing the query .\nSo with this approach i need to execute it many times for multiple rows\ninsert because of this my database is poor in doing this each time for very\nlarge data.\n\nWhat i want here is to insert multiple rows and then executing it in one\ntime only so that it will be faster.\n\nPlease help me out in this regards .\n\n-- \nThanks,\nKeshav Upadhyaya\n\nHi , I want to insert multiple Rows in one shot to improve my performance . From C# code I am using ADO .net to connect to postgres . Currently i am pasting the code which is not of postgres but in my dev environment similar things  i am doing with Postgres. \n MySqlConnection mySql = new MySqlConnection();\n        mySql.CreateConn();        mySql.Command = mySql.Connection.CreateCommand();\n        mySql.Command.CommandText = \"INSERT INTO dbo.table1 (textBox1, textBox2) VALUES (@textBox1, @textBox2)\";\n                mySql.Command.Parameters.Add(\"@textBox1\", SqlDbType.VarChar);\n        mySql.Command.Parameters[\"@textBox1\"].Value = TextBox1.Text;        mySql.Command.Parameters.Add(\"@textBox2\", SqlDbType.VarChar);\n        mySql.Command.Parameters[\"@textBox2\"].Value = TextBox2.Text;        mySql.Command.ExecuteNonQuery();              \n         mySql.Command.Dispose();\n        mySql.Connection.Close();        mySql.CloseConn();\nHi i have hilighted the line in  which I wanted to ask doubts . Currently i am inserting one row in one time and then executing the query . So with this approach i need to execute it many times for multiple rows insert because of this my database is poor in doing this each time for very large data. \nWhat i want here is to insert multiple rows and then executing it in one time only so that it will be faster. Please help me out in this regards . 
-- Thanks,Keshav Upadhyaya", "msg_date": "Thu, 8 Oct 2009 12:09:36 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding mulitple rows insert in one shot using ADO .net connected\n\tto postgres" }, { "msg_contents": "Try the multi-row INSERT syntax:\n\n From the docs:\n\n To insert multiple rows using the multirow VALUES syntax:\n\nINSERT INTO films (code, title, did, date_prod, kind) VALUES\n ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),\n ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');\n\n\nBest regards,\n\nOn Thu, Oct 8, 2009 at 12:09 PM, keshav upadhyaya <[email protected]>wrote:\n\n> Hi ,\n>\n> I want to insert multiple Rows in one shot to improve my performance .\n>\n> From C# code I am using ADO .net to connect to postgres .\n> Currently i am pasting the code which is not of postgres but in my dev\n> environment similar things i am doing with Postgres.\n>\n> MySqlConnection mySql = new MySqlConnection();\n> mySql.CreateConn();\n> mySql.Command = mySql.Connection.CreateCommand();\n> * mySql.Command.CommandText = \"INSERT INTO dbo.table1 (textBox1,\n> textBox2) VALUES (@textBox1, @textBox2)\";\n>\n> mySql.Command.Parameters.Add(\"@textBox1\", SqlDbType.VarChar);\n> mySql.Command.Parameters[\"@textBox1\"].Value = TextBox1.Text;\n> mySql.Command.Parameters.Add(\"@textBox2\", SqlDbType.VarChar);\n> mySql.Command.Parameters[\"@textBox2\"].Value = TextBox2.Text;\n>\n> mySql.Command.ExecuteNonQuery();\n>\n> * mySql.Command.Dispose();\n> mySql.Connection.Close();\n> mySql.CloseConn();\n>\n>\n> Hi i have hilighted the line in which I wanted to ask doubts .\n>\n> Currently i am inserting one row in one time and then executing the query .\n>\n> So with this approach i need to execute it many times for multiple rows\n> insert because of this my database is poor in doing this each time for very\n> large data.\n>\n> What i want here is to insert multiple rows and then executing it in one\n> time only so that it will be faster.\n>\n> Please help me out in this regards .\n>\n> --\n> Thanks,\n> Keshav Upadhyaya\n>\n\n\n\n-- \nLets call it Postgres\n\nEnterpriseDB http://www.enterprisedb.com\n\ngurjeet[.singh]@EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com\nTwitter: singh_gurjeet\nSkype: singh_gurjeet\n\nMail sent from my BlackLaptop device\n\nTry the multi-row INSERT syntax:From the docs: To insert multiple rows using the multirow VALUES syntax:INSERT INTO films (code, title, did, date_prod, kind) VALUES    ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),\n\n    ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');Best regards,On Thu, Oct 8, 2009 at 12:09 PM, keshav upadhyaya <[email protected]> wrote:\nHi , I want to insert multiple Rows in one shot to improve my performance . \n\n From C# code I am using ADO .net to connect to postgres . Currently i am pasting the code which is not of postgres but in my dev environment similar things  i am doing with Postgres. 
\n MySqlConnection mySql = new MySqlConnection();\n        mySql.CreateConn();        mySql.Command = mySql.Connection.CreateCommand();\n        mySql.Command.CommandText = \"INSERT INTO dbo.table1 (textBox1, textBox2) VALUES (@textBox1, @textBox2)\";\n                mySql.Command.Parameters.Add(\"@textBox1\", SqlDbType.VarChar);\n        mySql.Command.Parameters[\"@textBox1\"].Value = TextBox1.Text;        mySql.Command.Parameters.Add(\"@textBox2\", SqlDbType.VarChar);\n        mySql.Command.Parameters[\"@textBox2\"].Value = TextBox2.Text;        mySql.Command.ExecuteNonQuery();              \n         mySql.Command.Dispose();\n        mySql.Connection.Close();        mySql.CloseConn();\nHi i have hilighted the line in  which I wanted to ask doubts . Currently i am inserting one row in one time and then executing the query . So with this approach i need to execute it many times for multiple rows insert because of this my database is poor in doing this each time for very large data. \nWhat i want here is to insert multiple rows and then executing it in one time only so that it will be faster. Please help me out in this regards . -- Thanks,Keshav Upadhyaya\n-- Lets call it PostgresEnterpriseDB      http://www.enterprisedb.comgurjeet[.singh]@EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.comTwitter: singh_gurjeetSkype: singh_gurjeetMail sent from my BlackLaptop device", "msg_date": "Thu, 8 Oct 2009 12:32:06 +0530", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding mulitple rows insert in one shot using ADO\n\t.net connected to postgres" }, { "msg_contents": "Hi Gurjeet ,\nTHanks for ur help , But from ADO .net we have to go through the code which\ni have written in RED .\n\nIf I am using direct sql command then I can do the way u have suggested .\n\nBut from the ADO .net i need to get the *CommandText and then add the\nvalues of each parameter after that\nexecute Query .\n\nWhat I want here is I will load the multi row data as parameter and then\nexecute Query only once .\n\nThanks\nkeshav\n\n\n*\nOn Thu, Oct 8, 2009 at 12:32 PM, Gurjeet Singh <[email protected]>wrote:\n\n> Try the multi-row INSERT syntax:\n>\n> From the docs:\n>\n> To insert multiple rows using the multirow VALUES syntax:\n>\n> INSERT INTO films (code, title, did, date_prod, kind) VALUES\n> ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),\n> ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');\n>\n>\n> Best regards,\n>\n>\n> On Thu, Oct 8, 2009 at 12:09 PM, keshav upadhyaya <[email protected]>wrote:\n>\n>> Hi ,\n>>\n>> I want to insert multiple Rows in one shot to improve my performance .\n>>\n>> From C# code I am using ADO .net to connect to postgres .\n>> Currently i am pasting the code which is not of postgres but in my dev\n>> environment similar things i am doing with Postgres.\n>>\n>> MySqlConnection mySql = new MySqlConnection();\n>> mySql.CreateConn();\n>> mySql.Command = mySql.Connection.CreateCommand();\n>> * mySql.Command.CommandText = \"INSERT INTO dbo.table1 (textBox1,\n>> textBox2) VALUES (@textBox1, @textBox2)\";\n>>\n>> mySql.Command.Parameters.Add(\"@textBox1\", SqlDbType.VarChar);\n>> mySql.Command.Parameters[\"@textBox1\"].Value = TextBox1.Text;\n>> mySql.Command.Parameters.Add(\"@textBox2\", SqlDbType.VarChar);\n>> mySql.Command.Parameters[\"@textBox2\"].Value = TextBox2.Text;\n>>\n>> mySql.Command.ExecuteNonQuery();\n>>\n>> * mySql.Command.Dispose();\n>> mySql.Connection.Close();\n>> mySql.CloseConn();\n>>\n>>\n>> Hi i 
have hilighted the line in which I wanted to ask doubts .\n>>\n>> Currently i am inserting one row in one time and then executing the query\n>> .\n>> So with this approach i need to execute it many times for multiple rows\n>> insert because of this my database is poor in doing this each time for very\n>> large data.\n>>\n>> What i want here is to insert multiple rows and then executing it in one\n>> time only so that it will be faster.\n>>\n>> Please help me out in this regards .\n>>\n>> --\n>> Thanks,\n>> Keshav Upadhyaya\n>>\n>\n>\n>\n> --\n> Lets call it Postgres\n>\n> EnterpriseDB http://www.enterprisedb.com\n>\n> gurjeet[.singh]@EnterpriseDB.com\n>\n> singh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com\n> Twitter: singh_gurjeet\n> Skype: singh_gurjeet\n>\n> Mail sent from my BlackLaptop device\n>\n\n\n\n-- \nThanks,\nKeshav Upadhyaya\n\nHi Gurjeet , THanks for ur help , But from ADO .net we have to go through the code which i have written in RED . If I am using direct sql command then I can do the way u have suggested . But from the ADO .net i need to get the CommandText  and then add the values of each parameter after that \nexecute Query . What I want here is I will load the multi row data as parameter and then execute Query only once .Thanks keshav  \n On Thu, Oct 8, 2009 at 12:32 PM, Gurjeet Singh <[email protected]> wrote:\nTry the multi-row INSERT syntax:From the docs: To insert multiple rows using the multirow VALUES syntax:INSERT INTO films (code, title, did, date_prod, kind) VALUES    ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),\n\n\n    ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');Best regards,On Thu, Oct 8, 2009 at 12:09 PM, keshav upadhyaya <[email protected]> wrote:\nHi , I want to insert multiple Rows in one shot to improve my performance . \n\n\n>From C# code I am using ADO .net to connect to postgres . Currently i am pasting the code which is not of postgres but in my dev environment similar things  i am doing with Postgres. \n MySqlConnection mySql = new MySqlConnection();\n        mySql.CreateConn();        mySql.Command = mySql.Connection.CreateCommand();\n        mySql.Command.CommandText = \"INSERT INTO dbo.table1 (textBox1, textBox2) VALUES (@textBox1, @textBox2)\";\n                mySql.Command.Parameters.Add(\"@textBox1\", SqlDbType.VarChar);\n        mySql.Command.Parameters[\"@textBox1\"].Value = TextBox1.Text;        mySql.Command.Parameters.Add(\"@textBox2\", SqlDbType.VarChar);\n        mySql.Command.Parameters[\"@textBox2\"].Value = TextBox2.Text;        mySql.Command.ExecuteNonQuery();              \n         mySql.Command.Dispose();\n        mySql.Connection.Close();        mySql.CloseConn();\nHi i have hilighted the line in  which I wanted to ask doubts . Currently i am inserting one row in one time and then executing the query . So with this approach i need to execute it many times for multiple rows insert because of this my database is poor in doing this each time for very large data. \nWhat i want here is to insert multiple rows and then executing it in one time only so that it will be faster. Please help me out in this regards . 
-- Thanks,Keshav Upadhyaya\n-- Lets call it PostgresEnterpriseDB      http://www.enterprisedb.comgurjeet[.singh]@EnterpriseDB.com\n\n\nsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.comTwitter: singh_gurjeetSkype: singh_gurjeetMail sent from my BlackLaptop device\n\n-- Thanks,Keshav Upadhyaya", "msg_date": "Thu, 8 Oct 2009 12:59:46 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regarding mulitple rows insert in one shot using ADO\n\t.net connected to postgres" } ]
[ { "msg_contents": "HiI have a database and ~150 clients non-stop writing to this database quite\nbig pieces of text.\nI have a performacne problem so I tried to increase log level, so I could\nsee which queries take most time.\nMy postgresql.conf (Log section) is:\n\nlog_destination = 'stderr'\nlogging_collector = on\nlog_rotation_size = 1GB\nlog_connections = on\nlog_line_prefix = '%m %p %u %d %r '\nlog_lock_waits = on\nlog_statement = 'ddl'\nlog_temp_files = 4096\n\nAnd I got the query times + query parameters values, which makes my log\nextremly big.\nHow can I set the logging parameters to write query + duration time but\nwithout parameter values?\n\nThanks\nLudwik\n\n\n\n\n\n-- \nLudwik Dyląg\n\nHiI have a database and ~150 clients non-stop writing to this database quite big pieces of text.I have a performacne problem so I tried to increase log level, so I could see which queries take most time.\nMy postgresql.conf (Log section) is:log_destination = 'stderr'logging_collector = onlog_rotation_size = 1GBlog_connections = onlog_line_prefix = '%m %p %u %d %r '                     \nlog_lock_waits = on   log_statement = 'ddl' log_temp_files = 4096 And I got the query times + query parameters values, which makes my log extremly big.\nHow can I set the logging parameters to write query + duration time but without parameter values?Thanks Ludwik \n-- Ludwik Dyląg", "msg_date": "Thu, 8 Oct 2009 11:19:26 +0200", "msg_from": "Ludwik Dylag <[email protected]>", "msg_from_op": true, "msg_subject": "Query logging time, not values" }, { "msg_contents": "In other SQL engines that I've used, it is recommended that the columns that\nare used in various indexes be placed at the beginning of a row since at\nsome point (depending on the engine and/or pagesize) wide rows could end up\non other pages.� From a performance standpoint on large tables this makes a\nbig difference.� Is the same true with Postgres.� Should I try and make sure\nthat my indexes fit in the first 8192 bytes?\n\n\n�\n\n\nBes Regards\n\n\n--\nMichael Gould, Managing Partner\nIntermodal Software Solutions, LLC\n904.226.0978\n904.592.5250 fax\n\n\nIn other SQL engines that I've used, it is recommended that the columns that are used in various indexes be placed at the beginning of a row since at some point (depending on the engine and/or pagesize) wide rows could end up on other pages.  From a performance standpoint on large tables this makes a big difference.  Is the same true with Postgres.  Should I try and make sure that my indexes fit in the first 8192 bytes?\n \nBes RegardsMichael Gould, Managing Partner\nIntermodal Software Solutions, LLC\n904.226.0978\n904.592.5250 fax", "msg_date": "Thu, 8 Oct 2009 08:54:07 -0500", "msg_from": "Michael Gould <[email protected]>", "msg_from_op": false, "msg_subject": "position in DDL of columns used in indexes" }, { "msg_contents": "On Thu, 8 Oct 2009, Michael Gould wrote:\n> In other SQL engines that I've used, it is recommended that the columns that are used in\n> various indexes be placed at the beginning of a row since at some point (depending on the\n> engine and/or pagesize) wide rows could end up on other pages.  From a performance\n> standpoint on large tables this makes a big difference.  Is the same true with Postgres. \n> Should I try and make sure that my indexes fit in the first 8192 bytes?\n\nInteresting question. AFAIK (I'm not an expert, someone correct me):\n\nPostgres does not split rows across multiple pages, so this should never \nbe a concern. 
When a row is too big for a page, Postgres will select the \nlarger of the columns from the row and compress them. If that fails to \nbring the row size down, then Postgres will select the larger columns and \nremove them to a separate storage area, and leave just the references in \nthe actual row. Therefore, the order of columns should not matter.\n\nMoreover, whether a row is used in an index should not make any \ndifference. The index stores the values too, right? Postgres will look up \nin the index, and then fetch the rows, in two separate operations.\n\nMatthew\n\n-- \n Let's say I go into a field and I hear \"baa baa baa\". Now, how do I work \n out whether that was \"baa\" followed by \"baa baa\", or if it was \"baa baa\"\n followed by \"baa\"?\n - Computer Science Lecturer", "msg_date": "Thu, 8 Oct 2009 15:36:50 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: position in DDL of columns used in indexes" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Postgres does not split rows across multiple pages, so this should never \n> be a concern. When a row is too big for a page, Postgres will select the \n> larger of the columns from the row and compress them. If that fails to \n> bring the row size down, then Postgres will select the larger columns and \n> remove them to a separate storage area, and leave just the references in \n> the actual row. Therefore, the order of columns should not matter.\n\n> Moreover, whether a row is used in an index should not make any \n> difference. The index stores the values too, right? Postgres will look up \n> in the index, and then fetch the rows, in two separate operations.\n\nYeah. There can be a small performance advantage to putting the more\nfrequently accessed columns first (so you don't have to skip over other\ncolumns to get to them). This has nothing directly to do with whether\nthey are indexed, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Oct 2009 11:08:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: position in DDL of columns used in indexes " } ]
[ { "msg_contents": "We have been using partitioning for some time with great success. Up \nuntil now our usage has not included ordering and now that we are trying \nto use an ORDER BY against an indexed column a rather significant \nshortcoming seems to be kicking in.\n\nParent table (have cut all but 4 columns to make it easier to post about)\nCREATE TABLE people\n(\n person_id character varying(36) NOT NULL,\n list_id character varying(36) NOT NULL,\n first_name character varying(255),\n last_name character varying(255),\n CONSTRAINT people_pkey (person_id, list_id)\n);\n\nA partition looks like this:\nCREATE TABLE people_list1\n(\n -- inherited columns omitted\n CONSTRAINT people_list1_list_id_check CHECK (list_id::text = \n'the_unique_list_id'::text)\n)\nINHERITS (people);\n\nBoth the parent and the children have indexes on all 4 columns mentioned \nabove. The parent table is completely empty.\n\nIf I run this query, directly against the partition, performance is \nexcellent:\nselect * from people_list1 order by first_name asc limit 50;\n\nThe explain analyze output:\n Limit (cost=0.00..4.97 rows=50 width=34315) (actual \ntime=49.616..522.464 rows=50 loops=1)\n -> Index Scan using idx_people_first_name_list1 on people_list1 \n(cost=0.00..849746.98 rows=8544854 width=34315) (actual \ntime=49.614..522.424 rows=50 loops=1)\n Total runtime: 522.773 ms\n\nIf I run this query, against the parent, performance is terrible:\nselect * from people where list_id = 'the_unique_list_id' order by \nfirst_name asc limit 50;\n\nThe explain analyze output:\n Limit (cost=726844.88..726845.01 rows=50 width=37739) (actual \ntime=149864.869..149864.884 rows=50 loops=1)\n -> Sort (cost=726844.88..748207.02 rows=8544855 width=37739) \n(actual time=149864.868..149864.876 rows=50 loops=1)\n Sort Key: public.people.first_name\n Sort Method: top-N heapsort Memory: 50kB\n -> Result (cost=0.00..442990.94 rows=8544855 width=37739) \n(actual time=0.081..125837.332 rows=8545138 loops=1)\n -> Append (cost=0.00..442990.94 rows=8544855 \nwidth=37739) (actual time=0.079..111103.743 rows=8545138 loops=1)\n -> Index Scan using people_pkey on people \n(cost=0.00..4.27 rows=1 width=37739) (actual time=0.008..0.008 rows=0 \nloops=1)\n Index Cond: ((list_id)::text = \n'the_unique_list_id'::text)\n -> Seq Scan on people_list1 people \n(cost=0.00..442986.67 rows=8544854 width=34315) (actual \ntime=0.068..109781.308 rows=8545138 loops=1)\n Filter: ((list_id)::text = \n'the_unique_list_id'::text)\n Total runtime: 149865.411 ms\n\nJust to show that partitions are setup correctly, this query also has \nexcellent performance:\nselect * from people where list_id = 'the_unique_list_id' and first_name \n= 'JOE';\n\nHere is the explain analyze for that:\n Result (cost=0.00..963.76 rows=482 width=37739) (actual \ntime=6.031..25.394 rows=2319 loops=1)\n -> Append (cost=0.00..963.76 rows=482 width=37739) (actual \ntime=6.029..21.340 rows=2319 loops=1)\n -> Index Scan using idx_people_first_name on people \n(cost=0.00..4.27 rows=1 width=37739) (actual time=0.010..0.010 rows=0 \nloops=1)\n Index Cond: ((first_name)::text = 'JOE'::text)\n Filter: ((list_id)::text = 'the_unique_list_id'::text)\n -> Bitmap Heap Scan on people_list1 people \n(cost=8.47..959.49 rows=481 width=34315) (actual time=6.018..20.968 \nrows=2319 loops=1)\n Recheck Cond: ((first_name)::text = 'JOE'::text)\n Filter: ((list_id)::text = 'the_unique_list_id'::text)\n -> Bitmap Index Scan on idx_people_first_name_list1 \n(cost=0.00..8.35 rows=481 width=0) (actual 
time=5.566..5.566 rows=2319 \nloops=1)\n Index Cond: ((first_name)::text = 'JOE'::text)\n Total runtime: 25.991 ms\n\n\nThis is Postgres 8.3.7 on the 2.6.28 kernel with constraint_exclusion \non. Our partitions are in the 8 - 15 million row range.\n\nI realize one option is to hit the partition directly instead of hitting \nthe parent table with the check constraint in the WHERE clause, but up \nuntil now we have been able to avoid needing partition-awareness in our \ncode. Perhaps we have hit upon something that will require breaking \nthat cleanliness but wanted to see if there were any workarounds.\n", "msg_date": "Thu, 08 Oct 2009 11:33:00 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioned Tables and ORDER BY" }, { "msg_contents": "We have similar problem and now we are try to find solution. When you\nexecute query on partion there is no sorting - DB use index to\nretrieve data and if you need let say 50 rows it reads 50 rows using\nindex. But when you execute on parent table query optymizer do this:\n\n -> Sort (cost=726844.88..748207.02 rows=8544855 width=37739)\n(actual time=149864.868..149864.876 rows=50 loops=1)\n\nit means 8544855 rows should be sorted and it takes long minutes. We\nhave simpler situation than you and I will try to find solution\ntommorow :)\n\nMichal Szymanski\nhttp://blog.szymanskich.net\nhttp://techblog.freeconet.pl/\n", "msg_date": "Sun, 11 Oct 2009 07:30:33 -0700 (PDT)", "msg_from": "Michal Szymanski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "I've described our problem here\nhttp://groups.google.pl/group/pgsql.performance/browse_thread/thread/54a7419381bd1565?hl=pl#\n Michal Szymanski\nhttp://blog.szymanskich.net\nhttp://techblog.freeconet.pl/\n\n", "msg_date": "Mon, 12 Oct 2009 07:25:04 -0700 (PDT)", "msg_from": "Michal Szymanski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "This seems like a pretty major weakness in PostgreSQL partitioning. I \nhave essentially settled on not being able to do queries against the \nparent table when I want to order the results. Going to have to use a \nHibernate interceptor or something similar to rewrite the statements so \nthey hit specific partitions, will be working on this in the coming week.\n\nThis weakness is a bummer though as it makes partitions a lot less \nuseful. Having to hit specific child tables by name isn't much \ndifferent than just creating separate tables and not using partitions at \nall.\n\nMichal Szymanski wrote:\n> I've described our problem here\n> http://groups.google.pl/group/pgsql.performance/browse_thread/thread/54a7419381bd1565?hl=pl#\n> Michal Szymanski\n> http://blog.szymanskich.net\n> http://techblog.freeconet.pl/\n>\n>\n> \n", "msg_date": "Sun, 18 Oct 2009 08:24:39 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski <[email protected]>wrote:\n\n> We have similar problem and now we are try to find solution. When you\n> execute query on partion there is no sorting - DB use index to\n> retrieve data and if you need let say 50 rows it reads 50 rows using\n> index. 
But when you execute on parent table query optymizer do this:\n>\n> -> Sort (cost=726844.88..748207.02 rows=8544855 width=37739)\n> (actual time=149864.868..149864.876 rows=50 loops=1)\n>\n> it means 8544855 rows should be sorted and it takes long minutes.\n\nThe figures in first parenthesis are estimates, not the actual row count.\nIf you think it is too low, increase statistic target for that column.\n\nWe\n> have simpler situation than you and I will try to find solution\n> tommorow :)\n>\n> Michal Szymanski\n> http://blog.szymanskich.net\n> http://techblog.freeconet.pl/\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nGJ\n\nOn Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski <[email protected]> wrote:\nWe have similar problem and now we are try to find solution. When you\nexecute query on partion there is no sorting - DB use index to\nretrieve data and if you need let say 50 rows it reads 50 rows using\nindex. But when you execute on parent table query optymizer do this:\n\n  ->  Sort  (cost=726844.88..748207.02 rows=8544855 width=37739)\n(actual time=149864.868..149864.876 rows=50 loops=1)\n\nit means 8544855 rows should be sorted and it takes long minutes. The figures in first parenthesis are estimates, not the actual row count. If you think it is too low, increase statistic target for that column. \nWe\nhave simpler situation than you and I will try to find solution\ntommorow :)\n\nMichal Szymanski\nhttp://blog.szymanskich.net\nhttp://techblog.freeconet.pl/\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- GJ", "msg_date": "Mon, 19 Oct 2009 09:24:45 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "2009/10/19 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski <[email protected]>\n> wrote:\n>>\n>> We have similar problem and now we are try to find solution. When you\n>> execute query on partion there is no sorting - DB use index to\n>> retrieve data and if you need let say 50 rows it reads 50 rows using\n>> index. But when you execute on parent table query optymizer do this:\n>>\n>>  ->  Sort  (cost=726844.88..748207.02 rows=8544855 width=37739)\n>> (actual time=149864.868..149864.876 rows=50 loops=1)\n>>\n>> it means 8544855 rows should be sorted and it takes long minutes.\n>\n> The figures in first parenthesis are estimates, not the actual row count.\n> If you think it is too low, increase statistic target for that column.\n\nIt's true that the figures in parentheses are estimates, it's usually\nbad when the estimated and actual row counts are different by 5 orders\nof magnitude, and that large of a difference is not usually fixed by\nincreasing the statistics target.\n\n...Robert\n", "msg_date": "Mon, 19 Oct 2009 12:08:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "2009/10/19 Robert Haas <[email protected]>\n\n> 2009/10/19 Grzegorz Jaśkiewicz <[email protected]>:\n> >\n> >\n> > On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski <[email protected]>\n> > wrote:\n> >>\n> >> We have similar problem and now we are try to find solution. 
When you\n> >> execute query on partion there is no sorting - DB use index to\n> >> retrieve data and if you need let say 50 rows it reads 50 rows using\n> >> index. But when you execute on parent table query optymizer do this:\n> >>\n> >> -> Sort (cost=726844.88..748207.02 rows=8544855 width=37739)\n> >> (actual time=149864.868..149864.876 rows=50 loops=1)\n> >>\n> >> it means 8544855 rows should be sorted and it takes long minutes.\n> >\n> > The figures in first parenthesis are estimates, not the actual row count.\n> > If you think it is too low, increase statistic target for that column.\n>\n> It's true that the figures in parentheses are estimates, it's usually\n> bad when the estimated and actual row counts are different by 5 orders\n> of magnitude, and that large of a difference is not usually fixed by\n> increasing the statistics target.\n>\n> I thought that this means, that either analyze was running quite a long\ntime ago, or that the value didn't made it to histogram. In the later case,\nthat's mostly case when your statistic target is low, or that the value is\nreally 'rare'.\n\n\n\n-- \nGJ\n\n2009/10/19 Robert Haas <[email protected]>\n2009/10/19 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski <[email protected]>\n> wrote:\n>>\n>> We have similar problem and now we are try to find solution. When you\n>> execute query on partion there is no sorting - DB use index to\n>> retrieve data and if you need let say 50 rows it reads 50 rows using\n>> index. But when you execute on parent table query optymizer do this:\n>>\n>>  ->  Sort  (cost=726844.88..748207.02 rows=8544855 width=37739)\n>> (actual time=149864.868..149864.876 rows=50 loops=1)\n>>\n>> it means 8544855 rows should be sorted and it takes long minutes.\n>\n> The figures in first parenthesis are estimates, not the actual row count.\n> If you think it is too low, increase statistic target for that column.\n\nIt's true that the figures in parentheses are estimates, it's usually\nbad when the estimated and actual row counts are different by 5 orders\nof magnitude, and that large of a difference is not usually fixed by\nincreasing the statistics target.\nI thought that this means, that either analyze was running quite a long time ago, or that the value didn't made it to histogram. In the later case, that's mostly case when your statistic target is low, or that the value is really 'rare'.\n -- GJ", "msg_date": "Mon, 19 Oct 2009 17:13:38 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "Joe Uhl wrote:\n> This seems like a pretty major weakness in PostgreSQL partitioning. I \n> have essentially settled on not being able to do queries against the \n> parent table when I want to order the results. Going to have to use a \n> Hibernate interceptor or something similar to rewrite the statements so \n> they hit specific partitions, will be working on this in the coming week.\n> \n> This weakness is a bummer though as it makes partitions a lot less \n> useful. Having to hit specific child tables by name isn't much \n> different than just creating separate tables and not using partitions at \n> all.\n\nI wonder if the \"offset 0\" trick would work here? I was told (for a different question) that the planner can't merge levels if there's an offset or limit on a subquery. So you might be able to do something like this:\n\n select ... from (select ... 
offset 0) as foo order by ...\n\nIn other words, put your primary query as a sub-select without the sort criterion, with the \"offset 0\" as a sort of roadblock that the planner can't get past. Then the outer select does the sorting, without affecting the plan for the inner select.\n\nCraig\n", "msg_date": "Mon, 19 Oct 2009 10:16:27 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "2009/10/19 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> 2009/10/19 Robert Haas <[email protected]>\n>>\n>> 2009/10/19 Grzegorz Jaśkiewicz <[email protected]>:\n>> >\n>> >\n>> > On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski <[email protected]>\n>> > wrote:\n>> >>\n>> >> We have similar problem and now we are try to find solution. When you\n>> >> execute query on partion there is no sorting - DB use index to\n>> >> retrieve data and if you need let say 50 rows it reads 50 rows using\n>> >> index. But when you execute on parent table query optymizer do this:\n>> >>\n>> >>  ->  Sort  (cost=726844.88..748207.02 rows=8544855 width=37739)\n>> >> (actual time=149864.868..149864.876 rows=50 loops=1)\n>> >>\n>> >> it means 8544855 rows should be sorted and it takes long minutes.\n>> >\n>> > The figures in first parenthesis are estimates, not the actual row\n>> > count.\n>> > If you think it is too low, increase statistic target for that column.\n>>\n>> It's true that the figures in parentheses are estimates, it's usually\n>> bad when the estimated and actual row counts are different by 5 orders\n>> of magnitude, and that large of a difference is not usually fixed by\n>> increasing the statistics target.\n>>\n> I thought that this means, that either analyze was running quite a long time\n> ago, or that the value didn't made it to histogram. In the later case,\n> that's mostly case when your statistic target is low, or that the value is\n> really 'rare'.\n\nIt's possible, but (1) most people are running autovacuum these days,\nin which case this isn't likely to occur and (2) most people do not\nmanage to expand the size of a table by five orders of magnitude\nwithout analyzing it. Generally these kinds of problems come from bad\nselectivity estimates.\n\nIn this case, though, I think that the actual number is less than the\nestimate because of the limit node immediately above. The problem is\njust that a top-N heapsort requires scanning the entire set of rows,\nand scanning 8 million rows is slow.\n\n...Robert\n", "msg_date": "Mon, 19 Oct 2009 15:10:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" }, { "msg_contents": "\n\nOn 10/19/09 12:10 PM, \"Robert Haas\" <[email protected]> wrote:\n\n> 2009/10/19 Grzegorz Jaśkiewicz <[email protected]>:\n>> \n>> \n>> 2009/10/19 Robert Haas <[email protected]>\n>>> \n>>> 2009/10/19 Grzegorz Jaśkiewicz <[email protected]>:\n>>>> \n>>>> \n>>>> On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski <[email protected]>\n>>>> wrote:\n>>>>> \n>>>>> We have similar problem and now we are try to find solution. When you\n>>>>> execute query on partion there is no sorting - DB use index to\n>>>>> retrieve data and if you need let say 50 rows it reads 50 rows using\n>>>>> index. 
But when you execute on parent table query optymizer do this:\n>>>>> \n>>>>>  ->  Sort  (cost=726844.88..748207.02 rows=8544855 width=37739)\n>>>>> (actual time=149864.868..149864.876 rows=50 loops=1)\n>>>>> \n>>>>> it means 8544855 rows should be sorted and it takes long minutes.\n>>>> \n>>>> The figures in first parenthesis are estimates, not the actual row\n>>>> count.\n>>>> If you think it is too low, increase statistic target for that column.\n>>> \n>>> It's true that the figures in parentheses are estimates, it's usually\n>>> bad when the estimated and actual row counts are different by 5 orders\n>>> of magnitude, and that large of a difference is not usually fixed by\n>>> increasing the statistics target.\n>>> \n>> I thought that this means, that either analyze was running quite a long time\n>> ago, or that the value didn't made it to histogram. In the later case,\n>> that's mostly case when your statistic target is low, or that the value is\n>> really 'rare'.\n> \n> It's possible, but (1) most people are running autovacuum these days,\n> in which case this isn't likely to occur and (2) most people do not\n> manage to expand the size of a table by five orders of magnitude\n> without analyzing it. Generally these kinds of problems come from bad\n> selectivity estimates.\n> \n\nAlso, with partitioning the \"combined\" statistics of multiple tables is just\nplain wrong much of the time. It makes some worst case assumptions about\nthe number of distinct values when merging multiple table results (even with\n100% overlap and all unique values in the stats columns), and at least in\n8.3 (haven't looked in 8.4) the row width estimate is the max of all the\nchild tables, not an average or weighted average. So even with 100% perfect\nstatistics on each individual table, do a scan over a few dozen partitions\n(or a couple hundred) and the summary stats can be way off. The tendency is\nto sometimes significantly overestimate the number of distinct values.\n\n\n\n> In this case, though, I think that the actual number is less than the\n> estimate because of the limit node immediately above. The problem is\n> just that a top-N heapsort requires scanning the entire set of rows,\n> and scanning 8 million rows is slow.\n> \n> ...Robert\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 22 Oct 2009 15:08:00 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables and ORDER BY" } ]
[ { "msg_contents": "Hey all, it's been a bit however I'm running into some issues with my\nconcurrent index\n\n\nAlways get this error during a concurrent index.\n\n\n\n*2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(46706) ERROR:\ndeadlock detected*\n\n*2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(46706)\nDETAIL: Process 20939 waits for ShareLock on virtual transaction\n16/43817381; blocked by process 1874.*\n\n* Process 1874 waits for ExclusiveLock on relation 17428 of database\n16384; blocked by process 20939.*\n\n*2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(46706)\nSTATEMENT: CREATE INDEX CONCURRENTLY prc_temp_idx_impressions_log_date2 ON\ntracking.impressions USING btree (log_date) TABLESPACE trackingindexspace*\n\n\n\nThis happens all the time, so it's not the occasional deadlock. We even\nturned off all applications that insert into the database and it still\nfails.\n\nTried restarting the database as well.\n\nAlso when looking at active connections there is no process 1874.\n\n\nSo I'm at a lost, this first started happening in my slave DB (Slon\nreplication), but it is now happening on my master which is odd.\n\n\nAny idea?\n\npostgres 8.3.4\n\nLinux system.\n\nHey all, it's been a bit however I'm running into some issues with my concurrent indexAlways get this error during a concurrent index.\n \n2009-10-07 22:18:02 PDT admissionclsdb postgres \n10.13.200.70(46706) ERROR:  deadlock detected\n2009-10-07 22:18:02 PDT admissionclsdb postgres \n10.13.200.70(46706) DETAIL:  Process 20939 waits for ShareLock on virtual \ntransaction 16/43817381; blocked by process 1874.\n        Process 1874 waits for ExclusiveLock on relation \n17428 of database 16384; blocked by process 20939.\n2009-10-07 22:18:02 PDT admissionclsdb postgres \n10.13.200.70(46706) STATEMENT:  CREATE INDEX CONCURRENTLY  \nprc_temp_idx_impressions_log_date2 ON tracking.impressions USING btree \n(log_date) TABLESPACE trackingindexspace\n \nThis happens all the time, so it's not the occasional \ndeadlock. We even turned off all applications that insert into the database and \nit still fails. \nTried restarting the database as well. \nAlso when looking at active connections there is no process \n1874.So I'm at a lost, this first started happening in my slave DB (Slon replication), but it is now happening on my master which is odd.\nAny idea? postgres 8.3.4Linux system.", "msg_date": "Thu, 8 Oct 2009 10:27:30 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "concurrent reindex issues" }, { "msg_contents": "Tory M Blue <[email protected]> writes:\n> *2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(46706) ERROR:\n> deadlock detected*\n\n> *2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(46706)\n> DETAIL: Process 20939 waits for ShareLock on virtual transaction\n> 16/43817381; blocked by process 1874.*\n\n> * Process 1874 waits for ExclusiveLock on relation 17428 of database\n> 16384; blocked by process 20939.*\n\n> *2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(46706)\n> STATEMENT: CREATE INDEX CONCURRENTLY prc_temp_idx_impressions_log_date2 ON\n> tracking.impressions USING btree (log_date) TABLESPACE trackingindexspace*\n\nHmm. 
I suppose that 20939 was running the CREATE INDEX CONCURRENTLY,\nand what it's trying to do with the ShareLock on a VXID is wait for some\nother transaction to terminate so that it can safely complete the index\ncreation (because the index might be invalid from the point of view of\nthat other transaction). But the other transaction is waiting for\nExclusiveLock on what I assume is the table being indexed (did you check\nwhat relation that OID is?).\n\nAFAIK there are no built-in operations that take ExclusiveLock on user\ntables, which means that 1874 would have had to be issuing an explicit\n\tLOCK TABLE tracking.impressions IN EXCLUSIVE MODE\ncommand. Perhaps that will help you track down what it was.\n\n> So I'm at a lost, this first started happening in my slave DB (Slon\n> replication), but it is now happening on my master which is odd.\n\nI wouldn't be too surprised if the LOCK is coming from some Slony\noperation or other. You might want to ask the slony hackers about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Oct 2009 13:55:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent reindex issues " }, { "msg_contents": "On Thu, Oct 8, 2009 at 11:55 AM, Tom Lane <[email protected]> wrote:\n> I wouldn't be too surprised if the LOCK is coming from some Slony\n> operation or other.  You might want to ask the slony hackers about it.\n\nI've had issues like this. Shutting down the slon daemons before\nrunning such commands would usually allow them to finish, at the cost\nof replication falling behind while it runs.\n", "msg_date": "Thu, 8 Oct 2009 14:04:55 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent reindex issues" }, { "msg_contents": "More update\n\nIf I run the concurrent re index locally (psql session) it works fine, but\nwhen run via a connection through php I get the error\n\nCan't be slon, since I can do this locally, but why would postgres have an\nissue with a remote connection?\n\nthe basic script:\n $connectString = \"host=server dbname=clsdb user=postgres\npassword=password\";\n $dbconn = pg_connect($connectString);\n if (!$dbconn) {\n print \"Could not connect\";\n die();\n }\n $result = pg_query( $dbconn, \"CREATE INDEX CONCURRENTLY\nprc_temp_idx_impressions_log_date ON tracking.impressions USING btree\n(log_date) TABLESPACE trackingindexspace;\" );\n\nError about a lock:\n\n*2009-10-07 22:18:02 PDT clsdb postgres 10.13.200.70(46706) ERROR: deadlock\ndetected*\n\n*2009-10-07 22:18:02 PDT clsdb postgres 10.13.200.70(46706) DETAIL: Process\n20939 waits for ShareLock on virtual transaction 16/43817381; blocked by\nprocess 1874.*\n\n* Process 1874 waits for ExclusiveLock on relation 17428 of database\n16384; blocked by process 20939.*\n\n*2009-10-07 22:18:02 PDT clsdb postgres 10.13.200.70(46706) STATEMENT:\nCREATE INDEX CONCURRENTLY prc_temp_idx_impressions_log_date2 ON\ntracking.impressions USING btree (log_date) TABLESPACE trackingindexspace*\n\n\n*Thanks*\n\n*Tory\n*\n\nMore updateIf I run the concurrent re index locally (psql session) it works fine, but when run via a connection through php I get the errorCan't be slon, since I can do this locally, but why would postgres have an issue with a remote connection?\nthe basic script:  $connectString = \"host=server dbname=clsdb user=postgres password=password\";         $dbconn = pg_connect($connectString);         if (!$dbconn) {                 print \"Could not connect\"; \n                die();   
      }           $result = pg_query( $dbconn, \"CREATE INDEX CONCURRENTLY  prc_temp_idx_impressions_log_date ON tracking.impressions USING btree (log_date) TABLESPACE trackingindexspace;\" ); \nError about a lock:2009-10-07 22:18:02 PDT clsdb postgres \n10.13.200.70(46706) ERROR:  deadlock detected\n2009-10-07 22:18:02 PDT clsdb postgres \n10.13.200.70(46706) DETAIL:  Process 20939 waits for ShareLock on virtual \ntransaction 16/43817381; blocked by process 1874.\n        Process 1874 waits for ExclusiveLock on relation \n17428 of database 16384; blocked by process 20939.\n2009-10-07 22:18:02 PDT clsdb postgres \n10.13.200.70(46706) STATEMENT:  CREATE INDEX CONCURRENTLY  \nprc_temp_idx_impressions_log_date2 ON tracking.impressions USING btree \n(log_date) TABLESPACE trackingindexspaceThanksTory", "msg_date": "Fri, 9 Oct 2009 10:58:06 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: concurrent reindex issues" } ]
[ { "msg_contents": "Hi there\n\nWe are runing Postgres 8.3.7 on a\nWe have a problem with Explain Analyze that we haven't seen before.\n\n we run an Explain Analyze on a query.\n\n Nested Loop (cost=1256.32..2097.31 rows=198 width=1120) (actual\ntime=12.958..20.846 rows=494 loops=1)\n -> HashAggregate (cost=1256.32..1256.92 rows=198 width=4) (actual\ntime=12.936..13.720 rows=494 loops=1)\n -> Limit (cost=1255.53..1255.63 rows=198 width=20) (actual\ntime=9.841..11.901 rows=500 loops=1)\n -> Sort (cost=1255.53..1255.63 rows=198 width=20) (actual\ntime=9.838..10.588 rows=500 loops=1)\n Sort Key: ((abs((ri_metadata.latitude -\n44.0247062::double precision)) + abs((ri_metadata.longitude -\n(-88.5426136)::double precision))))\n Sort Method: quicksort Memory: 52kB\n -> Bitmap Heap Scan on ri_metadata\n(cost=385.54..1254.02 rows=198 width=20) (actual time=4.638..8.558 rows=595\nloops=1)\n Recheck Cond: ((latitude > 43.6687062::double\nprecision) AND (latitude < 44.3807062::double precision) AND (longitude >\n(-88.8986136)::double precision) AND (longitude < (-88.1866136)::double\nprecision))\n Filter: (category_id = ANY\n('{3,274,4,1,2,275,7,278,8,277,5,280,6,279,11,9,10,15,285,16,14,19,18,17,24,23,21,266,25,32,31,30,29,40,34,48,41,44,313,54,53,55,50,52,62,61,63,302,58,57,59,71,341,69,67,338,68,337,65,66,339,79,352,77,78,74,85,83,324,81,334,335,336,372,376,373,374,363,122,368,127,128,356,355,360,359,118,358,357,140,139,138,137,143,142,141,132,130,129,135,134,191,185,186,187,188,183,202,200,193,219}'::integer[]))\n -> Bitmap Index Scan on\nri_metadata_latitude_longitude_category_id_idx (cost=0.00..385.53 rows=462\nwidth=0) (actual time=4.533..4.533 rows=1316 loops=1)\n Index Cond: ((latitude > 43.6687062::double\nprecision) AND (latitude < 44.3807062::double precision) AND (longitude >\n(-88.8986136)::double precision) AND (longitude < (-88.1866136)::double\nprecision))\n -> Index Scan using ri_result_result_id_idx on ri_result\n(cost=0.00..4.24 rows=1 width=1120) (actual time=0.006..0.008 rows=1\nloops=494)\n Index Cond: (ri_result.result_id = ri_metadata.result_id)\n Total runtime: 21.658 ms\n(14 rows)\n\nIt takes only *21* ms. Then we run the same query on psql (on localhost)\nwith timing on\n\n select * from ri_result where result_id in\n (select result_id from ri_metadata\n WHERE category_id IN\n(3,274,4,1,2,275,7,278,8,277,5,280,6,279,11,9,10,15,285,16,14,19,18,17,24,23,21,266,25,32,31,30,29,40,34,48,41,44,313,54,53\n ,55,50,52,62,61,63,302,58,57,59,71,341,69,67,338,68,337,65,66,339,79,352,77,78,74,85,83,324,81,334,335,336,372,376,373,374,363,122,368,127,\n 128,356,355,360,359,118,358,357,140,139,138,137,143,142,141,132,130,129,135,134,191,185,186,187,188,183,202,200,193,219)\nAND latitude >43.668706199999995 AND latitude <44.3807062 AND\nlongitude>-88.89861359999999 AND longitude<-88.1866136\n order by abs(latitude - 44.0247062)+abs(longitude - -88.5426136) limit\n500)\n;\nTime: 2611.491 ms\n\nThe longer runtime from psql console is corroborated when we do same\nqueries via JDBC\nHow can explain-analyze return significantly much faster than other means?\n(I understand that some data is returned from the query to either client end\non psql or to a resultset in jdbc as opposed to a small chunk of data in\nthe form of a plan but still..,)\nBy the way, I run the explain analyze first and then run the query to avoid\nany caching.\n\nOur system : Ubuntu Ubuntu 8.04.3 64 bit, 8GB RAM ,2 GHz single core,\nrunning a vm on an esx server. 
the database is read-only.\n\n\n\nri_metadata has 1473864 rows, 200MB,\n Table \"public.ri_metadata\"\n Column | Type |\nModifiers\n----------------+-----------------------------+----------------------------------------------------------------------\n ri_metadata_id | integer | not null default\nnextval('ri_metadata_ri_metadata_id_seq'::regclass)\n result_id | integer | not null\n start_time | timestamp without time zone | not null\n end_time | timestamp without time zone | not null\n category_id | integer | not null\n category_name | text | not null\n location_id | integer |\n longitude | double precision |\n latitude | double precision |\n city | text |\n state | text |\n zip | text |\n street_address | text |\nIndexes:\n \"ri_metadata_pkey\" PRIMARY KEY, btree (ri_metadata_id)\n \"ri_metadata_category_id_idx\" btree (category_id)\n \"ri_metadata_category_id_state\" btree (category_id, state)\n \"ri_metadata_end_time_idx\" btree (end_time)\n \"ri_metadata_latitude_idx\" btree (latitude)\n \"ri_metadata_latitude_longitude_category_id_idx\" btree (latitude,\nlongitude, category_id)\n \"ri_metadata_location_id_idx\" btree (location_id)\n \"ri_metadata_longitude_idx\" btree (longitude)\n \"ri_metadata_result_id_idx\" btree (result_id)\n \"ri_metadata_start_time_idx\" btree (start_time)\n \"ri_metadata_state_idx\" btree (state)\n\n\nri_result has 1323061 rows, 3.3GB total size\n\n Table \"public.ri_result\"\n Column | Type | Modifiers\n--------------+------------------+------------------------------------------------------------------\n ri_result_id | integer | not null default\nnextval('ri_result_ri_result_id_seq'::regclass)\n result_id | integer | not null\n facets | bytea | not null\n props | bytea | not null\n random | double precision |\nIndexes:\n \"ri_result_pkey\" PRIMARY KEY, btree (ri_result_id)\n \"ri_result_random_idx\" btree (random)\n \"ri_result_result_id_idx\" btree (result_id)\n\nHi thereWe are runing Postgres 8.3.7 on a \nWe have a problem with Explain Analyze that we haven't seen before. 
\n \nwe run an Explain Analyze on a query.\n\n Nested Loop  (cost=1256.32..2097.31 rows=198 width=1120) (actual time=12.958..20.846 rows=494 loops=1)\n   ->  HashAggregate  (cost=1256.32..1256.92 rows=198 width=4) (actual time=12.936..13.720 rows=494 loops=1)\n         ->  Limit  (cost=1255.53..1255.63 rows=198 width=20) (actual time=9.841..11.901 rows=500 loops=1)\n               ->  Sort  (cost=1255.53..1255.63 rows=198 width=20) (actual time=9.838..10.588 rows=500 loops=1)\n                     Sort Key: ((abs((ri_metadata.latitude -\n44.0247062::double precision)) + abs((ri_metadata.longitude -\n(-88.5426136)::double precision))))\n                     Sort Method:  quicksort  Memory: 52kB\n                     ->  Bitmap Heap Scan on ri_metadata \n(cost=385.54..1254.02 rows=198 width=20) (actual time=4.638..8.558\nrows=595 loops=1)\n                           Recheck Cond: ((latitude >\n43.6687062::double precision) AND (latitude < 44.3807062::double\nprecision) AND (longitude > (-88.8986136)::double precision) AND\n(longitude < (-88.1866136)::double precision))\n                           Filter: (category_id = ANY\n('{3,274,4,1,2,275,7,278,8,277,5,280,6,279,11,9,10,15,285,16,14,19,18,17,24,23,21,266,25,32,31,30,29,40,34,48,41,44,313,54,53,55,50,52,62,61,63,302,58,57,59,71,341,69,67,338,68,337,65,66,339,79,352,77,78,74,85,83,324,81,334,335,336,372,376,373,374,363,122,368,127,128,356,355,360,359,118,358,357,140,139,138,137,143,142,141,132,130,129,135,134,191,185,186,187,188,183,202,200,193,219}'::integer[]))\n\n                           ->  Bitmap Index Scan on\nri_metadata_latitude_longitude_category_id_idx  (cost=0.00..385.53\nrows=462 width=0) (actual time=4.533..4.533 rows=1316 loops=1)\n                                 Index Cond: ((latitude >\n43.6687062::double precision) AND (latitude < 44.3807062::double\nprecision) AND (longitude > (-88.8986136)::double precision) AND\n(longitude < (-88.1866136)::double precision))\n   ->  Index Scan using ri_result_result_id_idx on ri_result \n(cost=0.00..4.24 rows=1 width=1120) (actual time=0.006..0.008 rows=1\nloops=494)\n         Index Cond: (ri_result.result_id = ri_metadata.result_id)\n Total runtime: 21.658 ms\n(14 rows)\n\nIt takes only *21* ms. Then we run the same query on psql (on localhost) with timing on\n\n select * from ri_result where result_id in\n (select result_id from ri_metadata\n WHERE category_id IN\n(3,274,4,1,2,275,7,278,8,277,5,280,6,279,11,9,10,15,285,16,14,19,18,17,24,23,21,266,25,32,31,30,29,40,34,48,41,44,313,54,53\n ,55,50,52,62,61,63,302,58,57,59,71,341,69,67,338,68,337,65,66,339,79,352,77,78,74,85,83,324,81,334,335,336,372,376,373,374,363,122,368,127,\n 128,356,355,360,359,118,358,357,140,139,138,137,143,142,141,132,130,129,135,134,191,185,186,187,188,183,202,200,193,219)\nAND  latitude >43.668706199999995 AND latitude <44.3807062 AND\nlongitude>-88.89861359999999 AND longitude<-88.1866136\n   order by  abs(latitude  - 44.0247062)+abs(longitude - -88.5426136) limit 500)\n;\nTime: 2611.491 ms\n\nThe longer runtime from psql console is corroborated when we do same  queries via JDBC\nHow can  explain-analyze return significantly much faster than other\nmeans? 
(I understand that some data is returned from the query to\neither client end on psql or to a resultset in jdbc as opposed to a\nsmall  chunk of data in the form of a plan but still..,)\nBy the way, I run the explain analyze first and then run the query  to avoid any caching.\n\nOur system : Ubuntu Ubuntu 8.04.3 64 bit,  8GB RAM ,2 GHz single core, running a vm on an esx server. the database is read-only.\n\n\n\nri_metadata has 1473864 rows, 200MB, \n                                             Table \"public.ri_metadata\"\n     Column     |            Type             |                              Modifiers\n----------------+-----------------------------+----------------------------------------------------------------------\n ri_metadata_id | integer                     | not null default nextval('ri_metadata_ri_metadata_id_seq'::regclass)\n result_id      | integer                     | not null\n start_time     | timestamp without time zone | not null\n end_time       | timestamp without time zone | not null\n category_id    | integer                     | not null\n category_name  | text                        | not null\n location_id    | integer                     |\n longitude      | double precision            |\n latitude       | double precision            |\n city           | text                        |\n state          | text                        |\n zip            | text                        |\n street_address | text                        |\nIndexes:\n    \"ri_metadata_pkey\" PRIMARY KEY, btree (ri_metadata_id)\n    \"ri_metadata_category_id_idx\" btree (category_id)\n    \"ri_metadata_category_id_state\" btree (category_id, state)\n    \"ri_metadata_end_time_idx\" btree (end_time)\n    \"ri_metadata_latitude_idx\" btree (latitude)\n    \"ri_metadata_latitude_longitude_category_id_idx\" btree (latitude, longitude, category_id)\n    \"ri_metadata_location_id_idx\" btree (location_id)\n    \"ri_metadata_longitude_idx\" btree (longitude)\n    \"ri_metadata_result_id_idx\" btree (result_id)\n    \"ri_metadata_start_time_idx\" btree (start_time)\n    \"ri_metadata_state_idx\" btree (state)\n\n\nri_result has 1323061 rows, 3.3GB total size\n\n                                      Table \"public.ri_result\"\n    Column    |       Type       |                            Modifiers\n--------------+------------------+------------------------------------------------------------------\n ri_result_id | integer          | not null default nextval('ri_result_ri_result_id_seq'::regclass)\n result_id    | integer          | not null\n facets       | bytea            | not null\n props        | bytea            | not null\n random       | double precision |\nIndexes:\n    \"ri_result_pkey\" PRIMARY KEY, btree (ri_result_id)\n    \"ri_result_random_idx\" btree (random)\n    \"ri_result_result_id_idx\" btree (result_id)", "msg_date": "Thu, 8 Oct 2009 16:30:37 -0400", "msg_from": "G B <[email protected]>", "msg_from_op": true, "msg_subject": "Explain Analyze returns faster than psql or JDBC calls." }, { "msg_contents": "G B <[email protected]> writes:\n> How can explain-analyze return significantly much faster than other means?\n\nIf the returned data is large or takes a lot of work to convert to text,\nthis could happen, since EXPLAIN ANALYZE doesn't bother to format the\ntuples for display. 
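One rough way to gauge that cost (only a sketch -- the column names come from\nthe table definitions you posted) is to check how wide the stored bytea\nvalues actually are:\n\n    SELECT avg(octet_length(facets)) AS avg_facets_bytes,\n           avg(octet_length(props))  AS avg_props_bytes\n    FROM ri_result;\n\nThe wider those values are, the more the text conversion and network transfer\nwill dominate the runtime you see from psql or JDBC.\n\n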
How big are those bytea columns, on average?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Oct 2009 17:18:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain Analyze returns faster than psql or JDBC calls. " }, { "msg_contents": "You should also keep in mind that JDBC uses prepared statements, so you have\nto explain analyze accordingly.\n\nDave\n\nOn Thu, Oct 8, 2009 at 5:18 PM, Tom Lane <[email protected]> wrote:\n\n> G B <[email protected]> writes:\n> > How can explain-analyze return significantly much faster than other\n> means?\n>\n> If the returned data is large or takes a lot of work to convert to text,\n> this could happen, since EXPLAIN ANALYZE doesn't bother to format the\n> tuples for display. How big are those bytea columns, on average?\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYou should also keep in mind that JDBC uses prepared statements, so you have to explain analyze accordingly.DaveOn Thu, Oct 8, 2009 at 5:18 PM, Tom Lane <[email protected]> wrote:\nG B <[email protected]> writes:\n\n> How can  explain-analyze return significantly much faster than other means?\n\nIf the returned data is large or takes a lot of work to convert to text,\nthis could happen, since EXPLAIN ANALYZE doesn't bother to format the\ntuples for display.  How big are those bytea columns, on average?\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 9 Oct 2009 07:57:38 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain Analyze returns faster than psql or JDBC calls." } ]
[ { "msg_contents": "Hey folks,\n\nCentOS / PostgreSQL shop over here.\n\nI'm hitting 3 of my favorite lists with this, so here's hoping that\nthe BCC trick is the right way to do it :-)\n\nWe've just discovered thanks to a new Munin plugin\nhttp://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html\nthat our production DB is completely maxing out in I/O for about a 3\nhour stretch from 6am til 9am\nThis is \"device utilization\" as per the last graph at the above link.\n\nLoad went down for a while but is now between 70% and 95% sustained.\nWe've only had this plugin going for less than a day so I don't really\n have any more data going back further. But we've suspected a disk\nissue for some time - just have not been able to prove it.\n\nOur system\nIBM 3650 - quad 2Ghz e5405 Xeon\n8K SAS RAID Controller\n6 x 300G 15K/RPM SAS Drives\n/dev/sda - 2 drives configured as a RAID 1 for 300G for the OS\n/dev/sdb - 3 drives configured as RAID5 for 600G for the DB\n1 drive as a global hot spare\n\n/dev/sdb is the one that is maxing out.\n\nWe need to have a very serious look at fixing this situation. But we\ndon't have the money to be experimenting with solutions that won't\nsolve our problem. And our budget is fairly limited.\n\nIs there a public library somewhere of disk subsystems and their\nperformance figures? Done with some semblance of a standard\nbenchmark?\n\nOne benchmark I am partial to is this one :\nhttp://wiki.postgresql.org/wiki/PgCon_2009/Greg_Smith_Hardware_Benchmarking_notes#dd_test\n\nOne thing I am thinking of in the immediate term is taking the RAID5 +\nhot spare and converting it to RAID10 with the same amount of storage.\n Will that perform much better?\n\nIn general we are planning to move away from RAID5 toward RAID10.\n\nWe also have on order an external IBM array (don't have the exact name\non hand but model number was 3000) with 12 drive bays. We ordered it\nwith just 4 x SATAII drives, and were going to put it on a different\nsystem as a RAID10. These are just 7200 RPM drives - the goal was\ncheaper storage because the SAS drives are about twice as much per\ndrive, and it is only a 300G drive versus the 1T SATA2 drives. IIRC\nthe SATA2 drives are about $200 each and the SAS 300G drives about\n$500 each.\n\nSo I have 2 thoughts with this 12 disk array. 1 is to fill it up\nwith 12 x cheap SATA2 drives and hope that even though the spin-rate\nis a lot slower, that the fact that it has more drives will make it\nperform better. But somehow I am doubtful about that. The other\nthought is to bite the bullet and fill it up with 300G SAS drives.\n\nany thoughts here? recommendations on what to do with a tight budget?\n It could be the answer is that I just have to go back to the bean\ncounters and tell them we have no choice but to start spending some\nreal money. But on what? 
And how do I prove that this is the only\nchoice?\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n", "msg_date": "Fri, 9 Oct 2009 12:45:14 -0400", "msg_from": "Alan McKay <[email protected]>", "msg_from_op": true, "msg_subject": "disk I/O problems and Solutions" }, { "msg_contents": "----- \"Alan McKay\" <[email protected]> escreveu:\n> CentOS / PostgreSQL shop over here.\n> \n> Our system\n> IBM 3650 - quad 2Ghz e5405 Xeon\n> 8K SAS RAID Controller\n> 6 x 300G 15K/RPM SAS Drives\n> /dev/sda - 2 drives configured as a RAID 1 for 300G for the OS\n> /dev/sdb - 3 drives configured as RAID5 for 600G for the DB\n> 1 drive as a global hot spare\n> \n> /dev/sdb is the one that is maxing out.\n\nWhat are you calling \"maxing out\"? Excess IOPS, MB/s or high response times?\nEach of these have different approaches when trying to find out a solution.\n \n> Is there a public library somewhere of disk subsystems and their\n> performance figures? Done with some semblance of a standard\n> benchmark?\n\nyou should try using iostat or sar utilities. Both can give you complete reports of your online disk activity and probably were the tools in the backend used by your tool as the frontend.\n\nIt's very important to figure out that the percentage seen is all about CPU time used when in an I/O operation. If you have 100% you have to worry but not too desperatelly.\nWhat matters most for me is the disk operation response time and queue size. If you have these numbers increasing then your database performance will suffer.\n\nAlways check the man pages for iostat to understand what those numbers are all about.\n \n> One thing I am thinking of in the immediate term is taking the RAID5\n> +\n> hot spare and converting it to RAID10 with the same amount of\n> storage.\n> Will that perform much better?\n\nUsually yes for write operations because the raid controller doesn't have to calculate parity for the spare disk. You'll have some improvements in the disk seek time and your database will be snapier if you have an OLTP application.\n\nRAID5 can handle more IOPS, otherwise. It can be good for your pg_xlog directory, but the amount of disk space needed for WAL is just a small amount.\n\n> In general we are planning to move away from RAID5 toward RAID10.\n> \n> We also have on order an external IBM array (don't have the exact\n> name\n> on hand but model number was 3000) with 12 drive bays. We ordered it\n> with just 4 x SATAII drives, and were going to put it on a different\n> system as a RAID10. These are just 7200 RPM drives - the goal was\n> cheaper storage because the SAS drives are about twice as much per\n> drive, and it is only a 300G drive versus the 1T SATA2 drives. IIRC\n> the SATA2 drives are about $200 each and the SAS 300G drives about\n> $500 each.\n\nI think it's a good choice.\n\n> So I have 2 thoughts with this 12 disk array. 1 is to fill it up\n> with 12 x cheap SATA2 drives and hope that even though the spin-rate\n> is a lot slower, that the fact that it has more drives will make it\n> perform better. But somehow I am doubtful about that. The other\n> thought is to bite the bullet and fill it up with 300G SAS drives.\n> \n> any thoughts here? recommendations on what to do with a tight\n> budget?\n\nTake you new storage system when it arrives, make it RAID10 and administer it using LVM in Linux.\nIf you need greater performance later you will be able to make stripes between raid arrays.\n\nRegards\n\nFlavio Henrique A. 
Gurgel\nConsultor -- 4Linux\ntel. 55-11-2125.4765\nfax. 55-11-2125.4777\nwww.4linux.com.br\n", "msg_date": "Fri, 9 Oct 2009 16:03:42 -0300 (BRT)", "msg_from": "Flavio Henrique Araque Gurgel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk I/O problems and Solutions" }, { "msg_contents": "On Fri, Oct 9, 2009 at 10:45 AM, Alan McKay <[email protected]> wrote:\n> Hey folks,\n>\n> CentOS / PostgreSQL shop over here.\n>\n> I'm hitting 3 of my favorite lists with this, so here's hoping that\n> the BCC trick is the right way to do it :-)\n\nI added pgsql-performance back in in my reply so we can share with the\nrest of the class.\n\n> We've just discovered thanks to a new Munin plugin\n> http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html\n> that our production DB is completely maxing out in I/O for about a 3\n> hour stretch from 6am til 9am\n> This is \"device utilization\" as per the last graph at the above link.\n\nWhat does vmstat, sar, or top have to say about it? If you're at 100%\nIO Wait, then yeah, your disk subsystem is your bottleneck.\n\n> Our system\n> IBM 3650 - quad 2Ghz e5405 Xeon\n> 8K SAS RAID Controller\n\nDoes this RAID controller have a battery backed cache on it?\n\n> 6 x 300G 15K/RPM SAS Drives\n> /dev/sda - 2 drives configured as a RAID 1 for 300G for the OS\n> /dev/sdb - 3 drives configured as RAID5 for 600G for the DB\n> 1 drive as a global hot spare\n>\n> /dev/sdb is the one that is maxing out.\n\nYeah, with RAID-5 that's not surprising. Especially if you've got\neven a decent / small percentage of writes in the mix, RAID-5 is gonna\nbe pretty slow.\n\n> We need to have a very serious look at fixing this situation.   But we\n> don't have the money to be experimenting with solutions that won't\n> solve our problem.  And our budget is fairly limited.\n>\n> Is there a public library somewhere of disk subsystems and their\n> performance figures?  Done with some semblance of a standard\n> benchmark?\n\nNot that I know of, and if there is, I'm as eager as you to find it.\n\nThis mailing list's archives are as close as I've come to finding it.\n\n> One benchmark I am partial to is this one :\n> http://wiki.postgresql.org/wiki/PgCon_2009/Greg_Smith_Hardware_Benchmarking_notes#dd_test\n>\n> One thing I am thinking of in the immediate term is taking the RAID5 +\n> hot spare and converting it to RAID10 with the same amount of storage.\n>  Will that perform much better?\n\nAlmost certainly.\n\n> In general we are planning to move away from RAID5 toward RAID10.\n>\n> We also have on order an external IBM array (don't have the exact name\n> on hand but model number was 3000) with 12 drive bays.  We ordered it\n> with just 4 x SATAII drives, and were going to put it on a different\n> system as a RAID10.  These are just 7200 RPM drives - the goal was\n> cheaper storage because the SAS drives are about twice as much per\n> drive, and it is only a 300G drive versus the 1T SATA2 drives.   IIRC\n> the SATA2 drives are about $200 each and the SAS 300G drives about\n> $500 each.\n\n> So I have 2 thoughts with this 12 disk array.   1 is to fill it up\n> with 12 x cheap SATA2 drives and hope that even though the spin-rate\n> is a lot slower, that the fact that it has more drives will make it\n> perform better.  But somehow I am doubtful about that.   The other\n> thought is to bite the bullet and fill it up with 300G SAS drives.\n\nI'd give the SATA drives a try. 
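To see whether they keep up, a rough before/after check along the lines of the\ndd test you linked might be (just a sketch -- point it at a filesystem on the\narray in question, and size the file at roughly twice your 8GB of RAM so\ncaching doesn't skew the result; 2000000 8kB blocks is about 16GB):\n\n    dd if=/dev/zero of=/path/on/array/ddfile bs=8k count=2000000\n    dd if=/path/on/array/ddfile of=/dev/null bs=8k\n\nGNU dd prints an MB/s figure at the end of each run, which gives you a crude\nsequential write and read number to compare configurations with.\n\n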
If they aren't fast enough, then\neverybody in the office gets a free / cheap drive upgrade in their\ndesktop machine. More drives == faster RAID-10 up to the point you\nsaturate your controller / IO bus on your machine\n", "msg_date": "Fri, 9 Oct 2009 13:22:57 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk I/O problems and Solutions" }, { "msg_contents": "On Fri, Oct 9, 2009 at 9:45 AM, Alan McKay <[email protected]> wrote:\n> We've just discovered thanks to a new Munin plugin\n> http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html\n> that our production DB is completely maxing out in I/O for about a 3\n> hour stretch from 6am til 9am\n> This is \"device utilization\" as per the last graph at the above link.\n\nAs Flavio mentioned, we really need to know if it's seek limited or\nbandwidth limited, but I suspect it's seek limited. Actual data from\nvmstat or sar would be helpful.\n\nAlso knowing what kind of raid controller is being used and whether or\nnot it has a BBU or not would be useful.\n\nAnd finally, you didn't mention what version of CentOS or PostgreSQL.\n\n> One thing I am thinking of in the immediate term is taking the RAID5 +\n> hot spare and converting it to RAID10 with the same amount of storage.\n>  Will that perform much better?\n\nDepends on how the array is IO limited. But in general, RAID10 >\nRAID5 in terms of performance.\n\n> So I have 2 thoughts with this 12 disk array.   1 is to fill it up\n> with 12 x cheap SATA2 drives and hope that even though the spin-rate\n> is a lot slower, that the fact that it has more drives will make it\n> perform better.  But somehow I am doubtful about that.   The other\n> thought is to bite the bullet and fill it up with 300G SAS drives.\n\nNot a bad idea. Keep in mind that your 15k drives can seek about\ntwice as fast as 7200 rpm drives, so you'll probably need close to\ntwice as many to match performance with the same configuration.\n\nIf you're random IO limited, though, RAID5 will only write about as\nfast as a single disk (but sometimes a LOT slower!) - a 12-disk RAID10\nwill write about 6 times faster than a single disk. So overall, the\n12 disk 7.2k RAID10 array should be significantly faster than the 3\ndisk 15k RAID5 array.\n\n> any thoughts here?  recommendations on what to do with a tight budget?\n>  It could be the answer is that I just have to go back to the bean\n> counters and tell them we have no choice but to start spending some\n> real money.  But on what?  And how do I prove that this is the only\n> choice?\n\nIt's hard to say without knowing all the information. One free\npossibility would be to move the log data onto the RAID1 from the\nRAID5, thus splitting up your database load over all of your disks.\nYou can do this by moving the pg_xlog folder to the RAID1 array and\nsymlink it back to your data folder. Should be able to try this with\njust a few seconds of downtime.\n\n-Dave\n", "msg_date": "Fri, 9 Oct 2009 16:08:56 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk I/O problems and Solutions" }, { "msg_contents": "\n> \n>> any thoughts here?  recommendations on what to do with a tight budget?\n>>  It could be the answer is that I just have to go back to the bean\n>> counters and tell them we have no choice but to start spending some\n>> real money.  But on what?  
And how do I prove that this is the only\n>> choice?\n> \n> It's hard to say without knowing all the information. One free\n> possibility would be to move the log data onto the RAID1 from the\n> RAID5, thus splitting up your database load over all of your disks.\n> You can do this by moving the pg_xlog folder to the RAID1 array and\n> symlink it back to your data folder. Should be able to try this with\n> just a few seconds of downtime.\n> \n\nDo the above first.\nThen, on your sdb, set the scheduler to 'deadline'\nIf it is ext3, mount sdb as 'writeback,noatime'.\n\nIf you have your pg_xlog on your RAID 5, using ext3 in 'ordered' mode, then\nyou are going to be continuously throwing small writes at it. If this is\nthe case then the above configuration changes will easily double your\nperformance, most likely.\n\n\n> -Dave\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 9 Oct 2009 19:46:10 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: disk I/O problems and Solutions" } ]
[ { "msg_contents": "I have a system where it would be very useful for the primary keys for\na few tables to be UUIDs (actually MD5s of files, but UUID seems to be\nthe best 128-bit type available). What is the expected performance of\nusing a UUID as a primary key which will have numerous foreign\nreferences to it, versus using a 64-bit int (32-bit isn't big enough)?\n\n From the uuid.c in adt, it looks like a UUID is just stored as 8\nconsecutive bytes, and are compared using memcmp, whereas an int uses\nprimitive CPU instructions for comparison. Is that a significant\nissue with foreign key performance, or is it mostly just the size that\nthe key would take in all related tables?\n", "msg_date": "Fri, 9 Oct 2009 11:56:24 -0500", "msg_from": "tsuraan <[email protected]>", "msg_from_op": true, "msg_subject": "UUID as primary key" }, { "msg_contents": "On 10/09/2009 12:56 PM, tsuraan wrote:\n> I have a system where it would be very useful for the primary keys for\n> a few tables to be UUIDs (actually MD5s of files, but UUID seems to be\n> the best 128-bit type available). What is the expected performance of\n> using a UUID as a primary key which will have numerous foreign\n> references to it, versus using a 64-bit int (32-bit isn't big enough)?\n>\n> > From the uuid.c in adt, it looks like a UUID is just stored as 8\n> consecutive bytes, and are compared using memcmp, whereas an int uses\n> primitive CPU instructions for comparison. Is that a significant\n> issue with foreign key performance, or is it mostly just the size that\n> the key would take in all related tables?\n> \n\nThe most significant impact is that it takes up twice as much space, \nincluding the primary key index. This means fewer entries per block, \nwhich means slower scans and/or more blocks to navigate through. Still, \ncompared to the rest of the overhead of an index row or a table row, it \nis low - I think it's more important to understand whether you can get \naway with using a sequential integer, in which case UUID is unnecessary \noverhead - or whether you are going to need UUID anyways. If you need \nUUID anyways - having two primary keys is probably not worth it.\n\nCheers,\nmark\n\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Fri, 09 Oct 2009 14:30:17 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UUID as primary key" }, { "msg_contents": "> The most significant impact is that it takes up twice as much space,\n> including the primary key index. This means fewer entries per block,\n> which means slower scans and/or more blocks to navigate through. Still,\n> compared to the rest of the overhead of an index row or a table row, it\n> is low - I think it's more important to understand whether you can get\n> away with using a sequential integer, in which case UUID is unnecessary\n> overhead - or whether you are going to need UUID anyways. If you need\n> UUID anyways - having two primary keys is probably not worth it.\n\nOk, that's what I was hoping. Out of curiosity, is there a preferred\nway to store 256-bit ints in postgres? At that point, is a bytea the\nmost reasonable choice, or is there a better way to do it?\n", "msg_date": "Sat, 10 Oct 2009 00:14:27 -0500", "msg_from": "tsuraan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UUID as primary key" }, { "msg_contents": "On 10/10/2009 01:14 AM, tsuraan wrote:\n>> The most significant impact is that it takes up twice as much space,\n>> including the primary key index. 
This means fewer entries per block,\n>> which means slower scans and/or more blocks to navigate through. Still,\n>> compared to the rest of the overhead of an index row or a table row, it\n>> is low - I think it's more important to understand whether you can get\n>> away with using a sequential integer, in which case UUID is unnecessary\n>> overhead - or whether you are going to need UUID anyways. If you need\n>> UUID anyways - having two primary keys is probably not worth it.\n>> \n> Ok, that's what I was hoping. Out of curiosity, is there a preferred\n> way to store 256-bit ints in postgres? At that point, is a bytea the\n> most reasonable choice, or is there a better way to do it?\n> \n\nDo you need to be able to do queries on it? Numeric should be able to \nstore 256-bit integers.\n\nIf you don't need to do queries on it, an option I've considered in the \npast is to break it up into 4 x int64. Before UUID was supported, I had \nseriously considered storing UUID as 2 x int64. Now that UUID is \nsupported, you might also abuse UUID where 1 x 256-bit = 2 x UUID.\n\nIf you want it to be seemless and fully optimal, you would introduce a \nnew int256 type (or whatever the name of the type you are trying to \nrepresent). Adding new types to PostgreSQL is not that hard. This would \nallow queries (=, <>, <, >) as well.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Sat, 10 Oct 2009 11:40:33 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UUID as primary key" }, { "msg_contents": "On Oct 10, 2009, at 10:40 AM, Mark Mielke wrote:\n> On 10/10/2009 01:14 AM, tsuraan wrote:\n>>> The most significant impact is that it takes up twice as much space,\n>>> including the primary key index. This means fewer entries per block,\n>>> which means slower scans and/or more blocks to navigate through. \n>>> Still,\n>>> compared to the rest of the overhead of an index row or a table \n>>> row, it\n>>> is low - I think it's more important to understand whether you \n>>> can get\n>>> away with using a sequential integer, in which case UUID is \n>>> unnecessary\n>>> overhead - or whether you are going to need UUID anyways. If you \n>>> need\n>>> UUID anyways - having two primary keys is probably not worth it.\n>>>\n>> Ok, that's what I was hoping. Out of curiosity, is there a preferred\n>> way to store 256-bit ints in postgres? At that point, is a bytea the\n>> most reasonable choice, or is there a better way to do it?\n>>\n>\n> Do you need to be able to do queries on it? Numeric should be able \n> to store 256-bit integers.\n>\n> If you don't need to do queries on it, an option I've considered in \n> the past is to break it up into 4 x int64. Before UUID was \n> supported, I had seriously considered storing UUID as 2 x int64. \n> Now that UUID is supported, you might also abuse UUID where 1 x 256- \n> bit = 2 x UUID.\n>\n> If you want it to be seemless and fully optimal, you would \n> introduce a new int256 type (or whatever the name of the type you \n> are trying to represent). Adding new types to PostgreSQL is not \n> that hard. This would allow queries (=, <>, <, >) as well.\n\n\nIf you want an example of that, we had Command Prompt create a full \nset of hash datatypes (SHA*, and I think md5). That stuff should be \non pgFoundry; if it's not drop me a note at [email protected] and \nI'll get it added.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! 
www.distributed.net Team #1828\n\n\n", "msg_date": "Fri, 16 Oct 2009 11:30:51 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UUID as primary key" }, { "msg_contents": "decibel escribi�:\n\n> >If you want it to be seemless and fully optimal, you would\n> >introduce a new int256 type (or whatever the name of the type you\n> >are trying to represent). Adding new types to PostgreSQL is not\n> >that hard. This would allow queries (=, <>, <, >) as well.\n> \n> If you want an example of that, we had Command Prompt create a full\n> set of hash datatypes (SHA*, and I think md5). That stuff should be\n> on pgFoundry; if it's not drop me a note at [email protected]\n> and I'll get it added.\n\nIt's at project \"shatypes\".\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 16 Oct 2009 19:01:56 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UUID as primary key" }, { "msg_contents": "On Oct 10, 2009, at 10:40 AM, Mark Mielke wrote:\n> On 10/10/2009 01:14 AM, tsuraan wrote:\n>>> The most significant impact is that it takes up twice as much space,\n>>> including the primary key index. This means fewer entries per block,\n>>> which means slower scans and/or more blocks to navigate through. \n>>> Still,\n>>> compared to the rest of the overhead of an index row or a table \n>>> row, it\n>>> is low - I think it's more important to understand whether you \n>>> can get\n>>> away with using a sequential integer, in which case UUID is \n>>> unnecessary\n>>> overhead - or whether you are going to need UUID anyways. If you \n>>> need\n>>> UUID anyways - having two primary keys is probably not worth it.\n>>>\n>> Ok, that's what I was hoping. Out of curiosity, is there a preferred\n>> way to store 256-bit ints in postgres? At that point, is a bytea the\n>> most reasonable choice, or is there a better way to do it?\n>>\n>\n> Do you need to be able to do queries on it? Numeric should be able \n> to store 256-bit integers.\n>\n> If you don't need to do queries on it, an option I've considered in \n> the past is to break it up into 4 x int64. Before UUID was \n> supported, I had seriously considered storing UUID as 2 x int64. \n> Now that UUID is supported, you might also abuse UUID where 1 x 256- \n> bit = 2 x UUID.\n>\n> If you want it to be seemless and fully optimal, you would \n> introduce a new int256 type (or whatever the name of the type you \n> are trying to represent). Adding new types to PostgreSQL is not \n> that hard. This would allow queries (=, <>, <, >) as well.\n\n\nIf you want an example of that, we had Command Prompt create a full \nset of hash datatypes (SHA*, and I think md5). That stuff should be \non pgFoundry; if it's not drop me a note at [email protected] and \nI'll get it added.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Fri, 16 Oct 2009 22:51:50 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UUID as primary key" } ]
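A minimal sketch of the approach discussed in this thread: an MD5 file digest stored in the built-in uuid type and referenced by a child table. The table and column names here are invented for illustration only:

CREATE TABLE files (
    file_md5  uuid PRIMARY KEY,            -- 16 bytes on disk, vs 8 for a bigint key
    path      text NOT NULL
);

CREATE TABLE file_tags (
    file_md5  uuid NOT NULL REFERENCES files (file_md5),
    tag       text NOT NULL
);

-- An MD5 digest is exactly 32 hex characters, which the uuid input
-- routine accepts directly; md5() below only stands in for a digest
-- computed outside the database.
INSERT INTO files (file_md5, path)
VALUES (md5('example file contents')::uuid, '/tmp/example');

Every foreign key column such as file_tags.file_md5 pays the same 16-byte cost per row, which is the size overhead described above.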
[ { "msg_contents": "I am seeking advice on what the best setup for the following would be.\n\n \n\nMy company provides a hosted web calendaring solution for school\ndistricts. For each school district we have a separate database. Each\ndatabase has 57 tables. There are a total of 649 fields in those\ntables. Here is a table of the different kinds of field and how many\nthere are:\n\n \n\ntime without time zone\n\nbytea\n\ndate\n\nsmallint\n\nboolean\n\ninteger\n\ntimestamp without time zone\n\nnumeric\n\ntext\n\n9\n\n4\n\n8\n\n1\n\n79\n\n195\n\n36\n\n8\n\n309\n\n \n\n \n\nOver the next couple of months we will be creating an instance of our\nsolution for each public school district in the US which is around\n18,000. That means currently we would be creating 18,000 databases (all\non one server right now - which is running 8.4). I am assuming this is\nprobably not the best way of doing things.\n\n \n\nI have read up on schemas and it looks like a good change to make would\nbe to create 1 database with 18,000 schemas.\n\n \n\nWould that be a good idea? What sort of issues should I be aware of\n(administrative, management, performance, etc...)? Is that too many\nschemas to put into 1 database? What are the limits on the number of\ndatabases and schemas you can create?\n\n \n\nShould I try to re-engineer things so that all 18,000 instances only use\n1 database and 1 schema?\n\n \n\nLet me know if you need any more info.\n\n \n\nAny advice and information would be greatly appreciated.\n\n \n\nRegards,\n\n \n\nScott Otis\n\nCIO / Lead Developer\n\nIntand\n\nwww.intand.com\n\n \n\n\n\n\n\n\n\n\n\n\n\nI am seeking advice on what the best setup for the following\nwould be.\n \nMy company provides a hosted web calendaring solution for\nschool districts.  For each school district we have a separate database. \nEach database has 57 tables.  There are a total of 649 fields in those\ntables.  Here is a table of the different kinds of field and how many\nthere are:\n \n\n\n\ntime without time zone\nbytea\ndate\nsmallint\nboolean\ninteger\ntimestamp without time zone\nnumeric\ntext\n\n\n9\n4\n8\n1\n79\n195\n36\n8\n309\n\n\n\n \n \nOver the next couple of months we will be creating an\ninstance of our solution for each public school district in the US which is around\n18,000.  That means currently we would be creating 18,000 databases (all\non one server right now – which is running 8.4).  I am assuming this\nis probably not the best way of doing things.\n \nI have read up on schemas and it looks like a good change to\nmake would be to create 1 database with 18,000 schemas.\n \nWould that be a good idea?  What sort of issues should I\nbe aware of (administrative, management, performance, etc…)?  Is\nthat too many schemas to put into 1 database?  What are the limits on the\nnumber of databases and schemas you can create?\n \nShould I try to re-engineer things so that all 18,000\ninstances only use 1 database and 1 schema?\n \nLet me know if you need any more info.\n \nAny advice and information would be greatly appreciated.\n \nRegards,\n \nScott Otis\nCIO / Lead Developer\nIntand\nwww.intand.com", "msg_date": "Fri, 9 Oct 2009 10:46:49 -0700", "msg_from": "\"Scott Otis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Databases vs Schemas" }, { "msg_contents": "Scott Otis wrote:\n>\n> I am seeking advice on what the best setup for the following would be.\n>\n> \n>\n> My company provides a hosted web calendaring solution for school \n> districts. 
For each school district we have a separate database. \n> Each database has 57 tables.\n>\n....\n>\n> Over the next couple of months we will be creating an instance of our \n> solution for each public school district in the US which is around \n> 18,000. \n>\n\n \nWhy are you trying to keep all this information on one server? It seems \nlike you have such perfectly independent silos of data, why not take the \nopportunity to scale out horizontally? It's usually a lot cheaper to buy \n4 machines of power x than one machine of power (4x).\n", "msg_date": "Fri, 09 Oct 2009 11:18:09 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases vs Schemas" }, { "msg_contents": "On Fri, Oct 9, 2009 at 1:46 PM, Scott Otis <[email protected]> wrote:\n> Over the next couple of months we will be creating an instance of our solution for each public school district in the US which is around 18,000.  That means currently we would be creating 18,000 databases (all on one server right now – which is running 8.4).  I am assuming this is probably not the best way of doing things.\n\nSchema advantages:\n*) maintenance advantages; all functions/trigger functions can be\nshared. HUGE help if you use them\n*) can query shared data between schemas without major headaches\n*) a bit more efficiency especially if private data areas are small.\nkinda analogous to processes vs threads\n*) Can manage the complete system without changing database sessions.\nThis is the kicker IMO.\n\nDatabase Advantages:\n*) More discrete. Easier to distinctly create, dump, drop, or move to\nseparate server\n*) Smaller system catalogs might give efficiency benefits\n\nmerlin\n", "msg_date": "Fri, 9 Oct 2009 17:02:20 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases vs Schemas" }, { "msg_contents": "\n\n\nOn 10/9/09 2:02 PM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On Fri, Oct 9, 2009 at 1:46 PM, Scott Otis <[email protected]> wrote:\n>> Over the next couple of months we will be creating an instance of our\n>> solution for each public school district in the US which is around 18,000. \n>> That means currently we would be creating 18,000 databases (all on one server\n>> right now ­ which is running 8.4).  I am assuming this is probably not the\n>> best way of doing things.\n> \n> Schema advantages:\n> *) maintenance advantages; all functions/trigger functions can be\n> shared. HUGE help if you use them\n> *) can query shared data between schemas without major headaches\n> *) a bit more efficiency especially if private data areas are small.\n> kinda analogous to processes vs threads\n> *) Can manage the complete system without changing database sessions.\n> This is the kicker IMO.\n> \n> Database Advantages:\n> *) More discrete. Easier to distinctly create, dump, drop, or move to\n> separate server\n> *) Smaller system catalogs might give efficiency benefits\n> \n\nI'm concerned how a system with 57 * 18000 > 1M tables will function.\n\nI've got 200,000 tables in one db (8.4), and some tools barely work. The\nsystem catalogs get inefficient when large and psql especially has trouble.\nTab completion takes forever, even if I make a schema \"s\" with one table in\nit and type \"s.\" and try and tab complete -- its as if its scanning all\nwithout a schema qualifier or using an index. 
Sometimes it does not match\nvalid tables at all, and sometimes regex matching fails too ('\\dt\nschema.*_*_*' intermittently flakes out if it returns a lot of matches).\nOther than that the number of tables doesn't seem to cause much performance\ntrouble. The only exception is constraint exclusion which is fundamentally\nbroken with too many tables on the performance and memory consumption side.\n\nHaving a lot of tables really makes me wish VACUUM, ANALYZE, and other\nmaintenance tools could partially matched object names with regex though.\n\nOn the other hand, lots of databases probably has performance drawbacks too.\nAnd its maintenance drawbacks are even bigger.\n\nI certainly don't see any reason at all to try and put all of these in one\nschema. The only useful choices are schemas vs databases. I'd go for\nschemas unless the performance issues there are a problem. Schemas can be\ndumped/restored/backed up independent of one another easily too.\n\n> merlin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 9 Oct 2009 19:50:42 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases vs Schemas" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> I've got 200,000 tables in one db (8.4), and some tools barely work. The\n> system catalogs get inefficient when large and psql especially has trouble.\n> Tab completion takes forever, even if I make a schema \"s\" with one table in\n> it and type \"s.\" and try and tab complete -- its as if its scanning all\n> without a schema qualifier or using an index.\n\nThe tab-completion queries have never been vetted for performance\nparticularly :-(\n\nJust out of curiosity, how much does this help?\n\nalter function pg_table_is_visible(oid) cost 10;\n\n(You'll need to do it as superuser --- if it makes things worse, just\nset the cost back to 1.)\n\n> Sometimes it does not match\n> valid tables at all, and sometimes regex matching fails too ('\\dt\n> schema.*_*_*' intermittently flakes out if it returns a lot of matches).\n\nThere are some arbitrary \"LIMIT 1000\" clauses in those queries, which\nprobably explains this ... but taking them out would likely cause\nlibreadline to get indigestion ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Oct 2009 23:11:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases vs Schemas " }, { "msg_contents": "On Fri, Oct 9, 2009 at 10:50 PM, Scott Carey <[email protected]> wrote:\n> On 10/9/09 2:02 PM, \"Merlin Moncure\" <[email protected]> wrote:\n>\n>> On Fri, Oct 9, 2009 at 1:46 PM, Scott Otis <[email protected]> wrote:\n>>> Over the next couple of months we will be creating an instance of our\n>>> solution for each public school district in the US which is around 18,000.\n>>> That means currently we would be creating 18,000 databases (all on one server\n>>> right now ­ which is running 8.4).  I am assuming this is probably not the\n>>> best way of doing things.\n>>\n>> Schema advantages:\n>> *) maintenance advantages; all functions/trigger functions can be\n>> shared.  
HUGE help if you use them\n>> *) can query shared data between schemas without major headaches\n>> *) a bit more efficiency especially if private data areas are small.\n>> kinda analogous to processes vs threads\n>> *) Can manage the complete system without changing database sessions.\n>> This is the kicker IMO.\n>>\n>> Database Advantages:\n>> *) More discrete.  Easier to distinctly create, dump, drop, or move to\n>> separate server\n>> *) Smaller system catalogs might give efficiency benefits\n>>\n>\n> I'm concerned how a system with 57 * 18000 > 1M tables will function.\n>\n> I've got 200,000 tables in one db (8.4), and some tools barely work.  The\n> system catalogs get inefficient when large and psql especially has trouble.\n> Tab completion takes forever, even if I make a schema \"s\" with one table in\n> it and type \"s.\" and try and tab complete -- its as if its scanning all\n> without a schema qualifier or using an index.  Sometimes it does not match\n> valid tables at all, and sometimes regex matching fails too ('\\dt\n> schema.*_*_*' intermittently flakes out if it returns a lot of matches).\n> Other than that the number of tables doesn't seem to cause much performance\n> trouble.  The only exception is constraint exclusion which is fundamentally\n> broken with too many tables on the performance and memory consumption side.\n>\n> Having a lot of tables really makes me wish VACUUM, ANALYZE, and other\n> maintenance tools could partially matched object names with regex though.\n>\n> On the other hand, lots of databases probably has performance drawbacks too.\n> And its maintenance drawbacks are even bigger.\n>\n> I certainly don't see any reason at all to try and put all of these in one\n> schema.  The only useful choices are schemas vs databases.  I'd go for\n> schemas unless the performance issues there are a problem.   Schemas can be\n> dumped/restored/backed up independent of one another easily too.\n\nThey can, but: drop schema foo cascade; is a different operation than:\ndrop database foo; The first is kinda surgical and the second is a\nrocket launcher. What would you rather have in battle?\n\nFor the record, just about every database I've ever designed has had\nsome of what I call 'de facto table partitioning' using\nschemas/search_path tricks. I'm working on a system right now that is\ngoing to get very large and if I started to run into psql problems I'd\nprobably look at patching it, maybe \\set an option to simplify some\nof the queries.\n\nmerlin\n", "msg_date": "Sat, 10 Oct 2009 09:26:19 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases vs Schemas" }, { "msg_contents": "On Fri, Oct 9, 2009 at 11:11 PM, Tom Lane <[email protected]> wrote:\n\n> Scott Carey <[email protected]> writes:\n> > I've got 200,000 tables in one db (8.4), and some tools barely work. 
The\n> > system catalogs get inefficient when large and psql especially has\n> trouble.\n> > Tab completion takes forever, even if I make a schema \"s\" with one table\n> in\n> > it and type \"s.\" and try and tab complete -- its as if its scanning all\n> > without a schema qualifier or using an index.\n>\n> The tab-completion queries have never been vetted for performance\n> particularly :-(\n>\n> Just out of curiosity, how much does this help?\n>\n> alter function pg_table_is_visible(oid) cost 10;\n>\n> (You'll need to do it as superuser --- if it makes things worse, just\n> set the cost back to 1.)\n>\n> > Sometimes it does not match\n> > valid tables at all, and sometimes regex matching fails too ('\\dt\n> > schema.*_*_*' intermittently flakes out if it returns a lot of matches).\n>\n> There are some arbitrary \"LIMIT 1000\" clauses in those queries, which\n> probably explains this ... but taking them out would likely cause\n> libreadline to get indigestion ...\n>\n> regards, tom lane\n\n\nWe ran into this exact situation with a pg 8.3 database and a very large\nnumber of tables. psql would wait for 20 to 30 seconds if the user was\nunlucky enough to hit the tab key. After doing some research with query\nlogging, explain analyze and some trial and error, we came to the same\nconclusion. Altering the cost for the pg_table_is_visible function to 10\nfixed our performance problem immediately. It appears that when the cost\nwas set to 1, that the query optimizer first ran the function over the\nentire pg_class table. By increasing the cost, it now only runs the\nfunction over the rows returned by the other items in the where clause.\n\n-chris\n\nOn Fri, Oct 9, 2009 at 11:11 PM, Tom Lane <[email protected]> wrote:\nScott Carey <[email protected]> writes:\n> I've got 200,000 tables in one db (8.4), and some tools barely work.  The\n> system catalogs get inefficient when large and psql especially has trouble.\n> Tab completion takes forever, even if I make a schema \"s\" with one table in\n> it and type \"s.\" and try and tab complete -- its as if its scanning all\n> without a schema qualifier or using an index.\n\nThe tab-completion queries have never been vetted for performance\nparticularly :-(\n\nJust out of curiosity, how much does this help?\n\nalter function pg_table_is_visible(oid) cost 10;\n\n(You'll need to do it as superuser --- if it makes things worse, just\nset the cost back to 1.)\n\n> Sometimes it does not match\n> valid tables at all, and sometimes regex matching fails too ('\\dt\n> schema.*_*_*' intermittently flakes out if it returns a lot of matches).\n\nThere are some arbitrary \"LIMIT 1000\" clauses in those queries, which\nprobably explains this ... but taking them out would likely cause\nlibreadline to get indigestion ...\n\n                        regards, tom lane We ran into this exact situation with a pg 8.3 database and a very large number of tables.  psql would wait for 20 to 30 seconds if the user was unlucky enough to hit the tab key.  After doing some research with query logging, explain analyze and some trial and error, we came to the same conclusion.  Altering the cost for the pg_table_is_visible function to 10 fixed our performance problem immediately.  It appears that when the cost was set to 1, that the query optimizer first ran the function over the entire pg_class table.  
By increasing the cost, it now only runs the function over the rows returned by the other items in the where clause.\n-chris", "msg_date": "Sat, 10 Oct 2009 10:44:35 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases vs Schemas" }, { "msg_contents": "On Sat, Oct 10, 2009 at 8:44 AM, Chris Kratz <[email protected]> wrote:\n>>\n>> alter function pg_table_is_visible(oid) cost 10;\n>>\n>> (You'll need to do it as superuser --- if it makes things worse, just\n>> set the cost back to 1.)\n>>\n>> > Sometimes it does not match\n>> > valid tables at all, and sometimes regex matching fails too ('\\dt\n>> > schema.*_*_*' intermittently flakes out if it returns a lot of matches).\n>>\n>> There are some arbitrary \"LIMIT 1000\" clauses in those queries, which\n>> probably explains this ... but taking them out would likely cause\n>> libreadline to get indigestion ...\n>>\n>>                        regards, tom lane\n>\n>\n> We ran into this exact situation with a pg 8.3 database and a very large\n> number of tables.  psql would wait for 20 to 30 seconds if the user was\n> unlucky enough to hit the tab key.  After doing some research with query\n> logging, explain analyze and some trial and error, we came to the same\n> conclusion.  Altering the cost for the pg_table_is_visible function to 10\n> fixed our performance problem immediately.  It appears that when the cost\n> was set to 1, that the query optimizer first ran the function over the\n> entire pg_class table.  By increasing the cost, it now only runs the\n> function over the rows returned by the other items in the where clause.\n\nWe have a large number of objects in our db and this worked for me\ntoo! Thanks a lot. As a side note, it also makes slony create set\nstuff run really really slow as well, and I'm guessing there's a\nsimilar trick for the slony functions I can add and see if it helps.\n", "msg_date": "Sat, 10 Oct 2009 15:34:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Databases vs Schemas" } ]
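To make the schema-per-district layout concrete, here is a rough sketch; the schema and table names are invented, and each district gets its own schema with the application pointing search_path at it per connection. The last statement is the psql tab-completion workaround from this thread and must be run as superuser:

CREATE SCHEMA district_12345;

CREATE TABLE district_12345.calendar_event (
    event_id   serial PRIMARY KEY,
    title      text NOT NULL,
    starts_at  timestamp NOT NULL
);

-- at connection time, per tenant:
SET search_path TO district_12345, public;

-- tab-completion workaround discussed above (superuser only):
ALTER FUNCTION pg_table_is_visible(oid) COST 10;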
[ { "msg_contents": "Hello All --\n\nI have implemented table partitioning in order to increase performance \nin my database-backed queuing system. My queue is partitioned by \njob_id into separate tables that all inherit from a base \"queue\" table.\n\nThings were working swimmingly until my system started managing \nthousands of jobs. As soon as I had ~1070 queue subtables, queries to \nthe main queue table would fail with:\n\n\t\"out of shared memory HINT: You might need to increase \nmax_locks_per_transaction\"\n\nI found this thread on the archives:\n\n\thttp://archives.postgresql.org/pgsql-general/2007-08/msg01992.php\n\nStill, I have a few questions/problems:\n\n1) We've already tuned postgres to use ~2BG of shared memory -- which \nis SHMAX for our kernel. If I try to increase \nmax_locks_per_transaction, postgres will not start because our shared \nmemory is exceeding SHMAX. How can I increase \nmax_locks_per_transaction without having my shared memory requirements \nincrease?\n\n2) Why do I need locks for all of my subtables, anyways? I have \nconstraint_exclusion on. The query planner tells me that I am only \nusing three tables for the queries that are failing. Why are all of \nthe locks getting allocated? Is there any way to prevent this?\n\nMany thanks in advance for any and all help anyone can provide!\n\nBrian\n", "msg_date": "Sat, 10 Oct 2009 13:55:19 -0700", "msg_from": "Brian Karlak <[email protected]>", "msg_from_op": true, "msg_subject": "table partitioning & max_locks_per_transaction" }, { "msg_contents": "Brian Karlak <[email protected]> writes:\n> \t\"out of shared memory HINT: You might need to increase \n> max_locks_per_transaction\"\n\nYou want to do what it says ...\n\n> 1) We've already tuned postgres to use ~2BG of shared memory -- which \n> is SHMAX for our kernel. If I try to increase \n> max_locks_per_transaction, postgres will not start because our shared \n> memory is exceeding SHMAX. How can I increase \n> max_locks_per_transaction without having my shared memory requirements \n> increase?\n\nBack off shared_buffers a bit? 2GB is certainly more than enough\nto run Postgres in.\n\n> 2) Why do I need locks for all of my subtables, anyways? I have \n> constraint_exclusion on. The query planner tells me that I am only \n> using three tables for the queries that are failing. Why are all of \n> the locks getting allocated?\n\nBecause the planner has to look at all the subtables and make sure\nthat they in fact don't match the query. So it takes AccessShareLock\non each one, which is the minimum strength lock needed to be sure that\nthe table definition isn't changing underneath you. Without *some* lock\nit's not really safe to examine the table at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Oct 2009 22:56:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table partitioning & max_locks_per_transaction " }, { "msg_contents": "\nTom --\n\nThanks for the pointers and advice. We've started by doubling \nmax_locks and halving shared_buffers, we'll see how it goes.\n\nBrian\n\nOn Oct 10, 2009, at 7:56 PM, Tom Lane wrote:\n\n> Brian Karlak <[email protected]> writes:\n>> \t\"out of shared memory HINT: You might need to increase\n>> max_locks_per_transaction\"\n>\n> You want to do what it says ...\n>\n>> 1) We've already tuned postgres to use ~2BG of shared memory -- which\n>> is SHMAX for our kernel. 
If I try to increase\n>> max_locks_per_transaction, postgres will not start because our shared\n>> memory is exceeding SHMAX. How can I increase\n>> max_locks_per_transaction without having my shared memory \n>> requirements\n>> increase?\n>\n> Back off shared_buffers a bit? 2GB is certainly more than enough\n> to run Postgres in.\n>\n>> 2) Why do I need locks for all of my subtables, anyways? I have\n>> constraint_exclusion on. The query planner tells me that I am only\n>> using three tables for the queries that are failing. Why are all of\n>> the locks getting allocated?\n>\n> Because the planner has to look at all the subtables and make sure\n> that they in fact don't match the query. So it takes AccessShareLock\n> on each one, which is the minimum strength lock needed to be sure that\n> the table definition isn't changing underneath you. Without *some* \n> lock\n> it's not really safe to examine the table at all.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Sun, 11 Oct 2009 09:44:31 -0700", "msg_from": "Brian Karlak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table partitioning & max_locks_per_transaction " } ]
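A quick way to watch the behaviour described here, namely the planner taking AccessShareLock on every child even when constraint exclusion prunes them from the plan. This is only a sketch and assumes the parent table is named queue as in the original post; locks are held until the transaction ends, so the pg_locks count reflects them:

BEGIN;
EXPLAIN SELECT * FROM queue WHERE job_id = 42;
-- roughly one AccessShareLock per subtable, plus a few catalog locks
SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();
ROLLBACK;

The fix itself is what the hint says: raise max_locks_per_transaction in postgresql.conf (a restart is required) and, if shared memory is capped by SHMMAX, win the space back by lowering shared_buffers.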
[ { "msg_contents": "I have a 30,000,000 records table, counts the record number to need for 40 seconds. \nThe table has a primary key on column id;\n\nperf=# explain select count(*) from test;\n...\n-----------------------------------------\nAggregate (cost=603702.80..603702.81 rows=1 width=0)\n  -> Seq scan on test (cost=0.00..527681.04 rows=30408704 width=0)\n...\nperf=# select count(*) from test;\ncount\n------------\n30408704\n\nperf=#\n\n\nThe postgresql database uses the table full scan.but in oracle, the similar SQL uses the index full scanning,speed quickly many than postgresql.  \n\npostgresql's optimizer whether to have the necessity to make the adjustment? \n \n\n\n\n\n ___________________________________________________________ \n 好玩贺卡等你发,邮箱贺卡全新上线! \nhttp://card.mail.cn.yahoo.com/\nI have a 30,000,000 records table, counts the record number to need for 40 seconds. The table has a primary key on column id;perf=# explain select count(*) from test;...-----------------------------------------Aggregate (cost=603702.80..603702.81 rows=1 width=0)  -> Seq scan on test (cost=0.00..527681.04 rows=30408704 width=0)...perf=# select count(*) from test;count------------30408704perf=#The postgresql database uses the table full scan.but in oracle, the similar SQL uses the index full scanning,speed quickly many than postgresql.  postgresql's optimizer whether to have the necessity to make the adjustment?  \n 好玩贺卡等你发,邮箱贺卡全新上线!", "msg_date": "Sun, 11 Oct 2009 18:26:10 +0800 (CST)", "msg_from": "=?utf-8?B?5pet5paMIOijtA==?= <[email protected]>", "msg_from_op": true, "msg_subject": "table full scan or index full scan?" }, { "msg_contents": "旭斌 裴 escreveu:\n> The postgresql database uses the table full scan.but in oracle, the\n> similar SQL uses the index full scanning,speed quickly many than\n> postgresql. \n> \nThis was discussed many times on the pgsql mailing lists. Search the archives.\nAlso, take a look at [1].\n\n[1] http://wiki.postgresql.org/wiki/Slow_Counting\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Sun, 18 Oct 2009 03:00:22 -0200", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table full scan or index full scan?" }, { "msg_contents": "I don't know if this will help. In my days with Oracle and Sybase, it use\nto work for both. Just give PG a hint like this\nselect count(*) from test where id > 0;\n\nYou can try it while you wait for other on the list with more knowledge for\na different idea.\n\nMel\n\n\nOn Sun, Oct 11, 2009 at 4:26 AM, 旭斌 裴 <[email protected]> wrote:\n\n>\n> I have a 30,000,000 records table, counts the record number to need for 40\n> seconds.\n> The table has a primary key on column id;\n>\n> perf=# explain select count(*) from test;\n> ...\n> -----------------------------------------\n> Aggregate (cost=603702.80..603702.81 rows=1 width=0)\n> -> Seq scan on test (cost=0.00..527681.04 rows=30408704 width=0)\n> ...\n> perf=# select count(*) from test;\n> count\n> ------------\n> 30408704\n>\n> perf=#\n>\n>\n> The postgresql database uses the table full scan.but in oracle, the similar\n> SQL uses the index full scanning,speed quickly many than postgresql.\n>\n> postgresql's optimizer whether to have the necessity to make the\n> adjustment?\n>\n>\n>\n> ------------------------------\n> 好玩贺卡等你发,邮箱贺卡全新上线!<http://cn.rd.yahoo.com/mail_cn/tagline/card/*http://card.mail.cn.yahoo.com/>\n\nI don't know if this will help.  In my days with Oracle and Sybase, it use to work for both.  
Just give PG a hint like thisselect count(*) from test where id > 0;You can try it while you wait for other on the list with more knowledge for a different idea.\nMelOn Sun, Oct 11, 2009 at 4:26 AM, 旭斌 裴 <[email protected]> wrote:\nI have a 30,000,000 records table, counts the record number to need for 40 seconds. The table has a primary key on column id;\nperf=# explain select count(*) from test;...-----------------------------------------Aggregate (cost=603702.80..603702.81 rows=1 width=0)  -> Seq scan on test (cost=0.00..527681.04 rows=30408704 width=0)\n...perf=# select count(*) from test;count------------30408704perf=#The postgresql database uses the table full scan.but in oracle, the similar SQL uses the index full scanning,speed quickly many than postgresql.  \npostgresql's optimizer whether to have the necessity to make the adjustment?  \n 好玩贺卡等你发,邮箱贺卡全新上线!", "msg_date": "Sat, 17 Oct 2009 23:07:36 -0600", "msg_from": "Melton Low <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table full scan or index full scan?" } ]
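For completeness, the usual workaround from the wiki page cited earlier is to keep the count in a side table maintained by triggers. A sketch against the test table from this thread, with invented helper names:

CREATE TABLE test_rowcount (n bigint NOT NULL);
INSERT INTO test_rowcount SELECT count(*) FROM test;

CREATE OR REPLACE FUNCTION test_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE test_rowcount SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE test_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_count
AFTER INSERT OR DELETE ON test
FOR EACH ROW EXECUTE PROCEDURE test_count_trig();

-- instant, instead of scanning 30 million rows:
SELECT n FROM test_rowcount;

The trade-off is that every insert or delete now serializes on the single counter row, so it only pays off when reads of the count vastly outnumber writes.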
[ { "msg_contents": "Hi,\nWe have table 'user' and one column define status of user, currently\nthere are 2 valuse 'A' acitve and 'D' deleted.\nCurrently we define column as domain type ( status_domain with two\npossible values)\nbut I'm not sure is it good solution, maybe it is better create\nseparate table e.g account_stats and use foreign key in account table?\nIn our databases we prefer 'domain' solution for column with low\ncardinality and when we do not need extra fields related to values\n(e.g description). I think such solution should give us better\nperformance when rows are updated/inserted but I've never make real\ncomparision to separate table. Havy you made such comparision?\n\nRegards\nMichal Szymanski\nhttp://blog.szymanskich.net\nhttp://techblog.szymanskich.net\n", "msg_date": "Sun, 11 Oct 2009 07:40:53 -0700 (PDT)", "msg_from": "Michal Szymanski <[email protected]>", "msg_from_op": true, "msg_subject": "Domain vs table" }, { "msg_contents": "I think I've found answer to my question\nhttp://www.commandprompt.com/blogs/joshua_drake/2009/01/fk_check_enum_or_domain_that_is_the_question/\n\nMichal Szymanski\n", "msg_date": "Sun, 11 Oct 2009 08:31:14 -0700 (PDT)", "msg_from": "Michal Szymanski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Domain vs table" }, { "msg_contents": "On Sun, Oct 11, 2009 at 11:31 AM, Michal Szymanski <[email protected]> wrote:\n> I think I've found answer to my question\n> http://www.commandprompt.com/blogs/joshua_drake/2009/01/fk_check_enum_or_domain_that_is_the_question/\n>\n\nI mostly agree with the comments on the blog but let me throw a couple\nmore points out there:\n\n*) It is possible (although not necessarily advised) to manipulate\nenums via direct manipulation of pg_enum\n*) enums are the best solution if you need natural ordering properties\nfor indexing purposes\n*) domains can't be used in arrays\n*) foreign key is obviously preferred if you need store more related\nproperties than the value itself\n*) if the constraint is complicated (not just a list of values), maybe\ndomain/check constraint is preferred, possibly hooked to immutable\nfunction\n\nmerlin\n", "msg_date": "Tue, 20 Oct 2009 07:55:34 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Domain vs table" }, { "msg_contents": "On Oct 20, 2009, at 6:55 AM, Merlin Moncure wrote:\n> On Sun, Oct 11, 2009 at 11:31 AM, Michal Szymanski \n> <[email protected]> wrote:\n>> I think I've found answer to my question\n>> http://www.commandprompt.com/blogs/joshua_drake/2009/01/ \n>> fk_check_enum_or_domain_that_is_the_question/\n>>\n>\n> I mostly agree with the comments on the blog but let me throw a couple\n> more points out there:\n>\n> *) It is possible (although not necessarily advised) to manipulate\n> enums via direct manipulation of pg_enum\n> *) enums are the best solution if you need natural ordering properties\n> for indexing purposes\n> *) domains can't be used in arrays\n> *) foreign key is obviously preferred if you need store more related\n> properties than the value itself\n> *) if the constraint is complicated (not just a list of values), maybe\n> domain/check constraint is preferred, possibly hooked to immutable\n> function\n\n\nAlso, if the base table will have a very large number of rows \n(probably at least 10M), the overhead of a text datatype over a \nsmallint or int/oid gets to be very large.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! 
www.distributed.net Team #1828\n\n\n", "msg_date": "Sun, 25 Oct 2009 15:43:56 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Domain vs table" } ]
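The three options weighed in this thread, side by side, as a sketch; type and table names are invented, and the enum form needs 8.3 or later:

-- 1. domain over char(1) with a CHECK constraint
CREATE DOMAIN account_status AS char(1)
    CHECK (VALUE IN ('A', 'D'));

-- 2. native enum, which also gives a natural sort order for indexing
CREATE TYPE account_status_enum AS ENUM ('A', 'D');

-- 3. lookup table plus foreign key, if a description or other
--    per-status attributes will ever be needed
CREATE TABLE account_status_ref (
    status      char(1) PRIMARY KEY,
    description text NOT NULL
);

CREATE TABLE account (
    account_id bigserial PRIMARY KEY,
    status     char(1) NOT NULL REFERENCES account_status_ref (status)
);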
[ { "msg_contents": "Hi\n\nI used the vacuumdb command. But in its output I cann't see VACUUM.\n\nThe last part of output is\n\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n1 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: free space map contains 768 pages in 392 relations\nDETAIL: A total of 6720 page slots are in use (including overhead).\n6720 page slots are required to track all free space.\nCurrent limits are: 153600 page slots, 1000 relations, using 965 kB.\n\n\nI think if the process is complete then last part of output is VACUUM.\nIs it means the process is not complete?\nPls help me to clear my doubts.\n\n-- \nRegards\nSoorjith P\n\nHiI used the vacuumdb command. But in its output I cann't see VACUUM. The last part of output isDETAIL:  0 dead row versions cannot be removed yet.There were 0 unused item pointers.\n\n1 pages contain useful free space.0 pages are entirely empty.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  free space map contains 768 pages in 392 relationsDETAIL:  A total of 6720 page slots are in use (including overhead).\n\n6720 page slots are required to track all free space.Current limits are:  153600 page slots, 1000 relations, using 965 kB.I think if the process is complete then last part of output is VACUUM.\n\nIs it means the process is not complete?Pls help me to clear my doubts.-- RegardsSoorjith P", "msg_date": "Sun, 11 Oct 2009 21:01:33 +0530", "msg_from": "soorjith p <[email protected]>", "msg_from_op": true, "msg_subject": "vacuumdb command" }, { "msg_contents": "soorjith p wrote:\n> I used the vacuumdb command. But in its output I cann't see VACUUM.\n> \n> The last part of output is\n> \n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 1 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: free space map contains 768 pages in 392 relations\n> DETAIL: A total of 6720 page slots are in use (including overhead).\n> 6720 page slots are required to track all free space.\n> Current limits are: 153600 page slots, 1000 relations, using 965 kB.\n> \n> \n> I think if the process is complete then last part of output is VACUUM.\n> Is it means the process is not complete?\n\nNo. It is complete.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 11 Oct 2009 11:59:37 -0400", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuumdb command" } ]
[ { "msg_contents": "Hi,\n\nCan anybody highlight how to use unnest function from postgres 8.4 on\nmulti-dimensional array?\n\nBelow is the sample table structure:\n\nTable \"public.multi_array_test\"\n Column | Type | Modifiers\n---------+----------+-----------\n id | integer |\n user_id | bigint[] |\n\nSample data:\n\n 1 | {{3567559397,0},{3020933367,1},{2479094216,2},{3310282955,3}}\n\nRegards,\nNimesh.\n\nHi,Can anybody highlight how to use unnest function from postgres 8.4 on multi-dimensional array?Below is the sample table structure:Table \"public.multi_array_test\" Column  |   Type   | Modifiers\n---------+----------+----------- id      | integer  | user_id | bigint[] |Sample data:  1 | {{3567559397,0},{3020933367,1},{2479094216,2},{3310282955,3}}Regards,Nimesh.", "msg_date": "Mon, 12 Oct 2009 14:17:15 +0530", "msg_from": "Nimesh Satam <[email protected]>", "msg_from_op": true, "msg_subject": "Using unnest function on multi-dimensional array." }, { "msg_contents": "Hello\n\n2009/10/12 Nimesh Satam <[email protected]>:\n> Hi,\n>\n> Can anybody highlight how to use unnest function from postgres 8.4 on\n> multi-dimensional array?\n>\n> Below is the sample table structure:\n>\n> Table \"public.multi_array_test\"\n>  Column  |   Type   | Modifiers\n> ---------+----------+-----------\n>  id      | integer  |\n>  user_id | bigint[] |\n>\n> Sample data:\n>\n>   1 | {{3567559397,0},{3020933367,1},{2479094216,2},{3310282955,3}}\n>\n> Regards,\n> Nimesh.\n>\n\nuse generate_subscripts\n\npostgres=#\ncreate or replace function unnest2(anyarray)\nreturns setof anyelement as $$\nselect $1[i][j]\n from generate_subscripts($1,1) g1(i),\n generate_subscripts($1,2) g2(j);\n$$ language sql immutable;\n\npostgres=# select * from unnest2(array[[1,2],[3,4]]);\n unnest2\n---------\n 1\n 2\n 3\n 4\n(4 rows)\n\nregards\nPavel Stehule\n", "msg_date": "Mon, 12 Oct 2009 10:57:34 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using unnest function on multi-dimensional array." } ]
[ { "msg_contents": "In the below query both table has less than 1 million data. Can u tell me\nthe reason of this plan?\nwhy its takin extensive cost , seq scan and sorting?? wat is Materialize?\n\nselect 1 from service_detail\nleft join non_service_detail on non_service_detail_service_id =\nservice_detail.service_detail_id\n\n\n\nMerge Left Join (cost=62451.86..67379.08 rows=286789 width=0)\n Merge Cond: (service_detail.service_detail_id =\nnon_service_detail.non_service_detail_service_id)\n -> Sort (cost=18610.57..18923.27 rows=125077 width=8)\n Sort Key: service_detail.service_detail_id\n -> Seq Scan on service_detail (cost=0.00..6309.77 rows=125077\nwidth=8)\n -> Materialize (cost=43841.28..47426.15 rows=286789 width=8)\n -> Sort (cost=43841.28..44558.26 rows=286789 width=8)\n Sort Key: non_service_detail.non_service_detail_service_id\n -> Seq Scan on non_service_detail (cost=0.00..13920.89\nrows=286789 width=8)\n\nThanks,\nArvind S\n\n\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison\n\nIn the below query both table has less than 1 million data. Can u tell me the reason of this plan?why its takin extensive cost , seq scan and sorting?? wat is Materialize?select 1 from  service_detailleft join non_service_detail on non_service_detail_service_id = service_detail.service_detail_id\nMerge Left Join  (cost=62451.86..67379.08 rows=286789 width=0)  Merge Cond: (service_detail.service_detail_id = non_service_detail.non_service_detail_service_id)  ->  Sort  (cost=18610.57..18923.27 rows=125077 width=8)\n\n        Sort Key: service_detail.service_detail_id        ->  Seq Scan on service_detail  (cost=0.00..6309.77 rows=125077 width=8)  ->  Materialize  (cost=43841.28..47426.15 rows=286789 width=8)        ->  Sort  (cost=43841.28..44558.26 rows=286789 width=8)\n\n              Sort Key: non_service_detail.non_service_detail_service_id              ->  Seq Scan on non_service_detail  (cost=0.00..13920.89 rows=286789 width=8)Thanks,Arvind S\n\n\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"-Thomas Edison", "msg_date": "Mon, 12 Oct 2009 16:51:27 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance" }, { "msg_contents": "On Mon, Oct 12, 2009 at 12:21 PM, S Arvind <[email protected]> wrote:\n\n> In the below query both table has less than 1 million data. Can u tell me\n> the reason of this plan?\n> why its takin extensive cost , seq scan and sorting?? wat is Materialize?\n>\n> select 1 from service_detail\n> left join non_service_detail on non_service_detail_service_id =\n> service_detail.service_detail_id\n>\n>\n>\n> Merge Left Join (cost=62451.86..67379.08 rows=286789 width=0)\n> Merge Cond: (service_detail.service_detail_id =\n> non_service_detail.non_service_detail_service_id)\n> -> Sort (cost=18610.57..18923.27 rows=125077 width=8)\n> Sort Key: service_detail.service_detail_id\n> -> Seq Scan on service_detail (cost=0.00..6309.77 rows=125077\n> width=8)\n> -> Materialize (cost=43841.28..47426.15 rows=286789 width=8)\n> -> Sort (cost=43841.28..44558.26 rows=286789 width=8)\n> Sort Key: non_service_detail.non_service_detail_service_id\n> -> Seq Scan on non_service_detail (cost=0.00..13920.89\n> rows=286789 width=8)\n>\n> A) it is a left join, meaning - everything is pulled from left side,\nB) there are no conditions, so ... ... 
everything is pulled again from left\nside.\n\n\n\n\n-- \nGJ\n\nOn Mon, Oct 12, 2009 at 12:21 PM, S Arvind <[email protected]> wrote:\nIn the below query both table has less than 1 million data. Can u tell me the reason of this plan?why its takin extensive cost , seq scan and sorting?? wat is Materialize?select 1 from  service_detailleft join non_service_detail on non_service_detail_service_id = service_detail.service_detail_id\nMerge Left Join  (cost=62451.86..67379.08 rows=286789 width=0)  Merge Cond: (service_detail.service_detail_id = non_service_detail.non_service_detail_service_id)  ->  Sort  (cost=18610.57..18923.27 rows=125077 width=8)\n\n\n        Sort Key: service_detail.service_detail_id        ->  Seq Scan on service_detail  (cost=0.00..6309.77 rows=125077 width=8)  ->  Materialize  (cost=43841.28..47426.15 rows=286789 width=8)        ->  Sort  (cost=43841.28..44558.26 rows=286789 width=8)\n\n\n              Sort Key: non_service_detail.non_service_detail_service_id              ->  Seq Scan on non_service_detail  (cost=0.00..13920.89 rows=286789 width=8)A) it is a left join, meaning - everything is pulled from left side, \nB) there are no conditions, so ... ... everything is pulled again from left side. -- GJ", "msg_date": "Mon, 12 Oct 2009 13:30:12 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "I can understand left join, actually can any one tell me why sort operation\nis carried out and wat Materialize means...\nCan anyone explain me the mentioned plan with reason(s)?\n\n\n-Arvind S\n\n\n2009/10/12 Grzegorz Jaśkiewicz <[email protected]>\n\n>\n>\n> On Mon, Oct 12, 2009 at 12:21 PM, S Arvind <[email protected]> wrote:\n>\n>> In the below query both table has less than 1 million data. Can u tell me\n>> the reason of this plan?\n>> why its takin extensive cost , seq scan and sorting?? wat is Materialize?\n>>\n>> select 1 from service_detail\n>> left join non_service_detail on non_service_detail_service_id =\n>> service_detail.service_detail_id\n>>\n>>\n>>\n>> Merge Left Join (cost=62451.86..67379.08 rows=286789 width=0)\n>> Merge Cond: (service_detail.service_detail_id =\n>> non_service_detail.non_service_detail_service_id)\n>> -> Sort (cost=18610.57..18923.27 rows=125077 width=8)\n>> Sort Key: service_detail.service_detail_id\n>> -> Seq Scan on service_detail (cost=0.00..6309.77 rows=125077\n>> width=8)\n>> -> Materialize (cost=43841.28..47426.15 rows=286789 width=8)\n>> -> Sort (cost=43841.28..44558.26 rows=286789 width=8)\n>> Sort Key: non_service_detail.non_service_detail_service_id\n>> -> Seq Scan on non_service_detail (cost=0.00..13920.89\n>> rows=286789 width=8)\n>>\n>> A) it is a left join, meaning - everything is pulled from left side,\n> B) there are no conditions, so ... ... everything is pulled again from left\n> side.\n>\n>\n>\n>\n> --\n> GJ\n>\n\nI can understand left join, actually can any one tell me why sort operation is carried out and wat Materialize means...Can anyone explain me the mentioned plan with reason(s)?-Arvind S\n\n2009/10/12 Grzegorz Jaśkiewicz <[email protected]>\nOn Mon, Oct 12, 2009 at 12:21 PM, S Arvind <[email protected]> wrote:\n\n\nIn the below query both table has less than 1 million data. Can u tell me the reason of this plan?why its takin extensive cost , seq scan and sorting?? 
wat is Materialize?select 1 from  service_detailleft join non_service_detail on non_service_detail_service_id = service_detail.service_detail_id\nMerge Left Join  (cost=62451.86..67379.08 rows=286789 width=0)  Merge Cond: (service_detail.service_detail_id = non_service_detail.non_service_detail_service_id)  ->  Sort  (cost=18610.57..18923.27 rows=125077 width=8)\n\n\n\n\n        Sort Key: service_detail.service_detail_id        ->  Seq Scan on service_detail  (cost=0.00..6309.77 rows=125077 width=8)  ->  Materialize  (cost=43841.28..47426.15 rows=286789 width=8)        ->  Sort  (cost=43841.28..44558.26 rows=286789 width=8)\n\n\n\n\n              Sort Key: non_service_detail.non_service_detail_service_id              ->  Seq Scan on non_service_detail  (cost=0.00..13920.89 rows=286789 width=8)A) it is a left join, meaning - everything is pulled from left side, \n\n\nB) there are no conditions, so ... ... everything is pulled again from left side. -- GJ", "msg_date": "Mon, 12 Oct 2009 18:09:39 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance" }, { "msg_contents": "On Mon, 12 Oct 2009, S Arvind wrote:\n> I can understand left join, actually can any one tell me why sort operation is carried\n> out and wat Materialize means...\n> Can anyone explain me the mentioned plan with reason(s)?\n\n> Merge Left Join  (cost=62451.86..67379.08 rows=286789 width=0)\n>   Merge Cond: (a.id = b.id)\n>   ->  Sort  (cost=18610.57..18923.27 rows=125077 width=8)\n>         Sort Key: a.id\n>         ->  Seq Scan on a  (cost=0.00..6309.77 rows=125077 width=8)\n>   ->  Materialize  (cost=43841.28..47426.15 rows=286789 width=8)\n>         ->  Sort  (cost=43841.28..44558.26 rows=286789 width=8)\n>             Sort Key: b.id\n>            ->  Seq Scan on b (cost=0.00..13920.89 rows=286789 width=8)\n\nThis is a merge join. A merge join joins together two streams of data, \nwhere both streams are sorted, by placing the two streams side by side and \nadvancing through both streams finding matching rows. The algorithm can \nuse a pointer to a position in both of the streams, and advance the \npointer of the stream that has the earlier value according to the sort \norder, and therefore get all the matches.\n\nYou are performing a query over the whole of both of the tables, so the \ncheapest way to obtain a sorted stream of data is to do a full sequential \nscan of the whole table, bring it into memory, and sort it. An alternative \nwould be to follow a B-tree index if one was available on the correct \ncolumn, but that is usually more expensive unless the table is clustered \non the index or only a small portion of the table is to be read. If you \nhad put a \"LIMIT 10\" clause on the end of the query and had such an index, \nit would probably switch to that strategy instead.\n\nThe materialise step is effectively a buffer that allows one of the \nstreams to be rewound cheaply, which will be necessary if there are \nmultiple rows with the same value.\n\nDoes that answer your question?\n\nMatthew\n\n-- \n The only secure computer is one that's unplugged, locked in a safe,\n and buried 20 feet under the ground in a secret location...and i'm not\n even too sure about that one. 
--Dennis Huges, FBI", "msg_date": "Mon, 12 Oct 2009 14:01:55 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "Thanks very much Matthew , its more then my expectation...\n\nWithout changing the query is there any way to optimize it, like by changing\nthe pg configuration for handling these kind queries?\n\n-Arvind S\n\n\nOn Mon, Oct 12, 2009 at 6:31 PM, Matthew Wakeling <[email protected]>wrote:\n\n> On Mon, 12 Oct 2009, S Arvind wrote:\n>\n>> I can understand left join, actually can any one tell me why sort\n>> operation is carried\n>> out and wat Materialize means...\n>> Can anyone explain me the mentioned plan with reason(s)?\n>>\n>\n> Merge Left Join (cost=62451.86..67379.08 rows=286789 width=0)\n>> Merge Cond: (a.id = b.id)\n>> -> Sort (cost=18610.57..18923.27 rows=125077 width=8)\n>> Sort Key: a.id\n>> -> Seq Scan on a (cost=0.00..6309.77 rows=125077 width=8)\n>> -> Materialize (cost=43841.28..47426.15 rows=286789 width=8)\n>> -> Sort (cost=43841.28..44558.26 rows=286789 width=8)\n>> Sort Key: b.id\n>> -> Seq Scan on b (cost=0.00..13920.89 rows=286789 width=8)\n>>\n>\n> This is a merge join. A merge join joins together two streams of data,\n> where both streams are sorted, by placing the two streams side by side and\n> advancing through both streams finding matching rows. The algorithm can use\n> a pointer to a position in both of the streams, and advance the pointer of\n> the stream that has the earlier value according to the sort order, and\n> therefore get all the matches.\n>\n> You are performing a query over the whole of both of the tables, so the\n> cheapest way to obtain a sorted stream of data is to do a full sequential\n> scan of the whole table, bring it into memory, and sort it. An alternative\n> would be to follow a B-tree index if one was available on the correct\n> column, but that is usually more expensive unless the table is clustered on\n> the index or only a small portion of the table is to be read. If you had put\n> a \"LIMIT 10\" clause on the end of the query and had such an index, it would\n> probably switch to that strategy instead.\n>\n> The materialise step is effectively a buffer that allows one of the streams\n> to be rewound cheaply, which will be necessary if there are multiple rows\n> with the same value.\n>\n> Does that answer your question?\n>\n> Matthew\n>\n> --\n> The only secure computer is one that's unplugged, locked in a safe,\n> and buried 20 feet under the ground in a secret location...and i'm not\n> even too sure about that one. 
--Dennis Huges, FBI\n\nThanks very much Matthew , its more then my expectation...Without changing the query is there any way to optimize it, like by changing the pg configuration for handling these kind queries?-Arvind S\nOn Mon, Oct 12, 2009 at 6:31 PM, Matthew Wakeling <[email protected]> wrote:\nOn Mon, 12 Oct 2009, S Arvind wrote:\n\nI can understand left join, actually can any one tell me why sort operation is carried\nout and wat Materialize means...\nCan anyone explain me the mentioned plan with reason(s)?\n\n\n\n Merge Left Join  (cost=62451.86..67379.08 rows=286789 width=0)\n     Merge Cond: (a.id = b.id)\n     ->  Sort  (cost=18610.57..18923.27 rows=125077 width=8)\n         Sort Key: a.id\n         ->  Seq Scan on a  (cost=0.00..6309.77 rows=125077 width=8)\n     ->  Materialize  (cost=43841.28..47426.15 rows=286789 width=8)\n         ->  Sort  (cost=43841.28..44558.26 rows=286789 width=8)\n             Sort Key: b.id\n             ->  Seq Scan on b (cost=0.00..13920.89 rows=286789 width=8)\n\n\nThis is a merge join. A merge join joins together two streams of data, where both streams are sorted, by placing the two streams side by side and advancing through both streams finding matching rows. The algorithm can use a pointer to a position in both of the streams, and advance the pointer of the stream that has the earlier value according to the sort order, and therefore get all the matches.\n\nYou are performing a query over the whole of both of the tables, so the cheapest way to obtain a sorted stream of data is to do a full sequential scan of the whole table, bring it into memory, and sort it. An alternative would be to follow a B-tree index if one was available on the correct column, but that is usually more expensive unless the table is clustered on the index or only a small portion of the table is to be read. If you had put a \"LIMIT 10\" clause on the end of the query and had such an index, it would probably switch to that strategy instead.\n\nThe materialise step is effectively a buffer that allows one of the streams to be rewound cheaply, which will be necessary if there are multiple rows with the same value.\n\nDoes that answer your question?\n\nMatthew\n\n-- \nThe only secure computer is one that's unplugged, locked in a safe,\nand buried 20 feet under the ground in a secret location...and i'm not\neven too sure about that one.                         --Dennis Huges, FBI", "msg_date": "Mon, 12 Oct 2009 18:53:22 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance" }, { "msg_contents": "btw, what's the version of db ?\nwhat's the work_mem setting ?\n\ntry setting work_mem to higher value. As postgresql will fallback to disc\nsorting if the content doesn't fit in work_mem, which it probably doesn't\n(8.4+ show the memory usage for sorting, which your explain doesn't have).\n\nbtw, what's the version of db ?what's the work_mem setting ?try setting work_mem to higher value. As postgresql will fallback to disc sorting if the content doesn't fit in work_mem, which it probably doesn't (8.4+ show the memory usage for sorting, which your explain doesn't have).", "msg_date": "Mon, 12 Oct 2009 14:29:13 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "On Mon, 12 Oct 2009, Grzegorz Ja�kiewicz wrote:\n> try setting work_mem to higher value. 
As postgresql will fallback to disc sorting if the\n> content doesn't fit in work_mem, which it probably doesn't (8.4+ show the memory usage\n> for sorting, which your explain doesn't have).\n\nFor reference, here's the EXPLAIN:\n\n> Merge Left Join� (cost=62451.86..67379.08 rows=286789 width=0)\n> � Merge Cond: (a.id = b.id)\n> � ->� Sort� (cost=18610.57..18923.27 rows=125077 width=8)\n> ������� Sort Key: a.id\n> ������� ->� Seq Scan on a� (cost=0.00..6309.77 rows=125077 width=8)\n> � ->� Materialize� (cost=43841.28..47426.15 rows=286789 width=8)\n> ������� ->� Sort� (cost=43841.28..44558.26 rows=286789 width=8)\n> ����������� Sort Key: b.id\n> ���������� ->� Seq Scan on b (cost=0.00..13920.89 rows=286789 width=8)\n\nThis is an EXPLAIN, not an EXPLAIN ANALYSE. If it was an EXPLAIN ANALYSE, \nit would show how much memory was used, and whether it was a disc sort or \nan in-memory sort. As it is only an EXPLAIN, the query hasn't actually \nbeen run, and we have no information about whether the sort would be \nperformed on disc or not.\n\nMatthew\n\n-- \n Hi! You have reached 555-0129. None of us are here to answer the phone and \n the cat doesn't have opposing thumbs, so his messages are illegible. Please \n leave your name and message after the beep ...", "msg_date": "Mon, 12 Oct 2009 14:36:59 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "2009/10/12 Matthew Wakeling <[email protected]>\n\n> This is an EXPLAIN, not an EXPLAIN ANALYSE. If it was an EXPLAIN ANALYSE,\n> it would show how much memory was used, and whether it was a disc sort or an\n> in-memory sort. As it is only an EXPLAIN, the query hasn't actually been\n> run, and we have no information about whether the sort would be performed on\n> disc or not.\n>\n\ntrue, I was looking at it as if it was explain analyze output :)\nsorry.\n\n-- \nGJ\n\n2009/10/12 Matthew Wakeling <[email protected]>\nThis is an EXPLAIN, not an EXPLAIN ANALYSE. If it was an EXPLAIN ANALYSE, it would show how much memory was used, and whether it was a disc sort or an in-memory sort. 
As it is only an EXPLAIN, the query hasn't actually been run, and we have no information about whether the sort would be performed on disc or not.\n true, I was looking at it as if it was explain analyze output :)sorry.-- GJ", "msg_date": "Mon, 12 Oct 2009 14:40:14 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "Sorry guys, i sent the required plan....\n\n\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=62422.81..67345.85 rows=286487 width=0) (actual\ntime=1459.355..2538.538 rows=325998 loops=1)\n Merge Cond: (service_detail.service_detail_id =\nnon_service_detail.non_service_detail_service_id)\n -> Sort (cost=*18617*.60..18930.47 rows=125146 width=8) (actual\ntime=425.115..560.807 rows=125146 loops=1)\n Sort Key: service_detail.service_detail_id\n Sort Method: external merge Disk: 2912kB\n -> Seq Scan on service_detail (cost=0.00..6310.46 rows=125146\nwidth=8) (actual time=0.056..114.925 rows=125146 loops=1)\n -> Materialize (cost=43805.21..47386.30 rows=286487 width=8) (actual\ntime=1034.220..1617.313 rows=286491 loops=1)\n -> Sort (cost=*43805*.21..44521.43 rows=286487 width=8) (actual\ntime=1034.204..1337.708 rows=286491 loops=1)\n Sort Key: non_service_detail.non_service_detail_service_id\n Sort Method: external merge Disk: 6720kB\n -> Seq Scan on non_service_detail (cost=0.00..13917.87\nrows=286487 width=8) (actual time=0.063..248.950 rows=286491 loops=1)\n Total runtime: 2650.763 ms\n(12 rows)\n\n\n\n2009/10/12 Matthew Wakeling <[email protected]>\n\n> On Mon, 12 Oct 2009, Grzegorz Jaśkiewicz wrote:\n>\n>> try setting work_mem to higher value. As postgresql will fallback to disc\n>> sorting if the\n>> content doesn't fit in work_mem, which it probably doesn't (8.4+ show the\n>> memory usage\n>> for sorting, which your explain doesn't have).\n>>\n>\n> For reference, here's the EXPLAIN:\n>\n> Merge Left Join (cost=62451.86..67379.08 rows=286789 width=0)\n>> Merge Cond: (a.id = b.id)\n>> -> Sort (cost=18610.57..18923.27 rows=125077 width=8)\n>> Sort Key: a.id\n>> -> Seq Scan on a (cost=0.00..6309.77 rows=125077 width=8)\n>> -> Materialize (cost=43841.28..47426.15 rows=286789 width=8)\n>> -> Sort (cost=43841.28..44558.26 rows=286789 width=8)\n>> Sort Key: b.id\n>> -> Seq Scan on b (cost=0.00..13920.89 rows=286789 width=8)\n>>\n>\n> This is an EXPLAIN, not an EXPLAIN ANALYSE. If it was an EXPLAIN ANALYSE,\n> it would show how much memory was used, and whether it was a disc sort or an\n> in-memory sort. As it is only an EXPLAIN, the query hasn't actually been\n> run, and we have no information about whether the sort would be performed on\n> disc or not.\n>\n> Matthew\n>\n> --\n> Hi! You have reached 555-0129. None of us are here to answer the phone and\n> the cat doesn't have opposing thumbs, so his messages are illegible. Please\n> leave your name and message after the beep ...\n\nSorry guys, i sent the required plan....                                                                  
QUERY PLAN                                                                  ----------------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Left Join  (cost=62422.81..67345.85 rows=286487 width=0) (actual time=1459.355..2538.538 rows=325998 loops=1)   Merge Cond: (service_detail.service_detail_id = non_service_detail.non_service_detail_service_id)\n\n   ->  Sort  (cost=18617.60..18930.47 rows=125146 width=8) (actual time=425.115..560.807 rows=125146 loops=1)         Sort Key: service_detail.service_detail_id         Sort Method:  external merge  Disk: 2912kB\n\n         ->  Seq Scan on service_detail  (cost=0.00..6310.46 rows=125146 width=8) (actual time=0.056..114.925 rows=125146 loops=1)   ->  Materialize  (cost=43805.21..47386.30 rows=286487 width=8) (actual time=1034.220..1617.313 rows=286491 loops=1)\n\n         ->  Sort  (cost=43805.21..44521.43 rows=286487 width=8) (actual time=1034.204..1337.708 rows=286491 loops=1)               Sort Key: non_service_detail.non_service_detail_service_id               Sort Method:  external merge  Disk: 6720kB\n\n               ->  Seq Scan on non_service_detail  (cost=0.00..13917.87 rows=286487 width=8) (actual time=0.063..248.950 rows=286491 loops=1) Total runtime: 2650.763 ms(12 rows)\n\n2009/10/12 Matthew Wakeling <[email protected]>\nOn Mon, 12 Oct 2009, Grzegorz Jaśkiewicz wrote:\n\ntry setting work_mem to higher value. As postgresql will fallback to disc sorting if the\ncontent doesn't fit in work_mem, which it probably doesn't (8.4+ show the memory usage\nfor sorting, which your explain doesn't have).\n\n\nFor reference, here's the EXPLAIN:\n\n\n Merge Left Join  (cost=62451.86..67379.08 rows=286789 width=0)\n     Merge Cond: (a.id = b.id)\n     ->  Sort  (cost=18610.57..18923.27 rows=125077 width=8)\n         Sort Key: a.id\n         ->  Seq Scan on a  (cost=0.00..6309.77 rows=125077 width=8)\n     ->  Materialize  (cost=43841.28..47426.15 rows=286789 width=8)\n         ->  Sort  (cost=43841.28..44558.26 rows=286789 width=8)\n             Sort Key: b.id\n             ->  Seq Scan on b (cost=0.00..13920.89 rows=286789 width=8)\n\n\nThis is an EXPLAIN, not an EXPLAIN ANALYSE. If it was an EXPLAIN ANALYSE, it would show how much memory was used, and whether it was a disc sort or an in-memory sort. As it is only an EXPLAIN, the query hasn't actually been run, and we have no information about whether the sort would be performed on disc or not.\n\nMatthew\n\n-- \nHi! You have reached 555-0129. None of us are here to answer the phone and the cat doesn't have opposing thumbs, so his messages are illegible. Please leave your name and message after the beep ...", "msg_date": "Mon, 12 Oct 2009 20:15:47 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance" }, { "msg_contents": "Thanks Grzegorz,\n But work memory is for each process (connection) rt? so if i keep\nmore then 10MB will not affect the overall performance ?\n\nArvind S\n\n\n2009/10/12 Grzegorz Jaśkiewicz <[email protected]>\n\n> btw, what's the version of db ?\n> what's the work_mem setting ?\n>\n> try setting work_mem to higher value. As postgresql will fallback to disc\n> sorting if the content doesn't fit in work_mem, which it probably doesn't\n> (8.4+ show the memory usage for sorting, which your explain doesn't have).\n>\n>\n\nThanks Grzegorz,        But work memory is for each process (connection) rt? 
so if i keep more then 10MB will not affect the overall performance ?Arvind S2009/10/12 Grzegorz Jaśkiewicz <[email protected]>\nbtw, what's the version of db ?what's the work_mem setting ?try setting work_mem to higher value. As postgresql will fallback to disc sorting if the content doesn't fit in work_mem, which it probably doesn't (8.4+ show the memory usage for sorting, which your explain doesn't have).", "msg_date": "Mon, 12 Oct 2009 20:22:53 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance" }, { "msg_contents": "2009/10/12 S Arvind <[email protected]>\n\n> Thanks Grzegorz,\n> But work memory is for each process (connection) rt? so if i keep\n> more then 10MB will not affect the overall performance ?\n>\nit will. But the memory is only allocated when needed.\nYou can always set it before running that particular query, and than put it\nback to default value.\njust use SET work_mem=64MB\n\nMind you , postgresql requires more memory to sort same set of data on disc\nthan on memory. Your explain analyze indicates, that it used 2912kB , which\nmeans your work_mem value is set to some ridiculously low value. Put it up\nto 8MB or something, and retry.\n\n\n\n-- \nGJ\n\n2009/10/12 S Arvind <[email protected]>\nThanks Grzegorz,        But work memory is for each process (connection) rt? so if i keep more then 10MB will not affect the overall performance ?it will. But the memory is only allocated when needed. \nYou can always set it before running that particular query, and than put it back to default value.  just use SET work_mem=64MBMind you , postgresql requires more memory to sort same set of data on disc than on memory. Your explain analyze indicates, that it used 2912kB , which means your work_mem value is set to some ridiculously low value. Put it up to 8MB or something, and retry.\n-- GJ", "msg_date": "Mon, 12 Oct 2009 16:10:51 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" } ]
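A quick way to test Grzegorz's suggestion is a session-local override, so nothing in postgresql.conf has to change. The sketch below is illustrative only: the thread never shows the original query text, so the SELECT list and join are guessed from the merge condition in Arvind's plan, and 64MB is simply an assumed value comfortably above the roughly 10MB of on-disk sort files the plan reports.

SET work_mem = '64MB';   -- affects only this session
EXPLAIN ANALYZE
SELECT count(*)
FROM service_detail a
LEFT JOIN non_service_detail b
       ON a.service_detail_id = b.non_service_detail_service_id;
RESET work_mem;          -- back to the server default

If the two "Sort Method" lines switch from "external merge Disk" to "quicksort Memory", the undersized work_mem was indeed what the sorts were paying for; if the runtime barely moves, the cost lies elsewhere and raising work_mem server-wide would only consume memory across every connection.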
[ { "msg_contents": "We have performance problem with query on partitioned table when query\nuse order by and we want to use first/last rows from result set.\nMore detail description:\nWe have big table where each row is one telephone call (CDR).\nDefinitnion of this table look like this:\nCREATE TABLE accounting.cdr_full_partitioned (it is parrent table)\n(\n cdr_id bigint NOT NULL,\n id_crx_group_from bigint,\t\t\t\t\t\t-- identifier of user\n start_time_invite timestamp with time zone, -- start call time\n call_status VARCHAR\t\t\t\t\t\t\t-- FINF-call finished, FINC-call\nunfinished\n ..some extra data..\n)\n\nWe creating 12 partitions using 'start_time_invite' column, simply we\ncreate one partition for each month. We create costraints like this:\nALTER TABLE accounting.cdr_y2009_m09\n ADD CONSTRAINT y2009m09 CHECK (start_time_invite >= '2009-09-01\n00:00:00+02'::timestamp with time zone AND start_time_invite <\n'2009-10-01 00:00:00+02'::timestamp with time zone);\n\nand we define necessery indexes of course\n\nCREATE INDEX cdr_full_partitioned_y2009_m09_id_crx_group_to_key1\n ON accounting.cdr_full_partitioned_y2009_m09\n USING btree\n (id_crx_group_from, start_time_invite, call_status);\n\n\nThe problem appears when we want to select calls for specified user\nwith specified call_Status e.g:\n SELECT * FROM accounting.cdr_full_partitioned\n WHERE\n id_crx_group_from='522921' AND\n call_status='FINS' AND\n start_time_invite>='2009-09-28 00:00:00+02' AND\n start_time_invite<'2009-10-12 23:59:59+02' AND\n ORDER BY start_time_invite LIMIT '100' OFFSET 0\n\nyou can see execution plan http://szymanskich.net/pub/postgres/full.jpg\n as you see 20000 rows were selected and after were sorted what take\nvery long about 30-40s and after sorting it limit\nresult to 100 rows.\n\nUsing table without partition\n\n SELECT * FROM accounting.cdr_full WHERE\n(id_crx_group_from='522921') AND (\n call_status='FINS' ) AND (start_time_invite>='2009-01-28\n00:00:00+02')\n AND (start_time_invite<'2009-10-12 23:59:59+02') ORDER BY\nstart_time_invite LIMIT '100' OFFSET 0\n\nexecution plan is very simple\n\"Limit (cost=0.00..406.40 rows=100 width=456)\"\n\" -> Index Scan using\ncdr_full_crx_group_from_start_time_invite_status_ind on cdr_full\n(cost=0.00..18275.76 rows=4497 width=456)\"\n\" Index Cond: ((id_crx_group_from = 522921::bigint) AND\n(start_time_invite >= '2009-01-27 23:00:00+01'::timestamp with time\nzone) AND (start_time_invite < '2009-10-12 23:59:59+02'::timestamp\nwith time zone) AND ((call_status)::text = 'FINS'::text))\"\n\nit use index to fetch first 100 rows and it is super fast and take\nless than 0.5s. There is no rows sorting!\nI've tried to execute the same query on one partition:\n SELECT * FROM accounting.cdr_full_partitioned_y2009_m09\n WHERE (id_crx_group_from='509498') AND (\n call_status='FINS' ) AND (start_time_invite>='2009-09-01\n00:00:00+02')\n AND (start_time_invite<'2009-10-12 23:59:59+02')\n\nYou can see execution plan http://szymanskich.net/pub/postgres/ononeprtition.jpg\nand query is superfast because there is no sorting. The question is\nhow to speed up query when we use partitioning? So far I have not\nfound solution. 
I'm wonder how do you solve problems\nwhen result from partition must be sorted and after we want to display\nonly first/last 100 rows?\nWe can use own partitioning mechanism and partitioning data using\nid_crx_group_from and create dynamic query (depending on\nid_crx_group_from we can execute query on one partition) but it is not\nmost beautiful solution.\n\nMichal Szymanski\nhttp://blog.szymanskich.net\nhttp://techblog.freeconet.pl\n", "msg_date": "Mon, 12 Oct 2009 07:14:37 -0700 (PDT)", "msg_from": "Michal Szymanski <[email protected]>", "msg_from_op": true, "msg_subject": "Performance with sorting and LIMIT on partitioned table" }, { "msg_contents": "On Mon, Oct 12, 2009 at 10:14 AM, Michal Szymanski <[email protected]> wrote:\n> We have performance problem with query on partitioned table when query\n> use order by and we want to use first/last rows from result set.\n> More detail description:\n> We have big table where each row is one telephone call (CDR).\n> Definitnion of this table look like this:\n> CREATE TABLE accounting.cdr_full_partitioned  (it is parrent table)\n> (\n>  cdr_id bigint NOT NULL,\n>  id_crx_group_from bigint,                                             -- identifier of user\n>  start_time_invite timestamp with time zone,   -- start call time\n>  call_status VARCHAR                                                   -- FINF-call finished, FINC-call\n> unfinished\n>  ..some extra data..\n> )\n>\n> We creating 12 partitions using 'start_time_invite' column, simply we\n> create one partition for each month. We create costraints like this:\n> ALTER TABLE accounting.cdr_y2009_m09\n>  ADD CONSTRAINT y2009m09 CHECK (start_time_invite >= '2009-09-01\n> 00:00:00+02'::timestamp with time zone AND start_time_invite <\n> '2009-10-01 00:00:00+02'::timestamp with time zone);\n>\n> and we define necessery indexes of course\n>\n> CREATE INDEX cdr_full_partitioned_y2009_m09_id_crx_group_to_key1\n>  ON accounting.cdr_full_partitioned_y2009_m09\n>  USING btree\n>  (id_crx_group_from, start_time_invite, call_status);\n>\n>\n> The problem appears when we want to select calls for specified user\n> with specified call_Status e.g:\n>  SELECT * FROM accounting.cdr_full_partitioned\n>   WHERE\n>   id_crx_group_from='522921' AND\n>   call_status='FINS' AND\n>   start_time_invite>='2009-09-28 00:00:00+02' AND\n>   start_time_invite<'2009-10-12 23:59:59+02'   AND\n>  ORDER BY start_time_invite  LIMIT '100' OFFSET 0\n>\n> you can see execution plan  http://szymanskich.net/pub/postgres/full.jpg\n>  as you see 20000 rows were selected and after were sorted what take\n> very long about 30-40s and after sorting it limit\n> result to 100 rows.\n>\n> Using table without partition\n>\n>  SELECT * FROM accounting.cdr_full    WHERE\n> (id_crx_group_from='522921') AND (\n>   call_status='FINS' ) AND (start_time_invite>='2009-01-28\n> 00:00:00+02')\n>   AND (start_time_invite<'2009-10-12 23:59:59+02') ORDER BY\n> start_time_invite  LIMIT '100' OFFSET 0\n>\n> execution plan is very simple\n> \"Limit  (cost=0.00..406.40 rows=100 width=456)\"\n> \"  ->  Index Scan using\n> cdr_full_crx_group_from_start_time_invite_status_ind on cdr_full\n> (cost=0.00..18275.76 rows=4497 width=456)\"\n> \"        Index Cond: ((id_crx_group_from = 522921::bigint) AND\n> (start_time_invite >= '2009-01-27 23:00:00+01'::timestamp with time\n> zone) AND (start_time_invite < '2009-10-12 23:59:59+02'::timestamp\n> with time zone) AND ((call_status)::text = 'FINS'::text))\"\n>\n> it use index to fetch first 100 rows and it is super 
fast and take\n> less than 0.5s. There is no rows sorting!\n> I've tried to execute the same query on one partition:\n>  SELECT * FROM accounting.cdr_full_partitioned_y2009_m09\n>  WHERE (id_crx_group_from='509498') AND (\n>   call_status='FINS' ) AND (start_time_invite>='2009-09-01\n> 00:00:00+02')\n>   AND (start_time_invite<'2009-10-12 23:59:59+02')\n>\n> You can see execution plan http://szymanskich.net/pub/postgres/ononeprtition.jpg\n> and query is superfast because there is no sorting. The question is\n> how to speed up query when we use partitioning? So far I have not\n> found solution. I'm wonder how do you solve problems\n> when result from partition must be sorted and after we want to display\n> only first/last 100 rows?\n> We can use own partitioning mechanism and partitioning data using\n> id_crx_group_from and create dynamic query (depending on\n> id_crx_group_from we can execute query on one partition) but it is not\n> most beautiful solution.\n\nYeah - unfortunately the query planner is not real smart about\npartitioned tables yet. I can't make anything of the JPG link you\nposted. Can you post the EXPLAIN ANALYZE output for the case that is\nslow? What PG version is this?\n\n...Robert\n", "msg_date": "Sun, 18 Oct 2009 20:52:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with sorting and LIMIT on partitioned table" }, { "msg_contents": "> On Mon, Oct 12, 2009 at 10:14 AM, Michal Szymanski<[email protected]> wrote:\n> \n>> We have performance problem with query on partitioned table when query\n>> use order by and we want to use first/last rows from result set.\n>> More detail description:\n>> We have big table where each row is one telephone call (CDR).\n>> Definitnion of this table look like this:\n>> CREATE TABLE accounting.cdr_full_partitioned (it is parrent table)\n>> (\n>> cdr_id bigint NOT NULL,\n>> id_crx_group_from bigint, -- identifier of user\n>> start_time_invite timestamp with time zone, -- start call time\n>> call_status VARCHAR -- FINF-call finished, FINC-call\n>> unfinished\n>> ..some extra data..\n>> )\n>>\n>> We creating 12 partitions using 'start_time_invite' column, simply we\n>> create one partition for each month. 
We create costraints like this:\n>> ALTER TABLE accounting.cdr_y2009_m09\n>> ADD CONSTRAINT y2009m09 CHECK (start_time_invite>= '2009-09-01\n>> 00:00:00+02'::timestamp with time zone AND start_time_invite<\n>> '2009-10-01 00:00:00+02'::timestamp with time zone);\n>>\n>> and we define necessery indexes of course\n>>\n>> CREATE INDEX cdr_full_partitioned_y2009_m09_id_crx_group_to_key1\n>> ON accounting.cdr_full_partitioned_y2009_m09\n>> USING btree\n>> (id_crx_group_from, start_time_invite, call_status);\n>>\n>>\n>> The problem appears when we want to select calls for specified user\n>> with specified call_Status e.g:\n>> SELECT * FROM accounting.cdr_full_partitioned\n>> WHERE\n>> id_crx_group_from='522921' AND\n>> call_status='FINS' AND\n>> start_time_invite>='2009-09-28 00:00:00+02' AND\n>> start_time_invite<'2009-10-12 23:59:59+02' AND\n>> ORDER BY start_time_invite LIMIT '100' OFFSET 0\n>>\n>> you can see execution plan http://szymanskich.net/pub/postgres/full.jpg\n>> as you see 20000 rows were selected and after were sorted what take\n>> very long about 30-40s and after sorting it limit\n>> result to 100 rows.\n>>\n>> Using table without partition\n>>\n>> SELECT * FROM accounting.cdr_full WHERE\n>> (id_crx_group_from='522921') AND (\n>> call_status='FINS' ) AND (start_time_invite>='2009-01-28\n>> 00:00:00+02')\n>> AND (start_time_invite<'2009-10-12 23:59:59+02') ORDER BY\n>> start_time_invite LIMIT '100' OFFSET 0\n>>\n>> execution plan is very simple\n>> \"Limit (cost=0.00..406.40 rows=100 width=456)\"\n>> \" -> Index Scan using\n>> cdr_full_crx_group_from_start_time_invite_status_ind on cdr_full\n>> (cost=0.00..18275.76 rows=4497 width=456)\"\n>> \" Index Cond: ((id_crx_group_from = 522921::bigint) AND\n>> (start_time_invite>= '2009-01-27 23:00:00+01'::timestamp with time\n>> zone) AND (start_time_invite< '2009-10-12 23:59:59+02'::timestamp\n>> with time zone) AND ((call_status)::text = 'FINS'::text))\"\n>>\n>> it use index to fetch first 100 rows and it is super fast and take\n>> less than 0.5s. There is no rows sorting!\n>> I've tried to execute the same query on one partition:\n>> SELECT * FROM accounting.cdr_full_partitioned_y2009_m09\n>> WHERE (id_crx_group_from='509498') AND (\n>> call_status='FINS' ) AND (start_time_invite>='2009-09-01\n>> 00:00:00+02')\n>> AND (start_time_invite<'2009-10-12 23:59:59+02')\n>>\n>> You can see execution plan http://szymanskich.net/pub/postgres/ononeprtition.jpg\n>> and query is superfast because there is no sorting. The question is\n>> how to speed up query when we use partitioning? So far I have not\n>> found solution. I'm wonder how do you solve problems\n>> when result from partition must be sorted and after we want to display\n>> only first/last 100 rows?\n>> We can use own partitioning mechanism and partitioning data using\n>> id_crx_group_from and create dynamic query (depending on\n>> id_crx_group_from we can execute query on one partition) but it is not\n>> most beautiful solution.\n>> \n>\n> Yeah - unfortunately the query planner is not real smart about\n> partitioned tables yet. I can't make anything of the JPG link you\n> posted. Can you post the EXPLAIN ANALYZE output for the case that is\n> slow? What PG version is this?\n>\n> ...Robert\n>\n> \nI have a similar, recent thread titled Partitioned Tables and ORDER BY \nwith a decent break down. 
I think I am hitting the same issue Michal is.\n\nEssentially doing a SELECT against the parent with appropriate \nconstraint columns in the WHERE clause is very fast (uses index scans \nagainst correct child table only) but the moment you add an ORDER BY it \nseems to be merging the parent (an empty table) and the child, sorting \nthe results, and sequential scanning. So it does still scan only the \nappropriate child table in the end but indexes are useless.\n\nUnfortunately the only workaround I can come up with is to query the \npartitioned child tables directly. In my case the partitions are rather \nlarge so the timing difference is 522ms versus 149865ms.\n\n\n\n\n\n\n\n\n\nOn Mon, Oct 12, 2009 at 10:14 AM, Michal Szymanski <[email protected]> wrote:\n \n\nWe have performance problem with query on partitioned table when query\nuse order by and we want to use first/last rows from result set.\nMore detail description:\nWe have big table where each row is one telephone call (CDR).\nDefinitnion of this table look like this:\nCREATE TABLE accounting.cdr_full_partitioned  (it is parrent table)\n(\n cdr_id bigint NOT NULL,\n id_crx_group_from bigint,                                             -- identifier of user\n start_time_invite timestamp with time zone,   -- start call time\n call_status VARCHAR                                                   -- FINF-call finished, FINC-call\nunfinished\n ..some extra data..\n)\n\nWe creating 12 partitions using 'start_time_invite' column, simply we\ncreate one partition for each month. We create costraints like this:\nALTER TABLE accounting.cdr_y2009_m09\n ADD CONSTRAINT y2009m09 CHECK (start_time_invite >= '2009-09-01\n00:00:00+02'::timestamp with time zone AND start_time_invite <\n'2009-10-01 00:00:00+02'::timestamp with time zone);\n\nand we define necessery indexes of course\n\nCREATE INDEX cdr_full_partitioned_y2009_m09_id_crx_group_to_key1\n ON accounting.cdr_full_partitioned_y2009_m09\n USING btree\n (id_crx_group_from, start_time_invite, call_status);\n\n\nThe problem appears when we want to select calls for specified user\nwith specified call_Status e.g:\n SELECT * FROM accounting.cdr_full_partitioned\n  WHERE\n  id_crx_group_from='522921' AND\n  call_status='FINS' AND\n  start_time_invite>='2009-09-28 00:00:00+02' AND\n  start_time_invite<'2009-10-12 23:59:59+02'   AND\n ORDER BY start_time_invite  LIMIT '100' OFFSET 0\n\nyou can see execution plan  http://szymanskich.net/pub/postgres/full.jpg\n as you see 20000 rows were selected and after were sorted what take\nvery long about 30-40s and after sorting it limit\nresult to 100 rows.\n\nUsing table without partition\n\n SELECT * FROM accounting.cdr_full    WHERE\n(id_crx_group_from='522921') AND (\n  call_status='FINS' ) AND (start_time_invite>='2009-01-28\n00:00:00+02')\n  AND (start_time_invite<'2009-10-12 23:59:59+02') ORDER BY\nstart_time_invite  LIMIT '100' OFFSET 0\n\nexecution plan is very simple\n\"Limit  (cost=0.00..406.40 rows=100 width=456)\"\n\"  ->  Index Scan using\ncdr_full_crx_group_from_start_time_invite_status_ind on cdr_full\n(cost=0.00..18275.76 rows=4497 width=456)\"\n\"        Index Cond: ((id_crx_group_from = 522921::bigint) AND\n(start_time_invite >= '2009-01-27 23:00:00+01'::timestamp with time\nzone) AND (start_time_invite < '2009-10-12 23:59:59+02'::timestamp\nwith time zone) AND ((call_status)::text = 'FINS'::text))\"\n\nit use index to fetch first 100 rows and it is super fast and take\nless than 0.5s. 
There is no rows sorting!\nI've tried to execute the same query on one partition:\n SELECT * FROM accounting.cdr_full_partitioned_y2009_m09\n WHERE (id_crx_group_from='509498') AND (\n  call_status='FINS' ) AND (start_time_invite>='2009-09-01\n00:00:00+02')\n  AND (start_time_invite<'2009-10-12 23:59:59+02')\n\nYou can see execution plan http://szymanskich.net/pub/postgres/ononeprtition.jpg\nand query is superfast because there is no sorting. The question is\nhow to speed up query when we use partitioning? So far I have not\nfound solution. I'm wonder how do you solve problems\nwhen result from partition must be sorted and after we want to display\nonly first/last 100 rows?\nWe can use own partitioning mechanism and partitioning data using\nid_crx_group_from and create dynamic query (depending on\nid_crx_group_from we can execute query on one partition) but it is not\nmost beautiful solution.\n \n\n\nYeah - unfortunately the query planner is not real smart about\npartitioned tables yet. I can't make anything of the JPG link you\nposted. Can you post the EXPLAIN ANALYZE output for the case that is\nslow? What PG version is this?\n\n...Robert\n\n \n\nI have a similar, recent thread titled Partitioned Tables and ORDER BY\nwith a decent break down.  I think I am hitting the same issue Michal\nis. \n\nEssentially doing a SELECT against the parent with appropriate\nconstraint columns in the WHERE clause is very fast (uses index scans\nagainst correct child table only) but the moment you add an ORDER BY it\nseems to be merging the parent (an empty table) and the child, sorting\nthe results, and sequential scanning.  So it does still scan only the\nappropriate child table in the end but indexes are useless.\n\nUnfortunately the only workaround I can come up with is to query the\npartitioned child tables directly.  In my case the partitions are\nrather large so the timing difference is 522ms versus 149865ms.", "msg_date": "Mon, 19 Oct 2009 06:58:15 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with sorting and LIMIT on partitioned table" }, { "msg_contents": "On Mon, Oct 19, 2009 at 6:58 AM, Joe Uhl <[email protected]> wrote:\n> I have a similar, recent thread titled Partitioned Tables and ORDER BY with\n> a decent break down.  I think I am hitting the same issue Michal is.\n>\n> Essentially doing a SELECT against the parent with appropriate constraint\n> columns in the WHERE clause is very fast (uses index scans against correct\n> child table only) but the moment you add an ORDER BY it seems to be merging\n> the parent (an empty table) and the child, sorting the results, and\n> sequential scanning.  So it does still scan only the appropriate child table\n> in the end but indexes are useless.\n>\n> Unfortunately the only workaround I can come up with is to query the\n> partitioned child tables directly.  In my case the partitions are rather\n> large so the timing difference is 522ms versus 149865ms.\n\nThese questions are all solvable depending on what you define\n'solution' as. I would at this point be thinking in terms of wrapping\nthe query in a function using dynamic sql in plpgsql...using some ad\nhoc method of determining which children to hit and awkwardly looping\nthem and enforcing limit, ordering, etc at that level. 
Yes, it sucks,\nbut it only has to be done for classes of queries constraint exclusion\ncan't handle and you will only handle a couple of cases most likely.\n\nFor this reason, when I set up my partitioning strategies, I always\ntry to divide the data such that you rarely if ever, have to fire\nqueries that have to touch multiple partitions simultaneously.\n\nmerlin\n", "msg_date": "Mon, 19 Oct 2009 22:50:46 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with sorting and LIMIT on partitioned table" }, { "msg_contents": "On Mon, Oct 19, 2009 at 6:58 AM, Joe Uhl <[email protected]> wrote:\n>> I have a similar, recent thread titled Partitioned Tables and ORDER BY with\n>> a decent break down. I think I am hitting the same issue Michal is.\n>>\n>> Essentially doing a SELECT against the parent with appropriate constraint\n>> columns in the WHERE clause is very fast (uses index scans against correct\n>> child table only) but the moment you add an ORDER BY it seems to be merging\n>> the parent (an empty table) and the child, sorting the results, and\n>> sequential scanning. So it does still scan only the appropriate child table\n>> in the end but indexes are useless.\n>>\n>> Unfortunately the only workaround I can come up with is to query the\n>> partitioned child tables directly. In my case the partitions are rather\n>> large so the timing difference is 522ms versus 149865ms.\n>> \n>\n> These questions are all solvable depending on what you define\n> 'solution' as. I would at this point be thinking in terms of wrapping\n> the query in a function using dynamic sql in plpgsql...using some ad\n> hoc method of determining which children to hit and awkwardly looping\n> them and enforcing limit, ordering, etc at that level. Yes, it sucks,\n> but it only has to be done for classes of queries constraint exclusion\n> can't handle and you will only handle a couple of cases most likely.\n>\n> For this reason, when I set up my partitioning strategies, I always\n> try to divide the data such that you rarely if ever, have to fire\n> queries that have to touch multiple partitions simultaneously.\n>\n> merlin\n> \nThis definitely sounds like a workable approach. I am doing something a \nlittle similar on the insert/update side to trick hibernate into writing \ndata correctly into partitioned tables when it only knows about the parent.\n\nFor anyone else hitting this issue and using hibernate my solution on \nthe select side ended up being session-specific hibernate interceptors \nthat rewrite the from clause after hibernate prepares the statement. \nThis seems to be working alright especially since in our case the code, \nwhile not aware of DB partitioning, has the context necessary to select \nthe right partition under the hood.\n\nThankfully we haven't yet had queries that need to hit multiple \npartitions so this works okay without too much logic for now. I suppose \nif I needed to go multi-partition on single queries and wanted to \ncontinue down the hibernate interceptor path I could get more creative \nwith the from clause rewriting and start using UNIONs, or switch to a \nPostgres-level solution like you are describing.\n\n\n\n\n\n\nOn Mon, Oct 19, 2009 at 6:58 AM, Joe Uhl <[email protected]> wrote:\n\n\nI have a similar, recent thread titled Partitioned Tables and ORDER BY with\na decent break down.  
I think I am hitting the same issue Michal is.\n\nEssentially doing a SELECT against the parent with appropriate constraint\ncolumns in the WHERE clause is very fast (uses index scans against correct\nchild table only) but the moment you add an ORDER BY it seems to be merging\nthe parent (an empty table) and the child, sorting the results, and\nsequential scanning.  So it does still scan only the appropriate child table\nin the end but indexes are useless.\n\nUnfortunately the only workaround I can come up with is to query the\npartitioned child tables directly.  In my case the partitions are rather\nlarge so the timing difference is 522ms versus 149865ms.\n \n\n\nThese questions are all solvable depending on what you define\n'solution' as. I would at this point be thinking in terms of wrapping\nthe query in a function using dynamic sql in plpgsql...using some ad\nhoc method of determining which children to hit and awkwardly looping\nthem and enforcing limit, ordering, etc at that level. Yes, it sucks,\nbut it only has to be done for classes of queries constraint exclusion\ncan't handle and you will only handle a couple of cases most likely.\n\nFor this reason, when I set up my partitioning strategies, I always\ntry to divide the data such that you rarely if ever, have to fire\nqueries that have to touch multiple partitions simultaneously.\n\nmerlin\n \n\nThis definitely sounds like a workable approach.  I am doing something\na little similar on the insert/update side to trick hibernate into\nwriting data correctly into partitioned tables when it only knows about\nthe parent.\n\nFor anyone else hitting this issue and using hibernate my solution on\nthe select side ended up being session-specific hibernate interceptors\nthat rewrite the from clause after hibernate prepares the statement. \nThis seems to be working alright especially since in our case the code,\nwhile not aware of DB partitioning, has the context necessary to select\nthe right partition under the hood.\n\nThankfully we haven't yet had queries that need to hit multiple\npartitions so this works okay without too much logic for now.  I\nsuppose if I needed to go multi-partition on single queries and wanted\nto continue down the hibernate interceptor path I could get more\ncreative with the from clause rewriting and start using UNIONs, or\nswitch to a Postgres-level solution like you are describing.", "msg_date": "Tue, 20 Oct 2009 06:31:18 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with sorting and LIMIT on partitioned table" } ]
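Short of the dynamic-SQL wrapper Merlin describes, the same effect can be written out by hand: push the ORDER BY and LIMIT into each child table and merge the small per-partition results, so the (id_crx_group_from, start_time_invite, call_status) index can drive every branch. This is only a sketch against Michal's schema; the October partition name, and the assumption that just two monthly partitions overlap his date range, are guesses rather than anything stated in the thread.

(SELECT * FROM accounting.cdr_full_partitioned_y2009_m09
   WHERE id_crx_group_from = '522921' AND call_status = 'FINS'
     AND start_time_invite >= '2009-09-28 00:00:00+02'
     AND start_time_invite <  '2009-10-12 23:59:59+02'
   ORDER BY start_time_invite LIMIT 100)
UNION ALL
(SELECT * FROM accounting.cdr_full_partitioned_y2009_m10
   WHERE id_crx_group_from = '522921' AND call_status = 'FINS'
     AND start_time_invite >= '2009-09-28 00:00:00+02'
     AND start_time_invite <  '2009-10-12 23:59:59+02'
   ORDER BY start_time_invite LIMIT 100)
ORDER BY start_time_invite LIMIT 100;

Each parenthesised branch returns at most 100 rows, so the outer sort has to order at most 200 rows instead of the 20,000 the parent-table plan was sorting. A plpgsql function along the lines Merlin suggests would simply generate these branches dynamically from the requested date range.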
[ { "msg_contents": "Any issues, has it baked long enough, is it time for us 8.3 folks to deal\nwith the pain and upgrade?\n\nAnymore updates regarding 8.4 and slon 1.2 as well, since I usually\nbuild/upgrade both at the same time.\n\nThanks\nTory\n\nAny issues, has it baked long enough, is it time for us 8.3 folks to deal with the pain and upgrade?Anymore updates regarding 8.4 and slon 1.2 as well, since I usually build/upgrade both at the same time.Thanks\nTory", "msg_date": "Mon, 12 Oct 2009 12:06:37 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Are folks running 8.4 in production environments? and 8.4 and slon\n\t1.2?" }, { "msg_contents": "On Mon, Oct 12, 2009 at 1:06 PM, Tory M Blue <[email protected]> wrote:\n> Any issues, has it baked long enough, is it time for us 8.3 folks to deal\n> with the pain and upgrade?\n\nI am running 8.4.1 for my stats and search databases, and it's working fine.\n\n> Anymore updates regarding 8.4 and slon 1.2 as well, since I usually\n> build/upgrade both at the same time.\n\nI don't think 1.2 supports 8.4 just yet, and 2.0.3 or so is still not\nstable enough for production (I had major unexplained outages with it)\nso for now, no 8.4 with slony.\n", "msg_date": "Tue, 13 Oct 2009 01:03:10 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are folks running 8.4 in production environments? and\n\t8.4 and slon 1.2?" }, { "msg_contents": "Tory M Blue schrieb:\n\n> Any issues, has it baked long enough, is it time for us 8.3 folks to deal\n> with the pain and upgrade?\n\nI've upgraded all my databases to 8.4. They pain was not so big, the new \n-j Parameter from pg_restore is fantastic. I really like the new \nfunctions around Pl/PGSQL. All is stable and fast.\n\nGreetings from Germany,\nTorsten\n-- \nhttp://www.dddbl.de - ein Datenbank-Layer, der die Arbeit mit 8 \nverschiedenen Datenbanksystemen abstrahiert,\nQueries von Applikationen trennt und automatisch die Query-Ergebnisse \nauswerten kann.\n", "msg_date": "Tue, 13 Oct 2009 09:10:32 +0200", "msg_from": "=?ISO-8859-1?Q?Torsten_Z=FChlsdorff?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are folks running 8.4 in production environments? and\n\t8.4 and slon 1.2?" }, { "msg_contents": "On Tue, Oct 13, 2009 at 01:03:10AM -0600, Scott Marlowe wrote:\n> On Mon, Oct 12, 2009 at 1:06 PM, Tory M Blue <[email protected]> wrote:\n> > Any issues, has it baked long enough, is it time for us 8.3 folks to deal\n> > with the pain and upgrade?\n> \n> I am running 8.4.1 for my stats and search databases, and it's working fine.\n> \n> > Anymore updates regarding 8.4 and slon 1.2 as well, since I usually\n> > build/upgrade both at the same time.\n> \n> I don't think 1.2 supports 8.4 just yet, and 2.0.3 or so is still not\n> stable enough for production (I had major unexplained outages with it)\n> so for now, no 8.4 with slony.\n> \nslony-1.2.17-rc2 works fine with version 8.4 in my limited testing.\nI have not been able to get replication to work reliably with any\ncurrent release of slony-2.x. There was a recent comment that the\nlatest version in CVS has the 2.x bug fixed but I have not had a\nchance to try.\n\nRegards,\nKen\n\n", "msg_date": "Tue, 13 Oct 2009 07:53:55 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are folks running 8.4 in production environments?\n\tand 8.4 and slon 1.2?" 
}, { "msg_contents": "Torsten Z�hlsdorff wrote:\n> Tory M Blue schrieb:\n>\n>> Any issues, has it baked long enough, is it time for us 8.3 folks to\n>> deal\n>> with the pain and upgrade?\n>\n> I've upgraded all my databases to 8.4. They pain was not so big, the\n> new -j Parameter from pg_restore is fantastic. I really like the new\n> functions around Pl/PGSQL. All is stable and fast.\n>\n> Greetings from Germany,\n> Torsten\nI am running all my production work on 8.4 at this point; no problems of\nnote.\n\n-- Karl", "msg_date": "Sat, 17 Oct 2009 22:55:43 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are folks running 8.4 in production environments? and\n\t8.4 and slon 1.2?" }, { "msg_contents": "> Torsten Zühlsdorff wrote:\n>\n> >\n> > I've upgraded all my databases to 8.4. They pain was not so big, the\n> > new -j Parameter from pg_restore is fantastic. I really like the new\n> > functions around Pl/PGSQL. All is stable and fast.\n> >\n> > Greetings from Germany,\n> > Torsten\n>\n\nOn Sat, Oct 17, 2009 at 8:55 PM, Karl Denninger <[email protected]> wrote:\n\n> I am running all my production work on 8.4 at this point; no problems of\n> note.\n>\n> -- Karl\n>\n\nThanks guys, ya running slon , must be replicated so it seems 8.4 is good,\nbut need to wait for the slon+pg to be good..\n\nAlso not real pain? A full dump and restore again, can't see that not being\npainful for a DB of any real size.\n\nThanks for the response guys.\n\nUnd Danke Torsten :)\n\nTory\n\nTorsten Zühlsdorff wrote:\n\n>\n> I've upgraded all my databases to 8.4. They pain was not so big, the\n> new -j Parameter from pg_restore is fantastic. I really like the new\n> functions around Pl/PGSQL. All is stable and fast.\n>\n> Greetings from Germany,\n> TorstenOn Sat, Oct 17, 2009 at 8:55 PM, Karl Denninger <[email protected]> wrote: \n\nI am running all my production work on 8.4 at this point; no problems of\nnote.\n\n-- KarlThanks guys, ya running slon , must be replicated so it seems 8.4 is good, but need to wait for the slon+pg to be good..Also not real pain? A full dump and restore again, can't see that not being painful for a DB of any real size.\nThanks for the response guys.Und Danke Torsten :)Tory", "msg_date": "Sat, 17 Oct 2009 21:08:36 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Are folks running 8.4 in production environments? and\n\t8.4 and slon 1.2?" }, { "msg_contents": "Tory M Blue wrote:\n>\n> Torsten Z�hlsdorff wrote:\n>\n> >\n> > I've upgraded all my databases to 8.4. They pain was not so big, the\n> > new -j Parameter from pg_restore is fantastic. I really like the new\n> > functions around Pl/PGSQL. All is stable and fast.\n> >\n> > Greetings from Germany,\n> > Torsten\n>\n>\n> On Sat, Oct 17, 2009 at 8:55 PM, Karl Denninger <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> I am running all my production work on 8.4 at this point; no\n> problems of\n> note.\n>\n> -- Karl\n>\n>\n> Thanks guys, ya running slon , must be replicated so it seems 8.4 is\n> good, but need to wait for the slon+pg to be good..\n>\n> Also not real pain? A full dump and restore again, can't see that not\n> being painful for a DB of any real size.\n>\n> Thanks for the response guys.\n>\n> Und Danke Torsten :)\n>\n> Tory\nI am running Slony on 8.4; it complains on init but I have checked it\nEXTENSIVELY and it is replicating fine. 
I have a distributed forum\napplication that uses Slony as part of the backend architecture and it\nwould choke INSTANTLY if there were problems.\n\nDidn't dump and restore. This is how I migrated it:\n\n1. Set up 8.4 on the same machine.\n2. Use slony to add the 8.4 \"instance\" as a slave.\n3. Wait for sync.\n4. Stop client software.\n5. Change master to the 8.4 instance.\n6. Shut down 8.3 instance, change 8.4 instance to the original port\n(that 8.3 was running on) and restart.\n7. Bring client application back up.\n8. Verify all is ok, drop the 8.3 instance from replication.\n\nTotal downtime was about 2 minutes for steps 4-8, the long wait was for\n#3, which has no consequence for the clients.\n\nNote that this requires 2x disk storage + some + enough I/O and CPU\nbandwidth to get away with the additional replication on the master\nmachine. If you don't have that you need a second machine you can do\nthis to and then swap the client code to run against when the\nreplication is complete.\n\n-- Karl", "msg_date": "Sat, 17 Oct 2009 23:13:47 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are folks running 8.4 in production environments? and\n\t8.4 and slon 1.2?" }, { "msg_contents": "On Sat, Oct 17, 2009 at 10:13 PM, Karl Denninger <[email protected]> wrote:\n> Tory M Blue wrote:\n> > Also not real pain? A full dump and restore again, can't see that not being\n> > painful for a DB of any real size.\n\n> I am running Slony on 8.4; it complains on init but I have checked it\n> EXTENSIVELY and it is replicating fine.  I have a distributed forum\n> application that uses Slony as part of the backend architecture and it would\n> choke INSTANTLY if there were problems.\n>\n> Didn't dump and restore.\n\nYeah, we haven't done that since pgsql 8.1. Now on 8.3. Won't be\ndoing it for 8.4 either.\n\nWe use slony to do it here and we wait until it's proven stable as a\nslave before we promote 8.4 / new slony to take over.\n", "msg_date": "Sat, 17 Oct 2009 23:05:40 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are folks running 8.4 in production environments? and\n\t8.4 and slon 1.2?" } ]
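For anyone who does take the dump-and-reload route Tory is wary of, the -j switch Torsten praises belongs to pg_restore and needs a custom-format archive to work on. A minimal sketch, in which the database name, file name and job count are placeholders rather than anything taken from the thread (add -h/-p as needed to point each command at the right cluster):

pg_dump -Fc -f app.dump app       # custom-format dump from the old 8.3 cluster
createdb app                      # empty database on the new 8.4 cluster
pg_restore -j 4 -d app app.dump   # rebuild tables, indexes and constraints in parallel

Four jobs is only an assumption; a value near the core count of the restoring machine, bounded by what its disks can absorb, is the usual starting point, and most of the win shows up in the index-build and constraint-validation phases.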
[ { "msg_contents": "I'm doing \\copy from file into table. There are two files one with 7 million\nlines and the other with around 24 million and the data goes into separate\ntable. There are only three columns in each file and four in each table (the\nprimary key, id serial is the fourt). The data is about 150 MB and 450 MB\nand takes from 5 to 20 minutes to load into the database.\n\nWhat I'm wondering about is what parameters to tweak to improve the\noperation and shorten the time of the \\copy ? I think I have tweaked most of\nthe available in postgresql.conf, that is shared_buffer, temp_buffers,\nwork_mem, maintenance_work_mem, max_fsm_pages. Maybe someone could point out\nthe one really related to \\copy ?\n\nI would hope that there is some way for me to improve the operation. I used\nto do the operation on a MySQL server and simple time measurements gives me\na difference of a multiple 3 to 4, where the MySQL is faster.\n\nI would also be satisfied to know if this is an expected difference.\n\nRegards, Sigurgeir\n\nI'm doing \\copy from file into table. There are two files one with 7 million lines and the other with around 24 million and the data goes into separate table. There are only three columns in each file and four in each table (the primary key, id serial is the fourt). The data is about 150 MB and 450 MB and takes from 5 to 20 minutes to load into the database.\nWhat I'm wondering about is what parameters to tweak to improve the operation and shorten the time of the \\copy ? I think I have tweaked most of the available in postgresql.conf, that is shared_buffer, temp_buffers, work_mem, maintenance_work_mem, max_fsm_pages. Maybe someone could point out the one really related to \\copy ?\nI would hope that there is some way for me to improve the operation. I used to do the operation on a MySQL server and simple time measurements gives me a difference of a multiple 3 to 4, where the MySQL is faster.\nI would also be satisfied to know if this is an expected difference.Regards, Sigurgeir", "msg_date": "Mon, 12 Oct 2009 22:05:36 +0000", "msg_from": "Sigurgeir Gunnarsson <[email protected]>", "msg_from_op": true, "msg_subject": "Issues with \\copy from file" }, { "msg_contents": "Sigurgeir Gunnarsson escreveu:\n> What I'm wondering about is what parameters to tweak to improve the\n> operation and shorten the time of the \\copy ? I think I have tweaked\n> most of the available in postgresql.conf, that is shared_buffer,\n> temp_buffers, work_mem, maintenance_work_mem, max_fsm_pages. Maybe\n> someone could point out the one really related to \\copy ?\n> \nYou don't show us your table definitions. You don't say what postgresql\nversion you're using. Let's suppose archiving is disabled, you're bulk loading\ntable foo and, you're using version >= 8.3. Just do:\n\nBEGIN;\nTRUNCATE TABLE foo;\nCOPY foo FROM ...;\nCOMMIT;\n\nPostgreSQL will skip WAL writes and just fsync() the table at the end of the\ncommand.\n\nAlso, take a look at [1].\n\n[1] http://www.postgresql.org/docs/current/interactive/populate.html\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Sun, 18 Oct 2009 03:33:27 -0200", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with \\copy from file" }, { "msg_contents": "On Mon, Oct 12, 2009 at 4:05 PM, Sigurgeir Gunnarsson\n<[email protected]> wrote:\n> I'm doing \\copy from file into table. 
There are two files one with 7 million\n> lines and the other with around 24 million and the data goes into separate\n> table. There are only three columns in each file and four in each table (the\n> primary key, id serial is the fourt). The data is about 150 MB and 450 MB\n> and takes from 5 to 20 minutes to load into the database.\n\nYou can only write data then commit it so fast to one drive, and that\nspeed is usually somewhere in the megabyte per second range. 450+150\nin 5 minutes is 120 Megs per second, that's pretty fast, but is likely\nthe max speed of a modern super fast 15k rpm drive. If it's taking 20\nminutes then it's 30 Megs per second which is still really good if\nyou're in the middle of a busy afternoon and the db has other things\nto do.\n\nThe key here is monitoring your machine to see what you're maxing out.\n If you're at 100% IO then cpu tricks and tuning aren't likely to\nhelp. Unless you can reduce the IO load to do the same thing (things\nlike turning off fsync might help streamline some writes.)\n\nTo really tell what the numbers bi / bo / wa mean you really need to\nrun some artificial tests to see what your machine can do at\nquiescence. If you can get 120Meg per second streamed, and 20 Meg per\nsecond random on 8k blocks, then 5 minutes is the top side of what you\ncan ever expect to get. If you can get 600Meg per sec then you're\nthere yet, and might need multiple threads to load data fast.\n\npg_restore supports the -j switch for this. But it only works on\nseparate tables so you'd be limited to two at once right now since\nthere's two tables.\n\n> What I'm wondering about is what parameters to tweak to improve the\n> operation and shorten the time of the \\copy ?\n\ncopy has a minimum cost in time per megabyte that you can't get out\nof. The trick is knowing when you've gotten there (or damned close)\nand quit banging your head on the wall about it.\n\n\n> I think I have tweaked most of\n> the available in postgresql.conf, that is shared_buffer, temp_buffers,\n> work_mem, maintenance_work_mem, max_fsm_pages. Maybe someone could point out\n> the one really related to \\copy ?\n\nTry cranking up your checkpoint segments to several hundred. Note\nthis may delay restart on a crash. If you crash a lot you have other\nproblems, but still, it lets you know that if someone trips over a\ncord in the afternoon you're gonna have to wait 10 or 20 or 30 minutes\nfor the machine to come back up as it replays the log files.\n\n\n> I would hope that there is some way for me to improve the operation. I used\n> to do the operation on a MySQL server and simple time measurements gives me\n> a difference of a multiple 3 to 4, where the MySQL is faster.\n\nWith innodb tables? If it's myisam tables it doesn't really count,\nunless your data is unimportant. In which case myisam may be the\nbetter choice.\n\n> I would also be satisfied to know if this is an expected difference.\n\nI'm not entirely sure it's a difference really. I can believe one if\nI see it on my hardware, where I run both dbs. Pgsql is much faster\non my machines that mysql for this type of stuff.\n\nNote that reading the file from the same file system that you're\nwriting to is gonna be slow. 
It'd likely be fastest to read from one\ndrive that is FAST but not the main storage drive.\n\nNext, are pg_xlog files on the same partition as the main db?\n\nI can copy files into my big servers in teh 350 to 450\nmegabytes/second range if the machines are otherwise quiet (early am)\nand sustain 150 to 200 even during moderately high loads during the\nday.\n", "msg_date": "Sun, 18 Oct 2009 00:10:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with \\copy from file" }, { "msg_contents": "On Sun, 18 Oct 2009, Scott Marlowe wrote:\n> You can only write data then commit it so fast to one drive, and that\n> speed is usually somewhere in the megabyte per second range. 450+150\n> in 5 minutes is 120 Megs per second, that's pretty fast, but is likely\n> the max speed of a modern super fast 15k rpm drive. If it's taking 20\n> minutes then it's 30 Megs per second which is still really good if\n> you're in the middle of a busy afternoon and the db has other things\n> to do.\n\nYou're out by a factor of 60. That's minutes, not seconds.\n\nMore relevant is the fact that Postgres will normally log changes in the \nWAL, effectively writing the data twice. As Euler said, the trick is to \ntell Postgres that noone else will need to see the data, so it can skip \nthe WAL step:\n\n> BEGIN;\n> TRUNCATE TABLE foo;\n> COPY foo FROM ...;\n> COMMIT;\n\nI see upward of 100MB/s over here when I do this.\n\nMatthew\n\n-- \n Patron: \"I am looking for a globe of the earth.\"\n Librarian: \"We have a table-top model over here.\"\n Patron: \"No, that's not good enough. Don't you have a life-size?\"\n Librarian: (pause) \"Yes, but it's in use right now.\"\n", "msg_date": "Mon, 19 Oct 2009 10:35:55 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with \\copy from file" }, { "msg_contents": "I hope the issue is still open though I haven't replied to it before.\n\nEuler mentioned that I did not provide any details about my system. I'm\nusing version 8.3 and with most settings default on an old machine with 2 GB\nof mem. The table definition is simple, four columns; id, value, x, y where\nid is primary key and x, y are combined into an index.\n\nI'm not sure if it matters but unlike Euler's suggestion I'm using \\copy\ninstead of COPY. Regarding my comparison to MySQL, it is completely valid.\nThis is done on the same computer, using the same disk on the same platform.\n>From that I would derive that IO is not my problem, unless postgresql is\ndoing IO twice while MySQL only once.\n\nI guess my tables are InnoDB since that is the default type (or so I think).\nBEGIN/COMMIT I did not find change much. 
Are there any other suggestions ?\n\nMy postgres.conf:\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n# - Memory -\nshared_buffers = 16MB # min 128kB or max_connections*16kB\ntemp_buffers = 16MB # min 800kB\n#max_prepared_transactions = 5 # can be 0 or more\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 128MB # min 64kB\nmaintenance_work_mem = 128MB # min 1MB\n#max_stack_depth = 2MB # min 100kB\n\n# - Free Space Map -\nmax_fsm_pages = 2097152 # min max_fsm_relations*16, 6 bytes\neach\nmax_fsm_relations = 500 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n#max_files_per_process = 1000 # min 25\n#shared_preload_libraries = '' # (change requires restart)\n# - Cost-Based Vacuum Delay -\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 1-10000 credits\n\n#------------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#------------------------------------------------------------------------------\n\n# - Settings -\n#fsync = on # turns forced synchronization on or\noff\n#synchronous_commit = on # immediate fsync at commit\n#wal_sync_method = fsync # the default is the first option\n#full_page_writes = on # recover from partial page writes\n#wal_buffers = 64kB # min 32kB\n#wal_writer_delay = 200ms # 1-10000 milliseconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\ncheckpoint_segments = 64 # in logfile segments, min 1, 16MB\neach\n#checkpoint_timeout = 5min # range 30s-1h\n#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 -\n1.0\n#checkpoint_warning = 30s # 0 is off\n\n# - Archiving -\n#archive_mode = off # allows archiving to be done\n#archive_command = '' # command to use to archive a logfile\nsegment\n#archive_timeout = 0 # force a logfile segment switch after this\n\n#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#------------------------------------------------------------------------------\n\nautovacuum = on # Enable autovacuum subprocess?\n'on'\n\n\n2009/10/19 Matthew Wakeling <[email protected]>\n\n> On Sun, 18 Oct 2009, Scott Marlowe wrote:\n>\n>> You can only write data then commit it so fast to one drive, and that\n>> speed is usually somewhere in the megabyte per second range. 450+150\n>> in 5 minutes is 120 Megs per second, that's pretty fast, but is likely\n>> the max speed of a modern super fast 15k rpm drive. If it's taking 20\n>> minutes then it's 30 Megs per second which is still really good if\n>> you're in the middle of a busy afternoon and the db has other things\n>> to do.\n>>\n>\n> You're out by a factor of 60. That's minutes, not seconds.\n>\n> More relevant is the fact that Postgres will normally log changes in the\n> WAL, effectively writing the data twice. 
As Euler said, the trick is to tell\n> Postgres that noone else will need to see the data, so it can skip the WAL\n> step:\n>\n>\n> BEGIN;\n>> TRUNCATE TABLE foo;\n>> COPY foo FROM ...;\n>> COMMIT;\n>>\n>\n> I see upward of 100MB/s over here when I do this.\n>\n> Matthew\n>\n> --\n> Patron: \"I am looking for a globe of the earth.\"\n> Librarian: \"We have a table-top model over here.\"\n> Patron: \"No, that's not good enough. Don't you have a life-size?\"\n> Librarian: (pause) \"Yes, but it's in use right now.\"\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI hope the issue is still open though I haven't replied to it before.Euler mentioned that I did not provide any details about my system. I'm using version 8.3 and with most settings default on an old machine with 2 GB of mem. The table definition is simple, four columns; id, value, x, y where id is primary key and x, y are combined into an index.\nI'm not sure if it matters but unlike Euler's suggestion I'm using \\copy instead of COPY. Regarding my comparison to MySQL, it is completely valid. This is done on the same computer, using the same disk on the same platform. From that I would derive that IO is not my problem, unless postgresql is doing IO twice while MySQL only once.\nI guess my tables are InnoDB since that is the default type (or so I think). BEGIN/COMMIT I did not find change much. Are there any other suggestions ?My postgres.conf:#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)#------------------------------------------------------------------------------# - Memory -shared_buffers = 16MB                   # min 128kB or max_connections*16kBtemp_buffers = 16MB                     # min 800kB\n#max_prepared_transactions = 5          # can be 0 or more# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 128MB                        # min 64kBmaintenance_work_mem = 128MB            # min 1MB#max_stack_depth = 2MB                  # min 100kB# - Free Space Map -max_fsm_pages = 2097152                 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 500                 # min 100, ~70 bytes each# - Kernel Resource Usage -#max_files_per_process = 1000           # min 25#shared_preload_libraries = ''          # (change requires restart)\n# - Cost-Based Vacuum Delay -#vacuum_cost_delay = 0                  # 0-1000 milliseconds#vacuum_cost_page_hit = 1               # 0-10000 credits#vacuum_cost_page_miss = 10             # 0-10000 credits#vacuum_cost_page_dirty = 20            # 0-10000 credits\n#vacuum_cost_limit = 200                # 1-10000 credits#------------------------------------------------------------------------------# WRITE AHEAD LOG#------------------------------------------------------------------------------\n# - Settings -#fsync = on                             # turns forced synchronization on or off#synchronous_commit = on                # immediate fsync at commit#wal_sync_method = fsync                # the default is the first option \n#full_page_writes = on                  # recover from partial page writes#wal_buffers = 64kB                     # min 32kB#wal_writer_delay = 200ms               # 1-10000 milliseconds#commit_delay = 0                       # range 0-100000, in 
microseconds\n#commit_siblings = 5                    # range 1-1000# - Checkpoints -checkpoint_segments = 64                # in logfile segments, min 1, 16MB each#checkpoint_timeout = 5min              # range 30s-1h\n#checkpoint_completion_target = 0.9     # checkpoint target duration, 0.0 - 1.0#checkpoint_warning = 30s               # 0 is off# - Archiving -#archive_mode = off             # allows archiving to be done\n#archive_command = ''           # command to use to archive a logfile segment#archive_timeout = 0            # force a logfile segment switch after this#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS#------------------------------------------------------------------------------autovacuum = on                         # Enable autovacuum subprocess?  'on' \n2009/10/19 Matthew Wakeling <[email protected]>\nOn Sun, 18 Oct 2009, Scott Marlowe wrote:\n\nYou can only write data then commit it so fast to one drive, and that\nspeed is usually somewhere in the megabyte per second range.  450+150\nin 5 minutes is 120 Megs per second, that's pretty fast, but is likely\nthe max speed of a modern super fast 15k rpm drive.  If it's taking 20\nminutes then it's 30 Megs per second which is still really good if\nyou're in the middle of a busy afternoon and the db has other things\nto do.\n\n\nYou're out by a factor of 60. That's minutes, not seconds.\n\nMore relevant is the fact that Postgres will normally log changes in the WAL, effectively writing the data twice. As Euler said, the trick is to tell Postgres that noone else will need to see the data, so it can skip the WAL step:\n\n\n\nBEGIN;\nTRUNCATE TABLE foo;\nCOPY foo FROM ...;\nCOMMIT;\n\n\nI see upward of 100MB/s over here when I do this.\n\nMatthew\n\n-- \nPatron: \"I am looking for a globe of the earth.\"\nLibrarian: \"We have a table-top model over here.\"\nPatron: \"No, that's not good enough. Don't you have a life-size?\"\nLibrarian: (pause) \"Yes, but it's in use right now.\"\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 18 Dec 2009 12:46:37 +0000", "msg_from": "Sigurgeir Gunnarsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with \\copy from file" }, { "msg_contents": "On Fri, Dec 18, 2009 at 7:46 AM, Sigurgeir Gunnarsson\n<[email protected]> wrote:\n> I hope the issue is still open though I haven't replied to it before.\n>\n> Euler mentioned that I did not provide any details about my system. I'm\n> using version 8.3 and with most settings default on an old machine with 2 GB\n> of mem. The table definition is simple, four columns; id, value, x, y where\n> id is primary key and x, y are combined into an index.\n>\n> I'm not sure if it matters but unlike Euler's suggestion I'm using \\copy\n> instead of COPY. Regarding my comparison to MySQL, it is completely valid.\n> This is done on the same computer, using the same disk on the same platform.\n> From that I would derive that IO is not my problem, unless postgresql is\n> doing IO twice while MySQL only once.\n>\n> I guess my tables are InnoDB since that is the default type (or so I think).\n> BEGIN/COMMIT I did not find change much. Are there any other suggestions ?\n\nDid you read Matthew Wakeling's reply? Arranging to skip WAL will\nhelp a lot here. 
To do that, you need to either create or truncate\nthe table in the same transaction that does the COPY.\n\nThe problem with the MySQL comparison is that it's not really\nrelevant. It isn't that the PostgreSQL code just sucks and if we\nwrote it properly it would be as fast as MySQL. If that were the\ncase, everyone would be up in arms, and it would have been fixed long\nago. Rather, the problem is almost certainly that it's not an\napples-to-apples comparison. MySQL is probably doing something\ndifferent, such as perhaps not properly arranging for recovery if the\nsystem goes down in the middle of the copy, or just after it\ncompletes. But I don't know MySQL well enough to know exactly what\nthe difference is, and I'm not particularly interested in spending a\nlot of time figuring it out. I think you'll get that reaction from\nothers on this list as well, but of course that's up to them.\nEverybody here is a volunteer, of course, and generally our interest\nis principally PostgreSQL.\n\nOn the other hand, we can certainly give you lots of information about\nwhat PostgreSQL is doing and why that takes the amount of time that it\ndoes, or give you information on how you can find out more about what\nit's doing.\n\n...Robert\n", "msg_date": "Fri, 18 Dec 2009 10:23:01 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with \\copy from file" }, { "msg_contents": "The intention was never to talk down postgresql but rather trying to get\nsome explanation of this difference so that I could do the proper changes.\n\nAfter having read the link from Euler's post, which I oversaw, I have\nmanaged to shorten the import time. My problem was with the indexes. I was\nable to shorten the import time, of a 26 million line import, from 2 hours +\n(I gave up after that time) downto 12 minutes by dropping the indexes after\ntruncate and before copy.\n\nThis is what I was expecting and I'm therefore satisfied with the result.\n\nRegards, Sigurgeir\n\n2009/12/18 Robert Haas <[email protected]>\n\n> On Fri, Dec 18, 2009 at 7:46 AM, Sigurgeir Gunnarsson\n> <[email protected]> wrote:\n> > I hope the issue is still open though I haven't replied to it before.\n> >\n> > Euler mentioned that I did not provide any details about my system. I'm\n> > using version 8.3 and with most settings default on an old machine with 2\n> GB\n> > of mem. The table definition is simple, four columns; id, value, x, y\n> where\n> > id is primary key and x, y are combined into an index.\n> >\n> > I'm not sure if it matters but unlike Euler's suggestion I'm using \\copy\n> > instead of COPY. Regarding my comparison to MySQL, it is completely\n> valid.\n> > This is done on the same computer, using the same disk on the same\n> platform.\n> > From that I would derive that IO is not my problem, unless postgresql is\n> > doing IO twice while MySQL only once.\n> >\n> > I guess my tables are InnoDB since that is the default type (or so I\n> think).\n> > BEGIN/COMMIT I did not find change much. Are there any other suggestions\n> ?\n>\n> Did you read Matthew Wakeling's reply? Arranging to skip WAL will\n> help a lot here. To do that, you need to either create or truncate\n> the table in the same transaction that does the COPY.\n>\n> The problem with the MySQL comparison is that it's not really\n> relevant. It isn't that the PostgreSQL code just sucks and if we\n> wrote it properly it would be as fast as MySQL. 
If that were the\n> case, everyone would be up in arms, and it would have been fixed long\n> ago. Rather, the problem is almost certainly that it's not an\n> apples-to-apples comparison. MySQL is probably doing something\n> different, such as perhaps not properly arranging for recovery if the\n> system goes down in the middle of the copy, or just after it\n> completes. But I don't know MySQL well enough to know exactly what\n> the difference is, and I'm not particularly interested in spending a\n> lot of time figuring it out. I think you'll get that reaction from\n> others on this list as well, but of course that's up to them.\n> Everybody here is a volunteer, of course, and generally our interest\n> is principally PostgreSQL.\n>\n> On the other hand, we can certainly give you lots of information about\n> what PostgreSQL is doing and why that takes the amount of time that it\n> does, or give you information on how you can find out more about what\n> it's doing.\n>\n> ...Robert\n>\n\nThe intention was never to talk down postgresql but rather trying to get some explanation of this difference so that I could do the proper changes.After having read the link from Euler's post, which I oversaw, I have managed to shorten the import time. My problem was with the indexes. I was able to shorten the import time, of a 26 million line import, from 2 hours + (I gave up after that time) downto 12 minutes by dropping the indexes after truncate and before copy.\nThis is what I was expecting and I'm therefore satisfied with the result.Regards, Sigurgeir2009/12/18 Robert Haas <[email protected]>\nOn Fri, Dec 18, 2009 at 7:46 AM, Sigurgeir Gunnarsson\n<[email protected]> wrote:\n> I hope the issue is still open though I haven't replied to it before.\n>\n> Euler mentioned that I did not provide any details about my system. I'm\n> using version 8.3 and with most settings default on an old machine with 2 GB\n> of mem. The table definition is simple, four columns; id, value, x, y where\n> id is primary key and x, y are combined into an index.\n>\n> I'm not sure if it matters but unlike Euler's suggestion I'm using \\copy\n> instead of COPY. Regarding my comparison to MySQL, it is completely valid.\n> This is done on the same computer, using the same disk on the same platform.\n> From that I would derive that IO is not my problem, unless postgresql is\n> doing IO twice while MySQL only once.\n>\n> I guess my tables are InnoDB since that is the default type (or so I think).\n> BEGIN/COMMIT I did not find change much. Are there any other suggestions ?\n\nDid you read Matthew Wakeling's reply?  Arranging to skip WAL will\nhelp a lot here.  To do that, you need to either create or truncate\nthe table in the same transaction that does the COPY.\n\nThe problem with the MySQL comparison is that it's not really\nrelevant.   It isn't that the PostgreSQL code just sucks and if we\nwrote it properly it would be as fast as MySQL.  If that were the\ncase, everyone would be up in arms, and it would have been fixed long\nago.  Rather, the problem is almost certainly that it's not an\napples-to-apples comparison.  MySQL is probably doing something\ndifferent, such as perhaps not properly arranging for recovery if the\nsystem goes down in the middle of the copy, or just after it\ncompletes.  But I don't know MySQL well enough to know exactly what\nthe difference is, and I'm not particularly interested in spending a\nlot of time figuring it out.  
I think you'll get that reaction from\nothers on this list as well, but of course that's up to them.\nEverybody here is a volunteer, of course, and generally our interest\nis principally PostgreSQL.\n\nOn the other hand, we can certainly give you lots of information about\nwhat PostgreSQL is doing and why that takes the amount of time that it\ndoes, or give you information on how you can find out more about what\nit's doing.\n\n...Robert", "msg_date": "Fri, 18 Dec 2009 15:51:48 +0000", "msg_from": "Sigurgeir Gunnarsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with \\copy from file" }, { "msg_contents": "On Fri, Dec 18, 2009 at 10:51 AM, Sigurgeir Gunnarsson\n<[email protected]> wrote:\n> The intention was never to talk down postgresql but rather trying to get\n> some explanation of this difference so that I could do the proper changes.\n>\n> After having read the link from Euler's post, which I oversaw, I have\n> managed to shorten the import time. My problem was with the indexes. I was\n> able to shorten the import time, of a 26 million line import, from 2 hours +\n> (I gave up after that time) downto 12 minutes by dropping the indexes after\n> truncate and before copy.\n>\n> This is what I was expecting and I'm therefore satisfied with the result.\n\nAh ha! Well, it sounds like perhaps you have the answer to what was\ncausing the difference too, then. I'm not trying to be unhelpful,\njust trying to explain honestly why you might not get exactly the\nresponse you expect to MySQL comparisons - we only understand half of\nit.\n\n...Robert\n", "msg_date": "Fri, 18 Dec 2009 19:24:31 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with \\copy from file" } ]
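Putting the two techniques from this thread together (COPY into a table truncated in the same transaction so the data can skip WAL, and rebuilding secondary indexes after the load instead of maintaining them row by row), a minimal psql sketch might look like the following. The table, column, index and file names are only placeholders for the four-column table described earlier, and the WAL skip only applies while WAL archiving is off; the primary-key index, if any, is still maintained during the COPY unless its constraint is dropped as well.

BEGIN;
TRUNCATE TABLE measurements;                       -- same transaction as the COPY, so WAL can be skipped
DROP INDEX IF EXISTS measurements_x_y_idx;         -- rebuild once after the load instead of per row
\copy measurements (id, value, x, y) from '/path/to/data.csv' with csv
COMMIT;
CREATE INDEX measurements_x_y_idx ON measurements (x, y);
ANALYZE measurements;                              -- refresh planner statistics after the reload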
[ { "msg_contents": "Hi,\n\nI want to select the last contact of person via mail.\nMy sample database is build with the following shell-commands \n\n| createdb -U postgres test2\n| psql -U postgres test2 < mail_db.sql\n| mailtest.sh | psql -U postgres\n\nI call to get the answer\n\n| SELECT address, max(sent) from mail inner join\n| tomail on (mail.id=tomail.mail) group by address;\n\nThe result is ok, but it's to slow.\nThe query plan, see below, tells that there two seq scans.\nHowto transforms them into index scans?\n\npostgres ignores simple indexes on column sent.\nAn Index on two tables is not possible (if I understand the manual\ncorrectly).\n\nAny other idea howto speed up?\n\nCiao\n\nMichael\n\n===================\n\ntest2=# explain analyze SELECT address, max(sent) from mail inner join\ntomail on (mail.id=tomail.mail) group by address;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=36337.00..36964.32 rows=50186 width=20) (actual\ntime=3562.136..3610.238 rows=50000 loops=1)\n -> Hash Join (cost=14191.00..33837.00 rows=500000 width=20) (actual time=1043.537..2856.933 rows=500000 loops=1)\n Hash Cond: (tomail.mail = mail.id)\n -> Seq Scan on tomail (cost=0.00..8396.00 rows=500000 width=20) (actual time=0.014..230.264 rows=500000 loops=1)\n -> Hash (cost=7941.00..7941.00 rows=500000 width=8) (actual time=1042.996..1042.996 rows=500000 loops=1)\n -> Seq Scan on mail (cost=0.00..7941.00 rows=500000 width=8) (actual time=0.018..362.101 rows=500000 loops=1)\n Total runtime: 3629.449 ms\n(7 rows)", "msg_date": "Tue, 13 Oct 2009 10:59:03 +0200", "msg_from": "Michael Schwipps <[email protected]>", "msg_from_op": true, "msg_subject": "index on two tables or Howto speedup max/aggregate-function" }, { "msg_contents": "On Tue, Oct 13, 2009 at 9:59 AM, Michael Schwipps <[email protected]>wrote:\n\n> Hi,\n>\n> I want to select the last contact of person via mail.\n> My sample database is build with the following shell-commands\n>\n> | createdb -U postgres test2\n> | psql -U postgres test2 < mail_db.sql\n> | mailtest.sh | psql -U postgres\n>\n> I call to get the answer\n>\n> | SELECT address, max(sent) from mail inner join\n> | tomail on (mail.id=tomail.mail) group by address;\n>\n> you are missing vacuumdb -z test2\nafter mailtest.sh ..\n\n\n\n\n-- \nGJ\n\nOn Tue, Oct 13, 2009 at 9:59 AM, Michael Schwipps <[email protected]> wrote:\nHi,\n\nI want to select the last contact of person via mail.\nMy sample database is build with the following shell-commands\n\n| createdb -U postgres test2\n| psql -U postgres test2 < mail_db.sql\n| mailtest.sh | psql -U postgres\n\nI call to get the answer\n\n| SELECT address, max(sent) from mail inner join\n| tomail on (mail.id=tomail.mail) group by address;\nyou are missing vacuumdb -z test2after mailtest.sh .. -- GJ", "msg_date": "Tue, 13 Oct 2009 10:20:49 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index on two tables or Howto speedup\n\tmax/aggregate-function" }, { "msg_contents": "Hi,\n\nCREATE INDEX mail_id_sent_idx ON mail(id,sent)\n\nshould do the trick? 
Of course you can only replace one of the two \nscans by an index scan since there are no other conditions...\n\nJacques.\n\nAt 09:59 13/10/2009, Michael Schwipps wrote:\n>Hi,\n>\n>I want to select the last contact of person via mail.\n>My sample database is build with the following shell-commands\n>\n>| createdb -U postgres test2\n>| psql -U postgres test2 < mail_db.sql\n>| mailtest.sh | psql -U postgres\n>\n>I call to get the answer\n>\n>| SELECT address, max(sent) from mail inner join\n>| tomail on (mail.id=tomail.mail) group by address;\n>\n>The result is ok, but it's to slow.\n>The query plan, see below, tells that there two seq scans.\n>Howto transforms them into index scans?\n>\n>postgres ignores simple indexes on column sent.\n>An Index on two tables is not possible (if I understand the manual\n>correctly).\n>\n>Any other idea howto speed up?\n>\n>Ciao\n>\n>Michael\n>\n>===================\n>\n>test2=# explain analyze SELECT address, max(sent) from mail inner join\n>tomail on (mail.id=tomail.mail) group by address;\n> QUERY PLAN\n>-------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=36337.00..36964.32 rows=50186 width=20) (actual\n>time=3562.136..3610.238 rows=50000 loops=1)\n> -> Hash Join (cost=14191.00..33837.00 rows=500000 width=20) \n> (actual time=1043.537..2856.933 rows=500000 loops=1)\n> Hash Cond: (tomail.mail = mail.id)\n> -> Seq Scan on tomail (cost=0.00..8396.00 rows=500000 \n> width=20) (actual time=0.014..230.264 rows=500000 loops=1)\n> -> Hash (cost=7941.00..7941.00 rows=500000 width=8) \n> (actual time=1042.996..1042.996 rows=500000 loops=1)\n> -> Seq Scan on mail (cost=0.00..7941.00 \n> rows=500000 width=8) (actual time=0.018..362.101 rows=500000 loops=1)\n> Total runtime: 3629.449 ms\n>(7 rows)\n>\n>\n>\n>\n>--\n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 13 Oct 2009 11:10:15 +0100", "msg_from": "Jacques Caron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index on two tables or Howto speedup\n max/aggregate-function" } ]
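For anyone trying the two suggestions above against the sample schema from the original post, they amount to refreshing the statistics (which is what "vacuumdb -z" does) and creating the composite index, then re-checking the plan; table and column names below are the ones used in the thread:

VACUUM ANALYZE mail;
VACUUM ANALYZE tomail;

CREATE INDEX mail_id_sent_idx ON mail (id, sent);   -- the index suggested above

EXPLAIN ANALYZE
SELECT address, max(sent)
FROM mail
INNER JOIN tomail ON mail.id = tomail.mail
GROUP BY address;                                   -- only one of the two seq scans can become an index scan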
[ { "msg_contents": "Hi,\n\nI am running performance simulation against a DB. I want to randomly pull\ndifferent records from a large table. However the table has no columns that\nhold sequential integer values (1..MAX), i.e. the columns all have \"holes\"\n(due to earlier filtering). Also PG does not have a concept of an\nauto-increment pseudo-column like Oracle's \"rownum\". Any suggestions?\n\nThanks,\n\n-- Shaul\n\nHi,I am running performance simulation against a DB. I want to randomly pull different records from a large table. However the table has no columns that hold sequential integer values (1..MAX), i.e. the columns all have \"holes\" (due to earlier filtering). Also PG does not have a concept of an auto-increment pseudo-column like Oracle's \"rownum\". Any suggestions?\nThanks,-- Shaul", "msg_date": "Tue, 13 Oct 2009 17:17:10 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Getting a random row" }, { "msg_contents": "On Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar <[email protected]> wrote:\n\n> Hi,\n>\n> I am running performance simulation against a DB. I want to randomly pull\n> different records from a large table. However the table has no columns that\n> hold sequential integer values (1..MAX), i.e. the columns all have \"holes\"\n> (due to earlier filtering).\n>\nwhat do yo umean ? you can restrict range of integer column (or any other\ntype) with constraints, for instance CHECK foo( a between 1 and 100);\n\n\n\n> Also PG does not have a concept of an auto-increment pseudo-column like\n> Oracle's \"rownum\". Any suggestions?\n>\nnot true - it has sequences, and pseudo type serial. Rtfm!.\n\n\n\n-- \nGJ\n\nOn Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar <[email protected]> wrote:\nHi,I am running performance simulation against a DB. I want to randomly pull different records from a large table. However the table has no columns that hold sequential integer values (1..MAX), i.e. the columns all have \"holes\" (due to earlier filtering). \nwhat do yo umean ? you can restrict range of integer column (or any other type) with constraints, for instance CHECK foo( a between 1 and 100);  \nAlso PG does not have a concept of an auto-increment pseudo-column like Oracle's \"rownum\". Any suggestions?not true - it has sequences, and pseudo type serial. Rtfm!.\n -- GJ", "msg_date": "Tue, 13 Oct 2009 16:19:40 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "Shaul Dar, 13.10.2009 17:17:\n> Also PG does not have a concept of an auto-increment pseudo-column\n> like Oracle's \"rownum\". Any suggestions?\n\nYes it does (at least 8.4)\n\nSELECT row_number() over(), the_other_columns...\nFROM your_table\n\nSo you could do something like:\n\nSELECT * \nFROM (\n SELECT row_number() over() as rownum, \n the_other_columns...\n FROM your_table\n) t \nWHERE t.rownum = a_random_integer_value_lower_than_rowcount;\n\nThomas\n\n\n\n", "msg_date": "Tue, 13 Oct 2009 17:29:40 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "Sorry, I guess I wasn't clear.\nI have an existing table in my DB, and it doesn't have a column with serial\nvalues (actually it did originally, but due to later deletions of about 2/3\nof the rows the column now has \"holes\"). 
I realize I could add a new serial\ncolumn, but prefer not to change table + the new column would also become\nnonconsecutive after further deletions. The nice thing about Oracle's\n\"rownum\" is that it' a pseudo-column\", not a real one, and AFAIK is always\nvalid.\n\nSuggestions?\n\n-- Shaul\n\n2009/10/13 Grzegorz Jaśkiewicz <[email protected]>\n\n>\n>\n> On Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> I am running performance simulation against a DB. I want to randomly pull\n>> different records from a large table. However the table has no columns that\n>> hold sequential integer values (1..MAX), i.e. the columns all have \"holes\"\n>> (due to earlier filtering).\n>>\n> what do yo umean ? you can restrict range of integer column (or any other\n> type) with constraints, for instance CHECK foo( a between 1 and 100);\n>\n>\n>> Also PG does not have a concept of an auto-increment pseudo-column like\n>> Oracle's \"rownum\". Any suggestions?\n>>\n> not true - it has sequences, and pseudo type serial. Rtfm!.\n>\n>\n>\n> --\n> GJ\n>\n\nSorry, I guess I wasn't clear.I have an existing table in my DB, and it doesn't have a column with serial values (actually it did originally, but due to later deletions of about 2/3 of the rows the column now has \"holes\"). I realize I could add a new serial column, but prefer not to change table + the new column would also become nonconsecutive after further deletions. The nice thing about Oracle's \"rownum\" is that it' a pseudo-column\", not a real one, and AFAIK is always valid.\nSuggestions?-- Shaul2009/10/13 Grzegorz Jaśkiewicz <[email protected]>\nOn Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar <[email protected]> wrote:\nHi,I am running performance simulation against a DB. I want to randomly pull different records from a large table. However the table has no columns that hold sequential integer values (1..MAX), i.e. the columns all have \"holes\" (due to earlier filtering). \nwhat do yo umean ? you can restrict range of integer column (or any other type) with constraints, for instance CHECK foo( a between 1 and 100); \nAlso PG does not have a concept of an auto-increment pseudo-column like Oracle's \"rownum\". Any suggestions?not true - it has sequences, and pseudo type serial. Rtfm!.\n\n -- GJ", "msg_date": "Tue, 13 Oct 2009 17:30:52 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "\nOn Oct 13, 2009, at 11:19 , Grzegorz Jaśkiewicz wrote:\n\n> On Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar <[email protected]> wrote:\n>\n>\n>> Also PG does not have a concept of an auto-increment pseudo-column \n>> like\n>> Oracle's \"rownum\". Any suggestions?\n>>\n> not true - it has sequences, and pseudo type serial. 
Rtfm!.\n\nAIUI, rownum applies numbering to output rows in a SELECT statement, \nrather than some actual column of the table, which is likely what the \nOP is getting at.\n\nhttp://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Tue, 13 Oct 2009 11:33:51 -0400", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "2009/10/13 Shaul Dar <[email protected]>\n\n> Sorry, I guess I wasn't clear.\n> I have an existing table in my DB, and it doesn't have a column with serial\n> values (actually it did originally, but due to later deletions of about 2/3\n> of the rows the column now has \"holes\"). I realize I could add a new serial\n> column, but prefer not to change table + the new column would also become\n> nonconsecutive after further deletions. The nice thing about Oracle's\n> \"rownum\" is that it' a pseudo-column\", not a real one, and AFAIK is always\n> valid.\n>\nchange the default of that column to use sequence.\nFor instance, lookup CREATE SEQUENCE in manual, and ALTER TABLE .. SET\nDEFAULT ..\n\nfor example of how it looks, just create table foo(a serial), and check its\ndefinition with \\d+ foo\n\n\n\n-- \nGJ\n\n2009/10/13 Shaul Dar <[email protected]>\nSorry, I guess I wasn't clear.I have an existing table in my DB, and it doesn't have a column with serial values (actually it did originally, but due to later deletions of about 2/3 of the rows the column now has \"holes\"). I realize I could add a new serial column, but prefer not to change table + the new column would also become nonconsecutive after further deletions. The nice thing about Oracle's \"rownum\" is that it' a pseudo-column\", not a real one, and AFAIK is always valid.\nchange the default of that column to use sequence.For instance, lookup CREATE SEQUENCE in manual, and ALTER TABLE .. SET DEFAULT ..for example of how it looks, just create table foo(a serial), and check its definition with \\d+ foo  \n-- GJ", "msg_date": "Tue, 13 Oct 2009 16:39:43 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "Michael,\n\nYou are right. I didn't remember the semantics, and Oracle's rownum would\nnot have been helpful. But the new row_number() in 8.4 would probably do the\ntrick (though I use 8.3.7 :-( )\n\n-- Shaul\n\n2009/10/13 Michael Glaesemann <[email protected]>\n\n>\n> On Oct 13, 2009, at 11:19 , Grzegorz Jaśkiewicz wrote:\n>\n> On Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar <[email protected]> wrote:\n>>\n>>\n>> Also PG does not have a concept of an auto-increment pseudo-column like\n>>> Oracle's \"rownum\". Any suggestions?\n>>>\n>>> not true - it has sequences, and pseudo type serial. Rtfm!.\n>>\n>\n> AIUI, rownum applies numbering to output rows in a SELECT statement, rather\n> than some actual column of the table, which is likely what the OP is getting\n> at.\n>\n> http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html\n>\n> Michael Glaesemann\n> grzm seespotcode net\n>\n>\n>\n>\n\nMichael,You are right. I didn't remember the semantics, and Oracle's rownum would not have been helpful. 
But the new row_number() in 8.4 would probably do the trick (though I use 8.3.7 :-( )\n-- Shaul\n2009/10/13 Michael Glaesemann <[email protected]>\n\nOn Oct 13, 2009, at 11:19 , Grzegorz Jaśkiewicz wrote:\n\n\nOn Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar <[email protected]> wrote:\n\n\n\nAlso PG does not have a concept of an auto-increment pseudo-column like\nOracle's \"rownum\". Any suggestions?\n\n\nnot true - it has sequences, and pseudo type serial. Rtfm!.\n\n\nAIUI, rownum applies numbering to output rows in a SELECT statement, rather than some actual column of the table, which is likely what the OP is getting at.\n\nhttp://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html\n\nMichael Glaesemann\ngrzm seespotcode net", "msg_date": "Tue, 13 Oct 2009 17:42:21 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "2009/10/13 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> 2009/10/13 Shaul Dar <[email protected]>\n>>\n>> Sorry, I guess I wasn't clear.\n>> I have an existing table in my DB, and it doesn't have a column with\n>> serial values (actually it did originally, but due to later deletions of\n>> about 2/3 of the rows the column now has \"holes\"). I realize I could add a\n>> new serial column, but prefer not to change table + the new column would\n>> also become nonconsecutive after further deletions. The nice thing about\n>> Oracle's \"rownum\" is that it' a pseudo-column\", not a real one, and AFAIK is\n>> always valid.\n>\n> change the default of that column to use sequence.\n> For instance, lookup CREATE SEQUENCE in manual, and ALTER TABLE .. SET\n> DEFAULT ..\n>\n> for example of how it looks, just create table foo(a serial), and check its\n> definition with \\d+ foo\n\nThis is not really what he's trying to do. Oracle's rownum has\ncompletely different semantics than this.\n\nBut, on 8.4, a window function should do it.\n\n...Robert\n", "msg_date": "Tue, 13 Oct 2009 12:56:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "2009/10/13 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> 2009/10/13 Shaul Dar <[email protected]>\n>>\n>> Sorry, I guess I wasn't clear.\n>> I have an existing table in my DB, and it doesn't have a column with\n>> serial values (actually it did originally, but due to later deletions of\n>> about 2/3 of the rows the column now has \"holes\"). I realize I could add a\n>> new serial column, but prefer not to change table + the new column would\n>> also become nonconsecutive after further deletions. The nice thing about\n>> Oracle's \"rownum\" is that it' a pseudo-column\", not a real one, and AFAIK is\n>> always valid.\n>\n> change the default of that column to use sequence.\n> For instance, lookup CREATE SEQUENCE in manual, and ALTER TABLE .. 
SET\n> DEFAULT ..\n>\n> for example of how it looks, just create table foo(a serial), and check its\n> definition with \\d+ foo\n>\n>\n>\n> --\n> GJ\n>\n\n\nYou could emulate rownum (aka rank) using a TEMPORARY sequence applied\nto your result set.\n\nhttp://www.postgresql.org/docs/8.3/interactive/sql-createsequence.html\n\nNot sure if this is what you're after though?\n", "msg_date": "Tue, 13 Oct 2009 17:21:10 -0700", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "On Tue, Oct 13, 2009 at 9:17 AM, Shaul Dar <[email protected]> wrote:\n> Hi,\n>\n> I am running performance simulation against a DB. I want to randomly pull\n> different records from a large table. However the table has no columns that\n> hold sequential integer values (1..MAX), i.e. the columns all have \"holes\"\n> (due to earlier filtering). Also PG does not have a concept of an\n> auto-increment pseudo-column like Oracle's \"rownum\". Any suggestions?\n\nIf what you're trying to do is emulate a real world app which randomly\ngrabs rows, then you want to setup something ahead of time that has a\npseudo random order and not rely on using anything like order by\nrandom() limit 1 or anything like that. Easiest way is to do\nsomething like:\n\nselect id into randomizer from maintable order by random();\n\nthen use a cursor to fetch from the table to get \"random\" rows from\nthe real table.\n", "msg_date": "Tue, 13 Oct 2009 19:18:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "2009/10/14 Scott Marlowe <[email protected]>:\n>\n> If what you're trying to do is emulate a real world app which randomly\n> grabs rows, then you want to setup something ahead of time that has a\n> pseudo random order and not rely on using anything like order by\n> random() limit 1 or anything like that.  Easiest way is to do\n> something like:\n>\n> select id into randomizer from maintable order by random();\n>\n> then use a cursor to fetch from the table to get \"random\" rows from\n> the real table.\n>\n>\n\nWhy not just do something like:\n\nSELECT thisfield, thatfield\nFROM my_table\nWHERE thisfield IS NOT NULL\nORDER BY RANDOM()\nLIMIT 1;\n\nThom\n", "msg_date": "Wed, 14 Oct 2009 07:58:40 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "2009/10/14 Thom Brown <[email protected]>:\n> 2009/10/14 Scott Marlowe <[email protected]>:\n>>\n>> If what you're trying to do is emulate a real world app which randomly\n>> grabs rows, then you want to setup something ahead of time that has a\n>> pseudo random order and not rely on using anything like order by\n>> random() limit 1 or anything like that.  Easiest way is to do\n>> something like:\n>>\n>> select id into randomizer from maintable order by random();\n>>\n>> then use a cursor to fetch from the table to get \"random\" rows from\n>> the real table.\n>>\n>>\n>\n> Why not just do something like:\n>\n> SELECT thisfield, thatfield\n> FROM my_table\n> WHERE thisfield IS NOT NULL\n> ORDER BY RANDOM()\n> LIMIT 1;\n>\n\nthis works well on small tables. 
On large tables this query is extremely slow.\n\nregards\nPavel\n\n> Thom\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 14 Oct 2009 09:20:33 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "On Wed, Oct 14, 2009 at 1:20 AM, Pavel Stehule <[email protected]> wrote:\n> 2009/10/14 Thom Brown <[email protected]>:\n>> 2009/10/14 Scott Marlowe <[email protected]>:\n>>>\n>>> If what you're trying to do is emulate a real world app which randomly\n>>> grabs rows, then you want to setup something ahead of time that has a\n>>> pseudo random order and not rely on using anything like order by\n>>> random() limit 1 or anything like that.  Easiest way is to do\n>>> something like:\n>>>\n>>> select id into randomizer from maintable order by random();\n>>>\n>>> then use a cursor to fetch from the table to get \"random\" rows from\n>>> the real table.\n>>>\n>>>\n>>\n>> Why not just do something like:\n>>\n>> SELECT thisfield, thatfield\n>> FROM my_table\n>> WHERE thisfield IS NOT NULL\n>> ORDER BY RANDOM()\n>> LIMIT 1;\n>>\n>\n> this works well on small tables. On large tables this query is extremely slow.\n\nExactly. If you're running that query over and over your \"performance\ntest\" is on how well pgsql can run that very query. :) Anything else\nyou do is likely to be noise by comparison.\n", "msg_date": "Wed, 14 Oct 2009 01:30:56 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" }, { "msg_contents": "2009/10/14 Scott Marlowe <[email protected]>\n\n> On Wed, Oct 14, 2009 at 1:20 AM, Pavel Stehule <[email protected]>\n> wrote:\n> > 2009/10/14 Thom Brown <[email protected]>:\n> >> 2009/10/14 Scott Marlowe <[email protected]>:\n> >> Why not just do something like:\n> >>\n> >> SELECT thisfield, thatfield\n> >> FROM my_table\n> >> WHERE thisfield IS NOT NULL\n> >> ORDER BY RANDOM()\n> >> LIMIT 1;\n> >>\n> >\n> > this works well on small tables. On large tables this query is extremely\n> slow.\n>\n> Exactly. If you're running that query over and over your \"performance\n> test\" is on how well pgsql can run that very query. :) Anything else\n> you do is likely to be noise by comparison.\n>\n>\nWhat I am using often to get a set of random rows is\nSELECT thisfield, thatfield\nFROM my_table\nWHERE random() < rowsneeded::float8/(select count * from my_table);\nOf course it does not give exact number of rows, but close enough for me.\nAs of taking one row I'd try:\nselect * from (\nSELECT thisfield, thatfield\nFROM my_table\nWHERE random() < 100.0/(select count * from my_table))\na order by random() limit 1\n\nI'd say probability of returning no rows is quite low and query can be\nextended even more by returning first row from table in this rare case.\n\n2009/10/14 Scott Marlowe <[email protected]>\nOn Wed, Oct 14, 2009 at 1:20 AM, Pavel Stehule <[email protected]> wrote:\n> 2009/10/14 Thom Brown <[email protected]>:\n>> 2009/10/14 Scott Marlowe <[email protected]>:\n\n>> Why not just do something like:\n>>\n>> SELECT thisfield, thatfield\n>> FROM my_table\n>> WHERE thisfield IS NOT NULL\n>> ORDER BY RANDOM()\n>> LIMIT 1;\n>>\n>\n> this works well on small tables. On large tables this query is extremely slow.\n\nExactly.  
If you're running that query over and over your \"performance\ntest\" is on how well pgsql can run that very query. :)  Anything else\nyou do is likely to be noise by comparison.\nWhat I am using often to get a set of random rows is\n\nSELECT thisfield, thatfield\n FROM my_table\n WHERE random() < rowsneeded::float8/(select count * from my_table);Of course it does not give exact number of rows, but close enough for me.As of taking one row I'd try:select * from (SELECT thisfield, thatfield\n\n\n FROM my_table\n\n WHERE random() < 100.0/(select count * from my_table))a order by random() limit 1\nI'd say probability of returning no rows is quite low and query can be extended even more by returning first row from table in this rare case.", "msg_date": "Wed, 14 Oct 2009 18:03:01 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting a random row" } ]
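The random()-filter suggestion at the end of this thread, written out as runnable SQL (with count(*) spelled out, and the placeholder table and column names used in the thread): the inner filter keeps roughly 100 random candidate rows, so the outer ORDER BY random() only sorts that small set instead of the whole table. In the rare case that no candidate survives, the query simply returns zero rows and can be retried.

SELECT thisfield, thatfield
FROM (
    SELECT thisfield, thatfield
    FROM my_table
    WHERE random() < 100.0 / (SELECT count(*) FROM my_table)
) AS candidates
ORDER BY random()
LIMIT 1;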
[ { "msg_contents": "Hi\n\nThis query is doing a sequential scan on the child partitions even\nthough indexes on all constrained columns are present\n\nThe box is very lightly loaded (8 core 15K 6x300G Raid 10 disks)\n\n\n\n explain analyze\n select thedate,sent.theboxid_id,sub_box_id,box_num,sum(summcount) as\nevent_count,'ACC'\n from masterdevice_type_daily_warehousedim a,\nmasterdevice_tr_theboxid sent where thedate > '2009-10-06' and box_num\nin (\n select distinct box_num from\nthemodule_type_daily where theboxid like 'val%' and thedate >\ncurrent_timestamp - interval '8 days')\n and a.theboxid_id = sent.theboxid_id and sent.theboxid_id in\n(select theboxid_id from masterdevice_tr_theboxid where theboxid like\n'val%')\n group by thedate,sent.theboxid_id,sub_box_id,box_num;\n\n\nHashAggregate (cost=2066083.46..2066101.39 rows=1434 width=32)\n(actual time=230503.792..230503.979 rows=185 loops=1)\n -> Hash IN Join (cost=1246.83..2066065.54 rows=1434 width=32)\n(actual time=109.903..230257.929 rows=75196 loops=1)\n Hash Cond: (\"outer\".box_num = \"inner\".box_num)\n -> Hash Join (cost=28.86..2063399.16 rows=286815 width=32)\n(actual time=31.437..4619416.805 rows=19430638 loops=1)\n Hash Cond: (\"outer\".theboxid_id = \"inner\".theboxid_id)\n -> Append (cost=1.67..1745007.76 rows=63099210\nwidth=32) (actual time=25.792..-17410926.763 rows=63095432 loops=1)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim a (cost=1.67..14.04 rows=190\nwidth=32) (actual time=0.064..0.064 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_dim_idx1 (cost=0.00..1.67 rows=190 width=0)\n(actual time=0.060..0.060 rows=0 loops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Index Scan using\nmasterdevice_type_daily_warehousedim_2009_10_06_thedate on\nmasterdevice_type_daily_warehousedim_2009_10_06 a (cost=0.00..2.01\nrows=1 width=32) (actual time=22.933..22.933 rows=0 loops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_07 a\n(cost=0.00..296412.20 rows=10698736 width=32) (actual\ntime=2.792..4426510.569 rows=10700096 loops=1)\n Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_08 a\n(cost=0.00..293246.17 rows=10584814 width=32) (actual\ntime=11.525..22032843.754 rows=10585494 loops=1)\n Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_09 a\n(cost=0.00..283875.83 rows=10247586 width=32) (actual\ntime=7.859..16916.509 rows=10246536 loops=1)\n Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_10 a\n(cost=0.00..233267.99 rows=8427839 width=32) (actual\ntime=0.036..4411163.299 rows=8426631 loops=1)\n Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_11 a\n(cost=0.00..188678.36 rows=6844269 width=32) (actual\ntime=0.042..8808635.489 rows=6845416 loops=1)\n Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_12 a\n(cost=0.00..224034.12 rows=8123690 width=32) (actual\ntime=0.035..13873.990 rows=8123671 loops=1)\n Filter: (thedate > '2009-10-06\n00:00:00'::timestamp 
without time zone)\n -> Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_13 a\n(cost=0.00..225224.31 rows=8168665 width=32) (actual\ntime=0.047..12855.608 rows=8167588 loops=1)\n Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_14 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.047..0.047 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_14_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.043..0.043 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_15 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.021..0.021 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_15_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_16 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.031..0.031 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_16_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.030..0.030 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_17 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_17_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_18 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_18_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_19 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.023..0.023 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_19_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.022..0.022 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_20 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.024..0.024 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_20_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.023..0.023 rows=0\nloops=1)\n Index Cond: (thedate > 
'2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_21 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.018..0.018 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_21_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.016..0.016 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_22 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.019..0.019 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_22_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.017..0.017 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_23 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_23_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_24 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.025..0.025 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_24_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.023..0.023 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_25 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_25_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.018..0.018 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_26 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.029..0.029 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_26_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.028..0.028 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_27 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.019..0.019 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_27_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.017..0.017 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_28 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.052..0.052 rows=0 loops=1)\n Recheck 
Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_28_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.050..0.050 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_29 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.019..0.019 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_29_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.017..0.017 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_30 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_30_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.018..0.018 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_31 a (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_31_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.018..0.018 rows=0\nloops=1)\n Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n -> Hash (cost=27.19..27.19 rows=2 width=8) (actual\ntime=5.609..5.609 rows=3 loops=1)\n -> Nested Loop (cost=15.51..27.19 rows=2\nwidth=8) (actual time=5.568..5.597 rows=3 loops=1)\n -> Unique (cost=15.51..15.52 rows=2\nwidth=4) (actual time=0.157..0.165 rows=3 loops=1)\n -> Sort (cost=15.51..15.52 rows=2\nwidth=4) (actual time=0.156..0.158 rows=3 loops=1)\n Sort Key:\nmasterdevice_tr_theboxid.theboxid_id\n -> Seq Scan on\nmasterdevice_tr_theboxid (cost=0.00..15.50 rows=2 width=4) (actual\ntime=0.061..0.065 rows=3 loops=1)\n Filter: ((theboxid)::text\n~~ 'val%'::text)\n -> Index Scan using\nmasterdevice_tr_theboxid_pk on masterdevice_tr_theboxid sent\n(cost=0.00..5.82 rows=1 width=4) (actual time=1.804..1.805 rows=1\nloops=3)\n Index Cond: (sent.theboxid_id =\n\"outer\".theboxid_id)\n -> Hash (cost=1217.97..1217.97 rows=1 width=8) (actual\ntime=70.198..70.198 rows=29 loops=1)\n -> Unique (cost=1217.85..1217.96 rows=1 width=8)\n(actual time=69.909..70.176 rows=29 loops=1)\n -> Sort (cost=1217.85..1217.90 rows=21 width=8)\n(actual time=69.907..70.015 rows=325 loops=1)\n Sort Key: themodule_type_daily.box_num\n -> Bitmap Heap Scan on\nthemodule_type_daily (cost=98.51..1217.39 rows=21 width=8) (actual\ntime=43.474..69.360 rows=325 loops=1)\n Recheck Cond: (thedate > (now() - '8\ndays'::interval))\n Filter: ((theboxid)::text ~~ 'val%'::text)\n -> Bitmap Index Scan on\nthemodule_dn_tr_idx1 (cost=0.00..98.51 rows=4144 width=0) (actual\ntime=25.753..25.753 rows=1230 loops=1)\n Index Cond: (thedate > (now() -\n'8 days'::interval))\n Total runtime: 230512.670 ms\n", "msg_date": "Wed, 14 Oct 2009 15:45:01 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "sequential scan on child partition tables" } ]
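No follow-up is visible for this thread, so the sketch below is only the usual first checks for a partitioned plan like the one above, not a diagnosis. When the date filter matches nearly every row of a child partition, a sequential scan of that child is the expected plan; what may still help is confirming that constraint exclusion is enabled and that the more selective join column is indexed on each child. The partition name is copied from the plan and the index name is made up:

SHOW constraint_exclusion;    -- should not be 'off' when querying partitioned tables

\d+ masterdevice_type_daily_warehousedim_2009_10_07    -- check the child's CHECK constraint and indexes

CREATE INDEX masterdevice_tdw_2009_10_07_theboxid_idx
    ON masterdevice_type_daily_warehousedim_2009_10_07 (theboxid_id);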
[ { "msg_contents": "Hi\n\nThis query is doing a sequential scan on the child partitions even\nthough indexes on all constrained columns are present\n\nThe box is very lightly loaded (8 core 15K 6x300G Raid 10 disks)\n\n\n\n explain analyze\n select thedate,sent.theboxid_id,sub_box_id,box_num,sum(summcount) as\nevent_count,'ACC'\n       from masterdevice_type_daily_warehousedim a,\nmasterdevice_tr_theboxid sent where thedate > '2009-10-06' and box_num\nin (\n                       select distinct box_num from\nthemodule_type_daily where theboxid like 'val%' and thedate >\ncurrent_timestamp - interval '8 days')\n       and a.theboxid_id = sent.theboxid_id and sent.theboxid_id in\n(select theboxid_id from masterdevice_tr_theboxid where theboxid like\n'val%')\n       group by thedate,sent.theboxid_id,sub_box_id,box_num;\n\n\nHashAggregate  (cost=2066083.46..2066101.39 rows=1434 width=32)\n(actual time=230503.792..230503.979 rows=185 loops=1)\n  ->  Hash IN Join  (cost=1246.83..2066065.54 rows=1434 width=32)\n(actual time=109.903..230257.929 rows=75196 loops=1)\n        Hash Cond: (\"outer\".box_num = \"inner\".box_num)\n        ->  Hash Join  (cost=28.86..2063399.16 rows=286815 width=32)\n(actual time=31.437..4619416.805 rows=19430638 loops=1)\n              Hash Cond: (\"outer\".theboxid_id = \"inner\".theboxid_id)\n              ->  Append  (cost=1.67..1745007.76 rows=63099210\nwidth=32) (actual time=25.792..-17410926.763 rows=63095432 loops=1)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim a  (cost=1.67..14.04 rows=190\nwidth=32) (actual time=0.064..0.064 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_dim_idx1  (cost=0.00..1.67 rows=190 width=0)\n(actual time=0.060..0.060 rows=0 loops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Index Scan using\nmasterdevice_type_daily_warehousedim_2009_10_06_thedate on\nmasterdevice_type_daily_warehousedim_2009_10_06 a  (cost=0.00..2.01\nrows=1 width=32) (actual time=22.933..22.933 rows=0 loops=1)\n                          Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_07 a\n(cost=0.00..296412.20 rows=10698736 width=32) (actual\ntime=2.792..4426510.569 rows=10700096 loops=1)\n                          Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_08 a\n(cost=0.00..293246.17 rows=10584814 width=32) (actual\ntime=11.525..22032843.754 rows=10585494 loops=1)\n                          Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_09 a\n(cost=0.00..283875.83 rows=10247586 width=32) (actual\ntime=7.859..16916.509 rows=10246536 loops=1)\n                          Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_10 a\n(cost=0.00..233267.99 rows=8427839 width=32) (actual\ntime=0.036..4411163.299 rows=8426631 loops=1)\n                          Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Seq Scan 
on\nmasterdevice_type_daily_warehousedim_2009_10_11 a\n(cost=0.00..188678.36 rows=6844269 width=32) (actual\ntime=0.042..8808635.489 rows=6845416 loops=1)\n                          Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_12 a\n(cost=0.00..224034.12 rows=8123690 width=32) (actual\ntime=0.035..13873.990 rows=8123671 loops=1)\n                          Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Seq Scan on\nmasterdevice_type_daily_warehousedim_2009_10_13 a\n(cost=0.00..225224.31 rows=8168665 width=32) (actual\ntime=0.047..12855.608 rows=8167588 loops=1)\n                          Filter: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_14 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.047..0.047 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_14_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.043..0.043 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_15 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.021..0.021 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_15_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_16 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.031..0.031 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_16_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.030..0.030 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_17 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_17_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_18 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_18_thedate\n(cost=0.00..1.67 
rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_19 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.023..0.023 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_19_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.022..0.022 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_20 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.024..0.024 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_20_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.023..0.023 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_21 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.018..0.018 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_21_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.016..0.016 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_22 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.019..0.019 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_22_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.017..0.017 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_23 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_23_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.019..0.019 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_24 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.025..0.025 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_24_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.023..0.023 rows=0\nloops=1)\n                   
             Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_25 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_25_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.018..0.018 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_26 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.029..0.029 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_26_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.028..0.028 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_27 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.019..0.019 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_27_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.017..0.017 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_28 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.052..0.052 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_28_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.050..0.050 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_29 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.019..0.019 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_29_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.017..0.017 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_30 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_30_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.018..0.018 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without 
time zone)\n                    ->  Bitmap Heap Scan on\nmasterdevice_type_daily_warehousedim_2009_10_31 a  (cost=1.67..14.04\nrows=190 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n                          Recheck Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n                          ->  Bitmap Index Scan on\nmasterdevice_type_daily_warehousedim_2009_10_31_thedate\n(cost=0.00..1.67 rows=190 width=0) (actual time=0.018..0.018 rows=0\nloops=1)\n                                Index Cond: (thedate > '2009-10-06\n00:00:00'::timestamp without time zone)\n              ->  Hash  (cost=27.19..27.19 rows=2 width=8) (actual\ntime=5.609..5.609 rows=3 loops=1)\n                    ->  Nested Loop  (cost=15.51..27.19 rows=2\nwidth=8) (actual time=5.568..5.597 rows=3 loops=1)\n                          ->  Unique  (cost=15.51..15.52 rows=2\nwidth=4) (actual time=0.157..0.165 rows=3 loops=1)\n                                ->  Sort  (cost=15.51..15.52 rows=2\nwidth=4) (actual time=0.156..0.158 rows=3 loops=1)\n                                      Sort Key:\nmasterdevice_tr_theboxid.theboxid_id\n                                      ->  Seq Scan on\nmasterdevice_tr_theboxid  (cost=0.00..15.50 rows=2 width=4) (actual\ntime=0.061..0.065 rows=3 loops=1)\n                                            Filter: ((theboxid)::text\n~~ 'val%'::text)\n                          ->  Index Scan using\nmasterdevice_tr_theboxid_pk on masterdevice_tr_theboxid sent\n(cost=0.00..5.82 rows=1 width=4) (actual time=1.804..1.805 rows=1\nloops=3)\n                                Index Cond: (sent.theboxid_id =\n\"outer\".theboxid_id)\n        ->  Hash  (cost=1217.97..1217.97 rows=1 width=8) (actual\ntime=70.198..70.198 rows=29 loops=1)\n              ->  Unique  (cost=1217.85..1217.96 rows=1 width=8)\n(actual time=69.909..70.176 rows=29 loops=1)\n                    ->  Sort  (cost=1217.85..1217.90 rows=21 width=8)\n(actual time=69.907..70.015 rows=325 loops=1)\n                          Sort Key: themodule_type_daily.box_num\n                          ->  Bitmap Heap Scan on\nthemodule_type_daily  (cost=98.51..1217.39 rows=21 width=8) (actual\ntime=43.474..69.360 rows=325 loops=1)\n                                Recheck Cond: (thedate > (now() - '8\ndays'::interval))\n                                Filter: ((theboxid)::text ~~ 'val%'::text)\n                                ->  Bitmap Index Scan on\nthemodule_dn_tr_idx1  (cost=0.00..98.51 rows=4144 width=0) (actual\ntime=25.753..25.753 rows=1230 loops=1)\n                                      Index Cond: (thedate > (now() -\n'8 days'::interval))\n Total runtime: 230512.670 ms\n", "msg_date": "Wed, 14 Oct 2009 19:31:04 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "sequential scan on child partition tables" }, { "msg_contents": "Anj Adu <[email protected]> writes:\n> This query is doing a sequential scan on the child partitions even\n> though indexes on all constrained columns are present\n\nIt looks to me like it's doing exactly what it is supposed to, ie,\nindexscan on the partitions where it would help and seqscans on the\npartitions where it wouldn't. 
Indexscan is not better than seqscan\nfor retrieving all or most of a table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Oct 2009 01:15:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequential scan on child partition tables " }, { "msg_contents": "This appears to be a bug in the optimizer with respect to planning\nqueries involving child partitions. It is clear that \"any\" index is\nbeing ignored even if the selectivity is high. 
time=26.179..26.179 rows=607 loops=29)\n>                                             Index Cond:\n> (a.source_ip_num = \"outer\".source_ip_num)\n>                           ->  Hash  (cost=15.50..15.50 rows=2\n> width=4) (actual time=0.087..0.087 rows=3 loops=1)\n>                                 ->  Seq Scan on pix_tr_sentryid sent\n> (cost=0.00..15.50 rows=2 width=4) (actual time=0.061..0.067 rows=3\n> loops=1)\n>                                       Filter: ((sentryid)::text ~~\n> 'edmc%'::text)\n>         ->  Subquery Scan \"*SELECT* 2\"  (cost=9219.21..9219.44\n> rows=10 width=32) (actual time=54867.609..54867.673 rows=30 loops=1)\n>               ->  HashAggregate  (cost=9219.21..9219.34 rows=10\n> width=32) (actual time=54867.605..54867.636 rows=30 loops=1)\n>                     ->  Hash Join  (cost=1251.08..9219.09 rows=10\n> width=32) (actual time=36.722..54826.062 rows=12975 loops=1)\n>                           Hash Cond: (\"outer\".sentryid_id =\n> \"inner\".sentryid_id)\n>                           ->  Nested Loop  (cost=1235.58..9192.58\n> rows=2180 width=32) (actual time=36.661..54800.027 rows=19624 loops=1)\n>                                 ->  Unique  (cost=1218.94..1219.05\n> rows=1 width=8) (actual time=1.807..2.153 rows=29 loops=1)\n>                                       ->  Sort\n> (cost=1218.94..1219.00 rows=21 width=8) (actual time=1.805..1.939\n> rows=307 loops=1)\n>                                             Sort Key: ssa_tr_dy.source_ip_num\n>                                             ->  Bitmap Heap Scan on\n> ssa_tr_dy  (cost=98.52..1218.48 rows=21 width=8) (actual\n> time=0.739..1.478 rows=307 loops=1)\n>                                                   Recheck Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                                   Filter:\n> ((sentryid)::text ~~ 'edmc%'::text)\n>                                                   ->  Bitmap Index\n> Scan on ssa_dn_tr_idx1  (cost=0.00..98.52 rows=4148 width=0) (actual\n> time=0.709..0.709 rows=1230 loops=1)\n>                                                         Index Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                 ->  Bitmap Heap Scan on\n> pix_tr_dy_dimension_2009_10_08 a  (cost=16.63..7946.27 rows=2180\n> width=32) (actual time=28.162..1888.539 rows=677 loops=29)\n>                                       Recheck Cond: (a.source_ip_num\n> = \"outer\".source_ip_num)\n>                                       ->  Bitmap Index Scan on\n> pix_tr_dy_dimension_2009_10_08_source_ip_num  (cost=0.00..16.63\n> rows=2180 width=0) (actual time=24.044..24.044 rows=677 loops=29)\n>                                             Index Cond:\n> (a.source_ip_num = \"outer\".source_ip_num)\n>                           ->  Hash  (cost=15.50..15.50 rows=2\n> width=4) (actual time=0.040..0.040 rows=3 loops=1)\n>                                 ->  Seq Scan on pix_tr_sentryid sent\n> (cost=0.00..15.50 rows=2 width=4) (actual time=0.029..0.036 rows=3\n> loops=1)\n>                                       Filter: ((sentryid)::text ~~\n> 'edmc%'::text)\n>         ->  Subquery Scan \"*SELECT* 3\"  (cost=8814.11..8814.31 rows=9\n> width=32) (actual time=51634.494..51634.547 rows=24 loops=1)\n>               ->  HashAggregate  (cost=8814.11..8814.22 rows=9\n> width=32) (actual time=51634.490..51634.519 rows=24 loops=1)\n>                     ->  Hash Join  (cost=1250.68..8814.00 rows=9\n> width=32) (actual time=65.658..51599.190 rows=10962 loops=1)\n> 
                          Hash Cond: (\"outer\".sentryid_id =\n> \"inner\".sentryid_id)\n>                           ->  Nested Loop  (cost=1235.18..8788.07\n> rows=2067 width=32) (actual time=65.596..51574.491 rows=20261 loops=1)\n>                                 ->  Unique  (cost=1218.94..1219.05\n> rows=1 width=8) (actual time=1.781..2.104 rows=29 loops=1)\n>                                       ->  Sort\n> (cost=1218.94..1219.00 rows=21 width=8) (actual time=1.779..1.896\n> rows=307 loops=1)\n>                                             Sort Key: ssa_tr_dy.source_ip_num\n>                                             ->  Bitmap Heap Scan on\n> ssa_tr_dy  (cost=98.52..1218.48 rows=21 width=8) (actual\n> time=0.730..1.455 rows=307 loops=1)\n>                                                   Recheck Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                                   Filter:\n> ((sentryid)::text ~~ 'edmc%'::text)\n>                                                   ->  Bitmap Index\n> Scan on ssa_dn_tr_idx1  (cost=0.00..98.52 rows=4148 width=0) (actual\n> time=0.699..0.699 rows=1230 loops=1)\n>                                                         Index Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                 ->  Bitmap Heap Scan on\n> pix_tr_dy_dimension_2009_10_09 a  (cost=16.23..7543.17 rows=2067\n> width=32) (actual time=27.327..1777.293 rows=699 loops=29)\n>                                       Recheck Cond: (a.source_ip_num\n> = \"outer\".source_ip_num)\n>                                       ->  Bitmap Index Scan on\n> pix_tr_dy_dimension_2009_10_09_source_ip_num  (cost=0.00..16.23\n> rows=2067 width=0) (actual time=23.784..23.784 rows=699 loops=29)\n>                                             Index Cond:\n> (a.source_ip_num = \"outer\".source_ip_num)\n>                           ->  Hash  (cost=15.50..15.50 rows=2\n> width=4) (actual time=0.040..0.040 rows=3 loops=1)\n>                                 ->  Seq Scan on pix_tr_sentryid sent\n> (cost=0.00..15.50 rows=2 width=4) (actual time=0.029..0.036 rows=3\n> loops=1)\n>                                       Filter: ((sentryid)::text ~~\n> 'edmc%'::text)\n>         ->  Subquery Scan \"*SELECT* 4\"  (cost=9686.21..9686.45\n> rows=11 width=32) (actual time=33707.854..33707.900 rows=24 loops=1)\n>               ->  HashAggregate  (cost=9686.21..9686.34 rows=11\n> width=32) (actual time=33707.851..33707.874 rows=24 loops=1)\n>                     ->  Hash Join  (cost=1252.67..9686.07 rows=11\n> width=32) (actual time=37.055..33679.711 rows=7580 loops=1)\n>                           Hash Cond: (\"outer\".sentryid_id =\n> \"inner\".sentryid_id)\n>                           ->  Nested Loop  (cost=1237.17..9658.71\n> rows=2349 width=32) (actual time=37.001..33662.042 rows=11414 loops=1)\n>                                 ->  Unique  (cost=1218.94..1219.05\n> rows=1 width=8) (actual time=1.903..2.273 rows=29 loops=1)\n>                                       ->  Sort\n> (cost=1218.94..1219.00 rows=21 width=8) (actual time=1.901..2.045\n> rows=307 loops=1)\n>                                             Sort Key: ssa_tr_dy.source_ip_num\n>                                             ->  Bitmap Heap Scan on\n> ssa_tr_dy  (cost=98.52..1218.48 rows=21 width=8) (actual\n> time=0.788..1.570 rows=307 loops=1)\n>                                                   Recheck Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                    
               Filter:\n> ((sentryid)::text ~~ 'edmc%'::text)\n>                                                   ->  Bitmap Index\n> Scan on ssa_dn_tr_idx1  (cost=0.00..98.52 rows=4148 width=0) (actual\n> time=0.755..0.755 rows=1230 loops=1)\n>                                                         Index Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                 ->  Bitmap Heap Scan on\n> pix_tr_dy_dimension_2009_10_10 a  (cost=18.22..8410.29 rows=2349\n> width=32) (actual time=14.229..1160.004 rows=394 loops=29)\n>                                       Recheck Cond: (a.source_ip_num\n> = \"outer\".source_ip_num)\n>                                       ->  Bitmap Index Scan on\n> pix_tr_dy_dimension_2009_10_10_source_ip_num  (cost=0.00..18.22\n> rows=2349 width=0) (actual time=10.351..10.351 rows=394 loops=29)\n>                                             Index Cond:\n> (a.source_ip_num = \"outer\".source_ip_num)\n>                           ->  Hash  (cost=15.50..15.50 rows=2\n> width=4) (actual time=0.037..0.037 rows=3 loops=1)\n>                                 ->  Seq Scan on pix_tr_sentryid sent\n> (cost=0.00..15.50 rows=2 width=4) (actual time=0.028..0.033 rows=3\n> loops=1)\n>                                       Filter: ((sentryid)::text ~~\n> 'edmc%'::text)\n>         ->  Subquery Scan \"*SELECT* 5\"  (cost=7840.14..7840.32 rows=8\n> width=32) (actual time=27276.689..27276.734 rows=22 loops=1)\n>               ->  HashAggregate  (cost=7840.14..7840.24 rows=8\n> width=32) (actual time=27276.685..27276.713 rows=22 loops=1)\n>                     ->  Hash Join  (cost=1248.86..7840.04 rows=8\n> width=32) (actual time=109.205..27254.938 rows=6134 loops=1)\n>                           Hash Cond: (\"outer\".sentryid_id =\n> \"inner\".sentryid_id)\n>                           ->  Nested Loop  (cost=1233.35..7815.29\n> rows=1832 width=32) (actual time=109.142..27240.746 rows=9592 loops=1)\n>                                 ->  Unique  (cost=1218.94..1219.05\n> rows=1 width=8) (actual time=1.784..2.128 rows=29 loops=1)\n>                                       ->  Sort\n> (cost=1218.94..1219.00 rows=21 width=8) (actual time=1.782..1.922\n> rows=307 loops=1)\n>                                             Sort Key: ssa_tr_dy.source_ip_num\n>                                             ->  Bitmap Heap Scan on\n> ssa_tr_dy  (cost=98.52..1218.48 rows=21 width=8) (actual\n> time=0.724..1.457 rows=307 loops=1)\n>                                                   Recheck Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                                   Filter:\n> ((sentryid)::text ~~ 'edmc%'::text)\n>                                                   ->  Bitmap Index\n> Scan on ssa_dn_tr_idx1  (cost=0.00..98.52 rows=4148 width=0) (actual\n> time=0.694..0.694 rows=1230 loops=1)\n>                                                         Index Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                 ->  Bitmap Heap Scan on\n> pix_tr_dy_dimension_2009_10_11 a  (cost=14.41..6573.33 rows=1832\n> width=32) (actual time=19.473..938.665 rows=331 loops=29)\n>                                       Recheck Cond: (a.source_ip_num\n> = \"outer\".source_ip_num)\n>                                       ->  Bitmap Index Scan on\n> pix_tr_dy_dimension_2009_10_11_source_ip_num  (cost=0.00..14.41\n> rows=1832 width=0) (actual time=16.342..16.342 rows=331 loops=29)\n>                                             Index 
Cond:\n> (a.source_ip_num = \"outer\".source_ip_num)\n>                           ->  Hash  (cost=15.50..15.50 rows=2\n> width=4) (actual time=0.039..0.039 rows=3 loops=1)\n>                                 ->  Seq Scan on pix_tr_sentryid sent\n> (cost=0.00..15.50 rows=2 width=4) (actual time=0.028..0.034 rows=3\n> loops=1)\n>                                       Filter: ((sentryid)::text ~~\n> 'edmc%'::text)\n>         ->  Subquery Scan \"*SELECT* 6\"  (cost=7850.10..7850.28 rows=8\n> width=32) (actual time=62773.885..62773.954 rows=27 loops=1)\n>               ->  HashAggregate  (cost=7850.10..7850.20 rows=8\n> width=32) (actual time=62773.880..62773.917 rows=27 loops=1)\n>                     ->  Hash Join  (cost=1248.80..7850.00 rows=8\n> width=32) (actual time=62.896..62719.489 rows=15348 loops=1)\n>                           Hash Cond: (\"outer\".sentryid_id =\n> \"inner\".sentryid_id)\n>                           ->  Nested Loop  (cost=1233.30..7825.34\n> rows=1815 width=32) (actual time=62.814..62687.076 rows=20370 loops=1)\n>                                 ->  Unique  (cost=1218.94..1219.05\n> rows=1 width=8) (actual time=1.912..2.330 rows=29 loops=1)\n>                                       ->  Sort\n> (cost=1218.94..1219.00 rows=21 width=8) (actual time=1.912..2.063\n> rows=307 loops=1)\n>                                             Sort Key: ssa_tr_dy.source_ip_num\n>                                             ->  Bitmap Heap Scan on\n> ssa_tr_dy  (cost=98.52..1218.48 rows=21 width=8) (actual\n> time=0.765..1.583 rows=307 loops=1)\n>                                                   Recheck Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                                   Filter:\n> ((sentryid)::text ~~ 'edmc%'::text)\n>                                                   ->  Bitmap Index\n> Scan on ssa_dn_tr_idx1  (cost=0.00..98.52 rows=4148 width=0) (actual\n> time=0.733..0.733 rows=1230 loops=1)\n>                                                         Index Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                 ->  Bitmap Heap Scan on\n> pix_tr_dy_dimension_2009_10_12 a  (cost=14.35..6583.60 rows=1815\n> width=32) (actual time=39.032..2160.324 rows=702 loops=29)\n>                                       Recheck Cond: (a.source_ip_num\n> = \"outer\".source_ip_num)\n>                                       ->  Bitmap Index Scan on\n> pix_tr_dy_dimension_2009_10_12_source_ip_num  (cost=0.00..14.35\n> rows=1815 width=0) (actual time=30.805..30.805 rows=702 loops=29)\n>                                             Index Cond:\n> (a.source_ip_num = \"outer\".source_ip_num)\n>                           ->  Hash  (cost=15.50..15.50 rows=2\n> width=4) (actual time=0.046..0.046 rows=3 loops=1)\n>                                 ->  Seq Scan on pix_tr_sentryid sent\n> (cost=0.00..15.50 rows=2 width=4) (actual time=0.035..0.041 rows=3\n> loops=1)\n>                                       Filter: ((sentryid)::text ~~\n> 'edmc%'::text)\n>         ->  Subquery Scan \"*SELECT* 7\"  (cost=7376.13..7376.31 rows=8\n> width=32) (actual time=44866.931..44866.996 rows=30 loops=1)\n>               ->  HashAggregate  (cost=7376.13..7376.23 rows=8\n> width=32) (actual time=44866.927..44866.969 rows=30 loops=1)\n>                     ->  Hash Join  (cost=1248.32..7376.03 rows=8\n> width=32) (actual time=77.172..44826.884 rows=11881 loops=1)\n>                           Hash Cond: (\"outer\".sentryid_id =\n> \"inner\".sentryid_id)\n>     
                      ->  Nested Loop  (cost=1232.81..7352.06\n> rows=1677 width=32) (actual time=77.098..44803.266 rows=16665 loops=1)\n>                                 ->  Unique  (cost=1218.94..1219.05\n> rows=1 width=8) (actual time=1.816..2.149 rows=29 loops=1)\n>                                       ->  Sort\n> (cost=1218.94..1219.00 rows=21 width=8) (actual time=1.815..1.940\n> rows=307 loops=1)\n>                                             Sort Key: ssa_tr_dy.source_ip_num\n>                                             ->  Bitmap Heap Scan on\n> ssa_tr_dy  (cost=98.52..1218.48 rows=21 width=8) (actual\n> time=0.718..1.491 rows=307 loops=1)\n>                                                   Recheck Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                                   Filter:\n> ((sentryid)::text ~~ 'edmc%'::text)\n>                                                   ->  Bitmap Index\n> Scan on ssa_dn_tr_idx1  (cost=0.00..98.52 rows=4148 width=0) (actual\n> time=0.687..0.687 rows=1230 loops=1)\n>                                                         Index Cond:\n> (firstoccurrence > (now() - '8 days'::interval))\n>                                 ->  Bitmap Heap Scan on\n> pix_tr_dy_dimension_2009_10_13 a  (cost=13.87..6112.04 rows=1677\n> width=32) (actual time=27.864..1543.963 rows=575 loops=29)\n>                                       Recheck Cond: (a.source_ip_num\n> = \"outer\".source_ip_num)\n>                                       ->  Bitmap Index Scan on\n> pix_tr_dy_dimension_2009_10_13_source_ip_num  (cost=0.00..13.87\n> rows=1677 width=0) (actual time=23.339..23.339 rows=575 loops=29)\n>                                             Index Cond:\n> (a.source_ip_num = \"outer\".source_ip_num)\n>                           ->  Hash  (cost=15.50..15.50 rows=2\n> width=4) (actual time=0.050..0.050 rows=3 loops=1)\n>                                 ->  Seq Scan on pix_tr_sentryid sent\n> (cost=0.00..15.50 rows=2 width=4) (actual time=0.039..0.044 rows=3\n> loops=1)\n>                                       Filter: ((sentryid)::text ~~\n> 'edmc%'::text)\n>  Total runtime: 276143.006 ms\n>\n>\n> On Wed, Oct 14, 2009 at 10:15 PM, Tom Lane <[email protected]> wrote:\n>> Anj Adu <[email protected]> writes:\n>>> This query is doing a sequential scan on the child partitions even\n>>> though indexes on all constrained columns are present\n>>\n>> It looks to me like it's doing exactly what it is supposed to, ie,\n>> indexscan on the partitions where it would help and seqscans on the\n>> partitions where it wouldn't.  Indexscan is not better than seqscan\n>> for retrieving all or most of a table.\n>>\n>>                        regards, tom lane\n>>\n>\n", "msg_date": "Thu, 15 Oct 2009 13:51:45 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequential scan on child partition tables" }, { "msg_contents": "On Thu, Oct 15, 2009 at 2:51 PM, Anj Adu <[email protected]> wrote:\n> This appears to be a bug in the optimizer with resepct to planning\n> queries involving child partitions. It is clear that \"any\" index is\n> being ignored even if the selectivity is high. 
I had to re-write the\n> same query by explicitly \"union-all\" ' ing  the queries for individual\n> partitions.\n\nSo, did adjusting cost parameters help at all?\n", "msg_date": "Sat, 17 Oct 2009 23:08:46 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequential scan on child partition tables" }, { "msg_contents": "The actual data returned is a tiny fraction of the total table volume.\n\nIs there a way to force an index scan on the partitions in a\nguaranteed manner without resorting to re-writing queries with the\nunion all on partitions.\n\nThank you\n\nSriram\n\nOn Wed, Oct 14, 2009 at 10:15 PM, Tom Lane <[email protected]> wrote:\n> Anj Adu <[email protected]> writes:\n>> This query is doing a sequential scan on the child partitions even\n>> though indexes on all constrained columns are present\n>\n> It looks to me like it's doing exactly what it is supposed to, ie,\n> indexscan on the partitions where it would help and seqscans on the\n> partitions where it wouldn't.  Indexscan is not better than seqscan\n> for retrieving all or most of a table.\n>\n>                        regards, tom lane\n>\n", "msg_date": "Sun, 18 Oct 2009 08:24:19 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequential scan on child partition tables" } ]
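A quick way to check whether the seqscans on the large daily partitions are simply a costing decision (rather than something the per-partition UNION ALL rewrite is strictly needed for) is to re-run the plan in one session with the planner nudged away from sequential scans. The following is only a diagnostic sketch against a simplified cut of the query above (joins omitted), assuming the child tables carry CHECK constraints on thedate; enable_seqscan = off is for comparison only, never a production setting, and the cost values are illustrative:

    SET constraint_exclusion = on;    -- make sure partition pruning from the CHECK constraints is active
    SET random_page_cost = 2.0;       -- assume much of the data is cached; the default is 4.0
    SET enable_seqscan = off;         -- diagnostic only: see what the index-based plan would look like
    EXPLAIN ANALYZE
    SELECT thedate, sub_box_id, box_num, sum(summcount)
    FROM masterdevice_type_daily_warehousedim
    WHERE thedate > '2009-10-06'
    GROUP BY thedate, sub_box_id, box_num;

If the forced index plan is not actually faster, the seqscans were the right choice for those partitions; if it is clearly faster, lowering random_page_cost and checking effective_cache_size in postgresql.conf is the usual next step before resorting to hand-written per-partition UNION ALL queries.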
[ { "msg_contents": "Hi chaps,\n\nCan anyone recommend a decent server vendor in the UK?\n\nI'm looking to deploy a new machine to handle some of our non-critical data, and I'm just wondering if I can avoid the pains I've had with dell hardware recently.\n\nAlso whilst I'm asking, does anyone else find their dell BMC/DRAC interfaces just die from time to time?\n\nGlyn\n\nSend instant messages to your online friends http://uk.messenger.yahoo.com \n", "msg_date": "Thu, 15 Oct 2009 02:34:54 -0700 (PDT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "[OT] Recommended whitebox server vendors in the UK?" } ]
[ { "msg_contents": "Hi ,\n\nIn my project I use Nhibernate to connect to Postgres.\nCurrently it is very slow.\n\nI have used *Yourkit* profiller to get some key informaitons.\nIn CPU profilling i have analyzed following things : -\nLet me explain one by one : -\n\n1- NHibernate.Impl.SessionImpl.DoLoad(Type, Object, Object, LockMode,\nBoolean)\n\nTakes lots of time .\n\n2- Internally it calls thse function i am listing the last stack\n\nNpgsqlConnector.Open()\nNpgsql.NpgsqlConnectedState.Startup(NpgsqlConnector)\nNpgsql.NpgsqlState.ProcessBackendResponses(NpgsqlConnector)\n*[Wall Time] System.Net.Sockets.Socket.Poll(Int32, SelectMode)*\n\nFinally in last the socket.poll takes most of the time .\n\n\nI want to know the probably causes of the socket.poll() consumes allot of\ntime .\n\nPlease help me out to know the places why in DoLoad,DoLoadByClass and\nSocket.Poll is taking lot of time .\n\nWhat are the scenario in which it might be getting slow down , Which i need\nto look .\n\n\n-- \nThanks,\nKeshav Upadhyaya\n\nHi , In my project I use Nhibernate to connect to Postgres. Currently it is very slow. I have used Yourkit profiller to get some key informaitons.In CPU profilling i have analyzed following  things : -\nLet me explain one by one : -1- NHibernate.Impl.SessionImpl.DoLoad(Type, Object, Object, LockMode, Boolean) Takes lots of time . 2- Internally it calls thse function i am listing the last stack \nNpgsqlConnector.Open()Npgsql.NpgsqlConnectedState.Startup(NpgsqlConnector)Npgsql.NpgsqlState.ProcessBackendResponses(NpgsqlConnector)[Wall Time]  System.Net.Sockets.Socket.Poll(Int32, SelectMode)\nFinally in last the socket.poll takes most of the time . I want to know the probably causes of the socket.poll() consumes allot of time . Please help me out to know the places why in DoLoad,DoLoadByClass and Socket.Poll is taking lot of time .\nWhat are the scenario in which it might be getting slow down , Which i need to look . -- Thanks,Keshav Upadhyaya", "msg_date": "Thu, 15 Oct 2009 19:32:52 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding facing lot of time Consumed by Socket.Poll()" }, { "msg_contents": "On Thu, 15 Oct 2009, keshav upadhyaya wrote:\n> [Wall Time]  System.Net.Sockets.Socket.Poll(Int32, SelectMode)\n\nRTFM. Socket.Poll *waits* for a socket. Obviously it's going to spend \nquite a bit of time.\n\nNote that it is \"wall time\", not \"CPU time\".\n\nYou would be better investigating whatever is at the other end of the \nsocket, which I presume is Postgres. Look at what the queries actually \nare, and try EXPLAIN ANALYSE on a few.\n\nMatthew\n\n-- \n It is better to keep your mouth closed and let people think you are a fool\n than to open it and remove all doubt. 
-- Mark Twain", "msg_date": "Thu, 15 Oct 2009 15:09:11 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Regarding facing lot of time Consumed by Socket.Poll()" }, { "msg_contents": "Thanks Matthew for your quick reply .\nLet me make my self more clear.\n\nSuppose While doing one operation in my project, total time taken is 41\nmin .\n\nin this 41 min around 35 min is takne by this call --\n\nNHibernate.Impl.SessionImpl.DoLoad(Type, Object, Object, LockMode, Boolean)\nand DoLoadbyClass().\n\nAnd i believe internally these calls used\nSystem.Net.Sockets.Socket.Poll(Int32, SelectMode)\n\nWhich take most of the time not a bit* time .\n\nAnd when i Use *MSSQL *no such kind of polling happens and it work in aound\n2-3 mins .\n\nSo i believe Nhibernate config filles or some other configuration w.r.t.\nPostgres is not proper or improvement required.\n\nThanks,\nkeshav\n\n\n\nOn Thu, Oct 15, 2009 at 7:39 PM, Matthew Wakeling <[email protected]> wrote:\n\n> On Thu, 15 Oct 2009, keshav upadhyaya wrote:\n>\n>> [Wall Time] System.Net.Sockets.Socket.Poll(Int32, SelectMode)\n>>\n>\n> RTFM. Socket.Poll *waits* for a socket. Obviously it's going to spend quite\n> a bit of time.\n>\n> Note that it is \"wall time\", not \"CPU time\".\n>\n> You would be better investigating whatever is at the other end of the\n> socket, which I presume is Postgres. Look at what the queries actually are,\n> and try EXPLAIN ANALYSE on a few.\n>\n> Matthew\n>\n> --\n> It is better to keep your mouth closed and let people think you are a fool\n> than to open it and remove all doubt. -- Mark Twain\n\n\n\n\n-- \nThanks,\nKeshav Upadhyaya\n\nThanks Matthew for your quick reply . Let me make my self more clear. Suppose  While doing one operation in my project,  total time taken is 41 min . in this 41 min around 35 min is takne by this call --\nNHibernate.Impl.SessionImpl.DoLoad(Type, Object, Object, LockMode, Boolean)and DoLoadbyClass(). And i believe internally these calls used System.Net.Sockets.Socket.Poll(Int32, SelectMode)Which take most of the time not a bit* time . \nAnd when i Use MSSQL no such kind of polling happens and it work in aound 2-3 mins . So i believe Nhibernate config filles or some other configuration w.r.t. Postgres  is not proper or improvement required. \nThanks,keshav  On Thu, Oct 15, 2009 at 7:39 PM, Matthew Wakeling <[email protected]> wrote:\nOn Thu, 15 Oct 2009, keshav upadhyaya wrote:\n\n[Wall Time]  System.Net.Sockets.Socket.Poll(Int32, SelectMode)\n\n\nRTFM. Socket.Poll *waits* for a socket. Obviously it's going to spend quite a bit of time.\n\nNote that it is \"wall time\", not \"CPU time\".\n\nYou would be better investigating whatever is at the other end of the socket, which I presume is Postgres. Look at what the queries actually are, and try EXPLAIN ANALYSE on a few.\n\nMatthew\n\n-- \nIt is better to keep your mouth closed and let people think you are a fool\nthan to open it and remove all doubt.                  
-- Mark Twain-- Thanks,Keshav Upadhyaya", "msg_date": "Thu, 15 Oct 2009 20:10:00 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Regarding facing lot of time Consumed by Socket.Poll()" }, { "msg_contents": "keshav upadhyaya wrote:\n> 2- Internally it calls thse function i am listing the last stack\n> \n> NpgsqlConnector.Open()\n> Npgsql.NpgsqlConnectedState.Startup(NpgsqlConnector)\n> Npgsql.NpgsqlState.ProcessBackendResponses(NpgsqlConnector)\n> *[Wall Time] System.Net.Sockets.Socket.Poll(Int32, SelectMode)*\n> \n> Finally in last the socket.poll takes most of the time .\n> \n> \n> I want to know the probably causes of the socket.poll() consumes allot of\n> time .\n\nI don't know much about Npgsql driver, but I'd guess that it's spending\na lot of time on Socket.Poll, because it's waiting for a response from\nthe server, sleeping. If you're investigating this because you feel that\nqueries are running too slowly, you should look at what the queries are\nand investigate why they're slow in the server, e.g with EXPLAIN\nANALYZE. If you're investigating this because you're seeing high CPU\nload in the client, try finding an option in the profiler to measure CPU\ntime, not Wall time.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 15 Oct 2009 23:50:21 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Regarding facing lot of time Consumed by Socket.Poll()" }, { "msg_contents": "Hi heikki,\n\nFirst of all a big thanks for your reply .\n\n From server side query are not taking much time I have checked that .\n\nBut from the client side when i am executing Nhibernate.doload() ,\ndoloadbyclass() functions\nit is taking much of the CPU time .\nThis i have analyzed by YOURKIT profiler for .Net applications .\nMost of the CPU time is taken by Nhibernate.doload() , doloadbyclass()\nfunctions and i believe internally they are\ncalling System.Net.Sockets.Socket.Poll(Int32, SelectMode) function .\n\nSo my big worry is why Nhibernate.DoLoad() , DoLoadByClass is taking much of\nthe time ?\nIs there any Nhibernate related config file got changed ? or what are the\nmost probale places for this problem so that i can look for them. *\n\n*\nApart from this i want to ask one more question .\n\nIn one machine it calls NPGSQLconnectorpool.*GetNonpooledConnector()*\nand in other machien it callse NPGSQLconnectorpool.*GepooledConnector()\n\nDoes calling pool and nonpool version of methods make a big difference ?\n\nThanks ,\nKeshav\n**\n**\n*\n\nOn Fri, Oct 16, 2009 at 2:20 AM, Heikki Linnakangas <\[email protected]> wrote:\n\n> keshav upadhyaya wrote:\n> > 2- Internally it calls thse function i am listing the last stack\n> >\n> > NpgsqlConnector.Open()\n> > Npgsql.NpgsqlConnectedState.Startup(NpgsqlConnector)\n> > Npgsql.NpgsqlState.ProcessBackendResponses(NpgsqlConnector)\n> > *[Wall Time] System.Net.Sockets.Socket.Poll(Int32, SelectMode)*\n> >\n> > Finally in last the socket.poll takes most of the time .\n> >\n> >\n> > I want to know the probably causes of the socket.poll() consumes allot of\n> > time .\n>\n> I don't know much about Npgsql driver, but I'd guess that it's spending\n> a lot of time on Socket.Poll, because it's waiting for a response from\n> the server, sleeping. 
If you're investigating this because you feel that\n> queries are running too slowly, you should look at what the queries are\n> and investigate why they're slow in the server, e.g with EXPLAIN\n> ANALYZE. If you're investigating this because you're seeing high CPU\n> load in the client, try finding an option in the profiler to measure CPU\n> time, not Wall time.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\n\n\n-- \nThanks,\nKeshav Upadhyaya\n\nHi heikki, First of all a big thanks for your reply . From server side query are not taking much time I have checked that . But from the client side when i am executing Nhibernate.doload() , doloadbyclass() functions \nit is taking much of the CPU time . This i have analyzed by YOURKIT profiler for .Net applications  . Most of the CPU time is taken by Nhibernate.doload() , doloadbyclass() functions  and i believe internally they are \ncalling System.Net.Sockets.Socket.Poll(Int32, SelectMode) function . So my big worry is why Nhibernate.DoLoad() , DoLoadByClass is taking much of the time ?Is there any Nhibernate related config file  got changed ? or what are the most probale places for this problem so that i can look for them. \nApart from this i want to ask one more question . In one machine it calls  NPGSQLconnectorpool.GetNonpooledConnector()and in other machien  it callse  NPGSQLconnectorpool.GepooledConnector()\nDoes calling pool and nonpool version of methods make a big difference ?Thanks ,Keshav On Fri, Oct 16, 2009 at 2:20 AM, Heikki Linnakangas <[email protected]> wrote:\nkeshav upadhyaya wrote:\n> 2- Internally it calls thse function i am listing the last stack\n>\n> NpgsqlConnector.Open()\n> Npgsql.NpgsqlConnectedState.Startup(NpgsqlConnector)\n> Npgsql.NpgsqlState.ProcessBackendResponses(NpgsqlConnector)\n> *[Wall Time]  System.Net.Sockets.Socket.Poll(Int32, SelectMode)*\n>\n> Finally in last the socket.poll takes most of the time .\n>\n>\n> I want to know the probably causes of the socket.poll() consumes allot of\n> time .\n\nI don't know much about Npgsql driver, but I'd guess that it's spending\na lot of time on Socket.Poll, because it's waiting for a response from\nthe server, sleeping. If you're investigating this because you feel that\nqueries are running too slowly, you should look at what the queries are\nand investigate why they're slow in the server, e.g with EXPLAIN\nANALYZE. 
If you're investigating this because you're seeing high CPU\nload in the client, try finding an option in the profiler to measure CPU\ntime, not Wall time.\n\n--\n  Heikki Linnakangas\n  EnterpriseDB   http://www.enterprisedb.com\n-- Thanks,Keshav Upadhyaya", "msg_date": "Fri, 16 Oct 2009 11:54:44 +0530", "msg_from": "keshav upadhyaya <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Regarding facing lot of time Consumed by Socket.Poll()" }, { "msg_contents": "On Thu, Oct 15, 2009 at 10:02 AM, keshav upadhyaya\n<[email protected]> wrote:\n> Hi ,\n>\n> In my project I use Nhibernate to connect to Postgres.\n> Currently it is very slow.\n>\n> I have used Yourkit profiller to get some key informaitons.\n> In CPU profilling i have analyzed following  things : -\n> Let me explain one by one : -\n>\n> 1- NHibernate.Impl.SessionImpl.DoLoad(Type, Object, Object, LockMode,\n> Boolean)\n>\n> Takes lots of time .\n>\n> 2- Internally it calls thse function i am listing the last stack\n>\n> NpgsqlConnector.Open()\n> Npgsql.NpgsqlConnectedState.Startup(NpgsqlConnector)\n> Npgsql.NpgsqlState.ProcessBackendResponses(NpgsqlConnector)\n> [Wall Time]  System.Net.Sockets.Socket.Poll(Int32, SelectMode)\n>\n> Finally in last the socket.poll takes most of the time .\n>\n>\n> I want to know the probably causes of the socket.poll() consumes allot of\n> time .\n>\n> Please help me out to know the places why in DoLoad,DoLoadByClass and\n> Socket.Poll is taking lot of time .\n>\n> What are the scenario in which it might be getting slow down , Which i need\n> to look .\n\nI'm not sure that you're going to get too much help with this one on\nthis mailing list. It's not really a PostgreSQL question. You might\ntry the npgsql guys...\n\n...Robert\n", "msg_date": "Sat, 17 Oct 2009 06:56:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Regarding facing lot of time Consumed by Socket.Poll()" } ]
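The pooled-versus-non-pooled question at the end of this thread is worth pinning down, because the profiled stack (NpgsqlConnector.Open -> NpgsqlConnectedState.Startup -> Socket.Poll) is exactly what a brand-new connection spends its time on during the backend startup handshake; a pooled connector skips that work. One way to confirm from the server side whether the application is opening a fresh connection for every operation is to watch backend start times while it runs. A minimal sketch, assuming an 8.3/8.4-era server (where the process-id column is still called procpid):

    SELECT procpid, usename, backend_start, current_query
    FROM pg_stat_activity
    ORDER BY backend_start DESC;

If new rows with fresh backend_start values keep appearing for the application's user, the driver is reconnecting per request, and enabling connection pooling in the Npgsql connection string (or reusing the NHibernate session/connection) is the first thing to try.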
[ { "msg_contents": "Helo everbody!\n\nI need to know how much the postgres is going to disk to get blocks and how much it is going to cache? witch is the statistic table and what is the field that indicates blocks reads from the disk and the memory cache?\n\nAnother question is, what is the best memory configuration to keep more data in cache? \n\nThanks,\n\nWaldomiro\n", "msg_date": "Fri, 16 Oct 2009 00:33:47 -0300", "msg_from": "\"=?ISO-8859-1?Q?waldomiro?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "There is a statistic table?" }, { "msg_contents": "On Thu, Oct 15, 2009 at 9:33 PM, waldomiro <[email protected]> wrote:\n> Helo everbody!\n>\n> I need to know how much the postgres is going to disk to get blocks and how much it is going to cache? witch is the statistic table and what is the field that indicates blocks reads from the disk and the memory cache?\n\nYep. Use psql to access postgres:\n\npsql dbnamehere\n\\d pg_stat<tab><tab>\n\nand you should get a list like:\n\npg_stat_activity pg_statio_all_indexes\npg_statio_sys_tables pg_statistic_relid_att_index\npg_stat_user_tables\npg_stat_all_indexes pg_statio_all_sequences\npg_statio_user_indexes pg_stats\npg_stat_all_tables pg_statio_all_tables\npg_statio_user_sequences pg_stat_sys_indexes\npg_stat_bgwriter pg_statio_sys_indexes\npg_statio_user_tables pg_stat_sys_tables\npg_stat_database pg_statio_sys_sequences\npg_statistic pg_stat_user_indexes\n\njust select * from them and you can get an idea what is stored.\nInteresting ones right off the bat are:\n\npg_stat_user_tables\npg_stat_user_indexes\npg_stat_all_tables\npg_stat_all_indexes\n\nbut feel free to look around.\n\n> Another question is, what is the best memory configuration to keep more data in cache?\n\nOS or pgsql cache? It's generally better to let the OS do the\nmajority of caching unless you are sure you can pin shared_buffers in\nmemory, since allocating too much to shared_buffers may result in\nunused portions getting swapped out by some OSes which have aggressive\nswapping behaviour. Set shared_buggers to 2G or 1/4 of memory\nwhichever is smaller to start with, then monitor and adjust from\nthere.\n", "msg_date": "Thu, 15 Oct 2009 22:27:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: There is a statistic table?" }, { "msg_contents": "waldomiro wrote:\n> I need to know how much the postgres is going to disk to get \n> blocks and how much it is going to cache? witch is the \n> statistic table and what is the field that indicates blocks \n> reads from the disk and the memory cache?\n\nThe view pg_statio_all_tables will show you the number of\ndisk reads and buffer hits per table.\n\nThere are other statistics views, see\nhttp://www.postgresql.org/docs/8.4/static/monitoring-stats.html#MONITORING-STATS-VIEWS\n\n> Another question is, what is the best memory configuration to \n> keep more data in cache? \n\nThat's easy - the greater shared_buffers is, the more cache you have.\n\nAnother option is to choose shared_buffers not too large and let\nthe filesystem cache buffer the database for you.\n\nYours,\nLaurenz Albe\n", "msg_date": "Fri, 16 Oct 2009 08:27:58 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: There is a statistic table?" 
}, { "msg_contents": "\n\n\nOn 10/15/09 11:27 PM, \"Albe Laurenz\" <[email protected]> wrote:\n\n> waldomiro wrote:\n>> I need to know how much the postgres is going to disk to get\n>> blocks and how much it is going to cache? witch is the\n>> statistic table and what is the field that indicates blocks\n>> reads from the disk and the memory cache?\n> \n> The view pg_statio_all_tables will show you the number of\n> disk reads and buffer hits per table.\n\nMy understanding is that it will not show that. Since postgres can't\ndistinguish between a read that comes from OS cache and one that goes to\ndisk, you're out of luck on knowing anything exact.\nThe above shows what comes from shared_buffers versus the OS, however. And\nif reads are all buffered, they are not coming from disk. Only those that\ncome from the OS _may_ have come from disk.\n\n> \n> There are other statistics views, see\n> http://www.postgresql.org/docs/8.4/static/monitoring-stats.html#MONITORING-STA\n> TS-VIEWS\n> \n>> Another question is, what is the best memory configuration to\n>> keep more data in cache?\n> \n> That's easy - the greater shared_buffers is, the more cache you have.\n> \n> Another option is to choose shared_buffers not too large and let\n> the filesystem cache buffer the database for you.\n> \n> Yours,\n> Laurenz Albe\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 21 Oct 2009 10:17:47 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: There is a statistic table?" }, { "msg_contents": "On Wed, Oct 21, 2009 at 11:17 AM, Scott Carey <[email protected]> wrote:\n>\n>\n>\n> On 10/15/09 11:27 PM, \"Albe Laurenz\" <[email protected]> wrote:\n>\n>> waldomiro wrote:\n>>> I need to know how much the postgres is going to disk to get\n>>> blocks and how much it is going to cache? witch is the\n>>> statistic table and what is the field that indicates blocks\n>>> reads from the disk and the memory cache?\n>>\n>> The view pg_statio_all_tables will show you the number of\n>> disk reads and buffer hits per table.\n>\n> My understanding is that it will not show that.  Since postgres can't\n> distinguish between a read that comes from OS cache and one that goes to\n> disk, you're out of luck on knowing anything exact.\n> The above shows what comes from shared_buffers versus the OS, however.  And\n> if reads are all buffered, they are not coming from disk.  Only those that\n> come from the OS _may_ have come from disk.\n\nI think he meant pg's shared_buffers not the OS kernel cache.\n", "msg_date": "Wed, 21 Oct 2009 16:06:10 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: There is a statistic table?" }, { "msg_contents": "Le jeudi 22 octobre 2009 00:06:10, Scott Marlowe a écrit :\n> On Wed, Oct 21, 2009 at 11:17 AM, Scott Carey <[email protected]> \nwrote:\n> > On 10/15/09 11:27 PM, \"Albe Laurenz\" <[email protected]> wrote:\n> >> waldomiro wrote:\n> >>> I need to know how much the postgres is going to disk to get\n> >>> blocks and how much it is going to cache? witch is the\n> >>> statistic table and what is the field that indicates blocks\n> >>> reads from the disk and the memory cache?\n> >>\n> >> The view pg_statio_all_tables will show you the number of\n> >> disk reads and buffer hits per table.\n> >\n> > My understanding is that it will not show that. 
Since postgres can't\n> > distinguish between a read that comes from OS cache and one that goes to\n> > disk, you're out of luck on knowing anything exact.\n> > The above shows what comes from shared_buffers versus the OS, however.\n> > And if reads are all buffered, they are not coming from disk. Only\n> > those that come from the OS _may_ have come from disk.\n> \n> I think he meant pg's shared_buffers not the OS kernel cache.\n> \n\npgfincore lets you know whether blocks are in the OS kernel cache or not. \n\n-- \nCédric Villemain\nAdministrateur de Base de Données\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org", "msg_date": "Thu, 22 Oct 2009 13:04:57 +0200", "msg_from": "Cédric Villemain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: There is a statistic table?" } ]
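To put numbers on the buffer-cache discussion in this thread, the per-table counters in pg_statio_user_tables can be turned into a shared_buffers hit ratio. A minimal sketch, heap blocks only (index and toast blocks have their own columns), and with the caveat raised above that heap_blks_read counts reads that may well have been served from the OS cache rather than physical disk:

    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS buffer_hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC
    LIMIT 20;

Tables showing a low ratio together with a large heap_blks_read count are the first candidates to benefit from more shared_buffers, or simply from more RAM left to the filesystem cache.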
[ { "msg_contents": "\nHi - \n I'm stuck on a query performance issue, and I would sincerely appreciate\nany server setting/ query improvement suggestions. I am attempting to\nretrieve records stored in 20 tables that are related to a one or more\nrecords in a single table (let's call the 20 tables `data tables` and the\nsingle table the `source table`). Each data table is partitioned and\ninherits directly from a master table (each partition holds about 200,000\nrecords and there are 60 partitions per master table -- the query is\nexpected to return ~20 records/partition ). The source table is not\npartitioned. Each data table partition has a primary key that is identical\nto a column of the source table (each data table partition has a foreign key\nconstraint on the source table's primary key -- Note that the foreign key\nconstraint is implicit in certain cases -- i.e. data_table_01 has a foreign\nkey on data_table_02 that, in turn, has a foreign key on the source_table). \n When I attempt to produce a result set from the data tables using a\nsubquery of the source table, the query takes ~60-200sec to run for a single\nrow returned from the source table (time increases exponentially with\nadditional subquery results). \n I perform a VACUUM ANALYZE nightly, attempted to set the\n`join_collapse_limit` to '1' and increased the statistics collection on the\ndata table partition primary keys to 1000. I am running Postgres 8.3 on\nFedora 10 using an Intel chipset (four processor cores) and 6Gb of RAM --\nthere are a maximum of 10 concurrent connections on the database at any\ngiven time. I've attached some descriptive information below:\n\n The fastest query I have been able to manage is the following\n(abridged):\n\n SELECT * \n FROM \n ( SELECT \"<column>\" \n FROM <source_table> \n WHERE \"<some_column>\" = '<somecondition>' \n ) AS source_table\n LEFT JOIN <data_table_01> USING (\"<primary_key>\")\n LEFT JOIN <data_table_02> USING (\"<primary_key>\")\n ...\n LEFT JOIN <data_table_19> USING (\"<primary_key>\")\n LEFT JOIN <data_table_20> USING (\"<primary_key>\")\n ;\n\nThe following are the non-default parameters in postgres.conf:\n max_locks_per_transaction = 2056 \n shared_buffers = 128MB\n max_fsm_pages = 204800\n max_fsm_relations = 3000\n constraint_exclusion = on\n \n I've attached an abridged query plan below (abridged for the index scan\nresults for each of the partition tables -- the total time to scan the\npartition tables is embedded in the \"append\" actual cost time, which is\ngenerally <5ms excluding the subquery). To summarize, it appears that each\ndata_table identifies the relevant rows very quickly (total of all `appends`\nare ~1.2 seconds , but each join is slow (2-5 seconds for *each* join --\nmaking 20*[2-5 sec]= [40-100 sec] for the joins). Note that\n`join_collapse_limit` is set to 1 on this session (also note the estimated\ncosts by the query plan appear a bit large). 
\n----------------------------------------\n Nested Loop Left Join \n(cost=0.00..3855698559605789324702789208529861500799141255212203379626060432086506635309749052112896.00\nrows=311931683242245219905715698384821128847649568166127853072768749671189521273717781287665664\nwidth=8884) (actual time=804.729..54752.177 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_20>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..107594618658011016223736297174365328662423744442156697061527790311709493896861974528.00\nrows=8704464966056878513204190237608265494011078822923959805624806664887393418376971812864\nwidth=8532) (actual time=785.966..52705.399 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_19>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..3002477536026971680044620678533152337542410774574698677807503124668875305648128.00\nrows=242900749928754345845996721555781173233674623132666614323075798898771156715700224\nwidth=8180) (actual time=739.681..50493.225 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_18>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..83787067396008347529564745762189126147117204941577957444468815071220334592.00\nrows=6778239377396060082196394544307189661486516189900352952164276255864001658880\nwidth=7828) (actual time=725.434..48401.074 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_17>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..2338137489489500761700457981791537754699538979985950749900036728946688.00\nrows=189154115739814280777160516793297638614094710485086175961401125362466816\nwidth=7476) (actual time=714.956..46243.094 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_16>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..65247852053209794932032080731091779649041507376730152396969213952.00\nrows=5278474735096636176089465265387923568528412860735200833430607626240\nwidth=7124) (actual time=693.772..43828.871 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_15>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..1820849620634231609086388526672297318981782459980059921874944.00\nrows=147300242643928143660653163557330700833114104062501442583265280\nwidth=6772) (actual time=683.211..41744.448 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_14>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..50814256409136028027737552880521987178900164286101848064.00\nrows=4110679017025470954778089725832553293993488240233704062976 width=6388)\n(actual time=666.928..39803.912 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_13>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..1418011137688622742268484288103546481505440632930304.00\nrows=114716154195056436845646872605995872908472110832156672 width=6004)\n(actual time=646.420..37732.637 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_12>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..39574524842926171903719303887283825536044892160.00\nrows=3201237512747515109467771442088115336887389913088 width=5620) (actual\ntime=617.556..32691.208 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_11>.\"<primary_key>\")\n -> Nested Loop Left Join 
\n(cost=0.00..1104461658892753285177242217114991400583168.00\nrows=89340995439021908020325536097950548241678336 width=5180) (actual\ntime=605.788..30405.082 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_10>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..30823820972139181588887753190839156736.00\nrows=2493369520530492561237378929345605664768 width=4740) (actual\ntime=590.691..28256.296 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_09>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..860253694217667941856909279100928.00\nrows=69586043117709763748119584889110528 width=4300) (actual\ntime=562.560..23995.033 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_08>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..24008376616267968521992929280.00\nrows=1942051706828399753701512708096 width=3860) (actual\ntime=522.649..21813.935 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_07>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..670029782636971643895808.00 rows=54199746334871823453257728\nwidth=3420) (actual time=499.865..18377.510 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_06>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..18700458482955792384.00 rows=1512614519963725594624 width=2980)\n(actual time=469.866..14939.783 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_05>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..521929130617911.38 rows=42217091058335856 width=2500) (actual\ntime=457.911..12760.616 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_04>.\"<primary_key>\")\n -> Nested Loop Left Join \n(cost=0.00..14566671439.99 rows=1178275124662 width=2020) (actual\ntime=446.057..10476.572 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\" =\n<data_table_03>.\"<primary_key>\")\n -> Nested Loop Left Join (cost=0.00..407379.92\nrows=32884816 width=1540) (actual time=412.273..5911.776 rows=953 loops=1)\n Join Filter: (<data_table_01>.\"<primary_key>\"\n= <data_table_02>.\"<primary_key>\")\n -> Nested Loop Left Join (cost=0.00..522.73\nrows=918 width=244) (actual time=380.108..1176.906 rows=953 loops=1)\n Join Filter: (<source_table>.\"<column>\" =\n<data_table_01>.\"<primary_key>\")\n -> Index Scan using\n\"<source_table_column_index>\" on <source_table> (cost=0.00..8.27 rows=1\nwidth=40) (actual time=0.104..0.106 rows=1 loops=1)\n Index Cond: (\"<some_column>\" =\n<somecondition>::bigint)\n -> Append (cost=0.00..513.69 rows=62\nwidth=212) (actual time=379.923..1174.640 rows=953 loops=1)\n -> Append (cost=0.00..442.42 rows=62\nwidth=1304) (actual time=4.720..4.917 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=488)\n(actual time=4.590..4.738 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=488)\n(actual time=2.181..2.341 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=488)\n(actual time=2.077..2.232 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=448)\n(actual time=3.380..3.550 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=448)\n(actual time=3.399..3.547 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=448)\n(actual time=2.060..2.230 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=448) (actual\ntime=4.254..4.410 rows=1 loops=953)\n -> Append (cost=0.00..442.17 
rows=62 width=448) (actual\ntime=2.025..2.193 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=448) (actual\ntime=2.131..2.336 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=392) (actual\ntime=5.043..5.228 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=392) (actual\ntime=1.941..2.110 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=392) (actual\ntime=1.788..1.973 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=360) (actual\ntime=1.933..2.122 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=360) (actual\ntime=2.318..2.469 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=360) (actual\ntime=2.055..2.199 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=360) (actual\ntime=.984..2.129 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=360) (actual\ntime=2.14..2.256 rows=1 loops=953)\n -> Append (cost=0.00..442.17 rows=62 width=360) (actual\ntime=1.91..2.081 rows=1 loops=953)\n Total runtime: 54819.860 ms\n\nThanks in advance - \n\nWill\n-- \nView this message in context: http://www.nabble.com/Improving-join-performance-over-multiple-moderately-wide-tables-tp25932408p25932408.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 16 Oct 2009 14:12:24 -0700 (PDT)", "msg_from": "miller_2555 <[email protected]>", "msg_from_op": true, "msg_subject": "Improving join performance over multiple moderately wide\n tables" }, { "msg_contents": "On Fri, Oct 16, 2009 at 5:12 PM, miller_2555\n<[email protected]> wrote:\n>  [...snip...] attempted to set the `join_collapse_limit` to '1' [...]\n\nThat seems like an odd thing to do - why did you do this? What\nhappens if you don't?\n\nI have never seen anything like the bizarrely large row estimates that\nyou have here. Any chance you can extract a standalone, reproducible\ntest case?\n\n...Robert\n", "msg_date": "Sun, 18 Oct 2009 20:49:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving join performance over multiple moderately wide tables" } ]
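To follow up on Robert's question about join_collapse_limit, a sketch of the comparison worth running (table and column names are placeholders modeled on the abridged query, since the real schema was not posted). Setting join_collapse_limit to 1 pins the join order to exactly the order the JOINs are written, so restoring the default (8) or raising it above the number of joined tables lets the planner search for a better join order; whether that helps also depends on fixing the implausible row estimates seen in the plan:

SET join_collapse_limit = 24;   -- 1 forces the written join order; the default is 8
SET from_collapse_limit = 24;
EXPLAIN ANALYZE
SELECT *
FROM ( SELECT some_column FROM source_table WHERE filter_column = 'some_condition' ) AS src
LEFT JOIN data_table_01 USING (primary_key)
LEFT JOIN data_table_02 USING (primary_key)
-- ... the remaining 18 joins exactly as in the original query ...
;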
[ { "msg_contents": "Folks,\n\nWe have just migrated from Oracle to PG.\n\nWe have a database that has approx 3 mil rows and one of the columns has a cardinality\nof only 0.1% (3000 unique values). \n\nWe have to issue several queries that use this low cardinality column in a WHERE clause\nas well as see this column participating in JOINS (ouch!).\n\nA regular B-Tree index has been created on these columns.\n\nIn Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw performance go\nthrough the roof. I know Postgres does not have Bitmap indexes,\nbut is there a reasonable alternative to boost performance in situations where low cardinality\ncolumns are involved ?\n\nI dont have the option of changing schemas - so please dont go there :)\n\nTIA,\nVK\n\nFolks,We have just migrated from Oracle to PG.We have a database that has approx 3 mil rows and one of the columns has a cardinalityof only 0.1% (3000 unique values). We have to issue several queries that use this low cardinality column in a WHERE clauseas well as see this column participating in JOINS (ouch!).A regular B-Tree index has been created on these columns.In Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw performance gothrough the roof. I know Postgres does not have Bitmap indexes,but is there a reasonable alternative to boost performance in situations where low cardinalitycolumns are involved ?I dont have the option of changing schemas - so please dont go there\n :)TIA,VK", "msg_date": "Fri, 16 Oct 2009 16:36:57 -0700 (PDT)", "msg_from": "Vikul Khosla <[email protected]>", "msg_from_op": true, "msg_subject": "Indexes on low cardinality columns" }, { "msg_contents": "On Fri, Oct 16, 2009 at 4:36 PM, Vikul Khosla <[email protected]> wrote:\n> In Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw\n> performance go\n> through the roof. I know Postgres does not have Bitmap indexes,\n> but is there a reasonable alternative to boost performance in situations\n> where low cardinality\n> columns are involved ?\n\nDo you need to query on all of the 3,000 values?\n\nIf it's just particular values which are common i would suggest using\npartial indexes on some other column with a where clause restricting\nthem to only one value in the low-cardinality column. But I wouldn't\nwant to have 3,000 indexes.\n\nAlternately you could try partitioning the table, though 3,000\npartitions is a lot too. If you often update this value then\npartitioning wouldn't work well anyways (but then bitmap indexes\nwouldn't have worked well in oracle either)\n\n-- \ngreg\n", "msg_date": "Fri, 16 Oct 2009 18:27:15 -0700", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes on low cardinality columns" }, { "msg_contents": "Thanks Greg!.\n\nYes, we do need to query on all 3000 values ... potentially. Considering\nthat when we changed the B-Tree indexes to Bitmap indexes in Oracle\nwe saw a huge performance boost ... doesn't that suggest that absence of this\nfeature in PG is a constraint ?\n\nAre there any other clever workarounds to boosting performance involving\nlow queries on low cardinality columns ? 
i.e avoiding a full table scan ?\n\nVK\n\n\n\n\n________________________________\nFrom: Greg Stark <[email protected]>\nTo: Vikul Khosla <[email protected]>\nCc: [email protected]\nSent: Fri, October 16, 2009 8:27:15 PM\nSubject: Re: [PERFORM] Indexes on low cardinality columns\n\nOn Fri, Oct 16, 2009 at 4:36 PM, Vikul Khosla <[email protected]> wrote:\n> In Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw\n> performance go\n> through the roof. I know Postgres does not have Bitmap indexes,\n> but is there a reasonable alternative to boost performance in situations\n> where low cardinality\n> columns are involved ?\n\nDo you need to query on all of the 3,000 values?\n\nIf it's just particular values which are common i would suggest using\npartial indexes on some other column with a where clause restricting\nthem to only one value in the low-cardinality column. But I wouldn't\nwant to have 3,000 indexes.\n\nAlternately you could try partitioning the table, though 3,000\npartitions is a lot too. If you often update this value then\npartitioning wouldn't work well anyways (but then bitmap indexes\nwouldn't have worked well in oracle either)\n\n-- \ngreg\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nThanks Greg!.Yes, we do need to query on all 3000 values ... potentially. Consideringthat when we changed the B-Tree indexes to Bitmap indexes in Oraclewe saw a huge performance boost ... doesn't that suggest that absence of thisfeature in PG is a constraint ?Are there any other clever workarounds to boosting performance involvinglow queries on low cardinality columns ? i.e avoiding a full table scan ?VKFrom: Greg Stark <[email protected]>To: Vikul Khosla\n <[email protected]>Cc: [email protected]: Fri, October 16, 2009 8:27:15 PMSubject: Re: [PERFORM] Indexes on low cardinality columns\nOn Fri, Oct 16, 2009 at 4:36 PM, Vikul Khosla <[email protected]> wrote:> In Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw> performance go> through the roof. I know Postgres does not have Bitmap indexes,> but is there a reasonable alternative to boost performance in situations> where low cardinality> columns are involved ?Do you need to query on all of the 3,000 values?If it's just particular values which are common i would suggest usingpartial indexes on some other column with a where clause restrictingthem to only one value in the low-cardinality column. But I wouldn'twant to have 3,000 indexes.Alternately you could try partitioning the table, though 3,000partitions is a lot too. If you often update this value thenpartitioning wouldn't work well anyways (but\n then bitmap indexeswouldn't have worked well in oracle either)-- greg-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 17 Oct 2009 08:21:38 -0700 (PDT)", "msg_from": "Vikul Khosla <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes on low cardinality columns" }, { "msg_contents": "Thanks Greg!.\n\nYes, we do need to query on all 3000 values ... potentially. Considering\nthat when we changed the B-Tree indexes to Bitmap indexes in Oracle\nwe saw a huge performance boost ... doesn't that suggest that absence of this\nfeature in PG is a constraint ?\n\nAre there any other clever workarounds to boosting performance involving\nlow queries on low cardinality columns ? 
i.e avoiding a full table scan ?\n\nVK\n\n________________________________\nFrom: Greg Stark <[email protected]>\nTo: Vikul Khosla <[email protected]>\nCc: [email protected]\nSent: Fri, October 16, 2009 8:27:15 PM\nSubject: Re: [PERFORM] Indexes on low cardinality columns\n\nOn Fri, Oct 16, 2009 at 4:36 PM, Vikul Khosla <[email protected]> wrote:\n> In Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw\n> performance go\n> through the roof. I know Postgres does not have Bitmap indexes,\n> but is there a reasonable alternative to boost performance in situations\n> where low cardinality\n> columns are involved ?\n\nDo you need to query on all of the 3,000 values?\n\nIf it's just particular values which are common i would suggest using\npartial indexes on some other column with a where clause restricting\nthem to only one value in the low-cardinality column. But I wouldn't\nwant to have 3,000 indexes.\n\nAlternately you could try partitioning the table, though 3,000\npartitions is a lot too. If you often update this value then\npartitioning wouldn't work well anyways (but then bitmap indexes\nwouldn't have worked well in oracle either)\n\n-- \ngreg\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nThanks Greg!.Yes, we do need to query on all 3000 values ... potentially. Consideringthat when we changed the B-Tree indexes to Bitmap indexes in Oraclewe saw a huge performance boost ... doesn't that suggest that absence of thisfeature in PG is a constraint ?Are there any other clever workarounds to boosting performance involvinglow queries on low cardinality columns ? i.e avoiding a full table scan ?VKFrom: Greg Stark <[email protected]>To: Vikul Khosla\n <[email protected]>Cc: [email protected]: Fri, October 16, 2009 8:27:15 PMSubject: Re: [PERFORM] Indexes on low cardinality columns\nOn Fri, Oct 16, 2009 at 4:36 PM, Vikul Khosla <[email protected]> wrote:> In Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw> performance go> through the roof. I know Postgres does not have Bitmap indexes,> but is there a reasonable alternative to boost performance in situations> where low cardinality> columns are involved ?Do you need to query on all of the 3,000 values?If it's just particular values which are common i would suggest usingpartial indexes on some other column with a where clause restrictingthem to only one value in the low-cardinality column. But I wouldn'twant to have 3,000 indexes.Alternately you could try partitioning the table, though 3,000partitions is a lot too. If you often update this value thenpartitioning wouldn't work well anyways (but\n then bitmap indexeswouldn't have worked well in oracle either)-- greg-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 17 Oct 2009 10:02:55 -0700 (PDT)", "msg_from": "Vikul Khosla <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes on low cardinality columns" }, { "msg_contents": "On Sat, Oct 17, 2009 at 1:02 PM, Vikul Khosla <[email protected]> wrote:\n>\n> Thanks Greg!.\n>\n> Yes, we do need to query on all 3000 values ... potentially. Considering\n> that when we changed the B-Tree indexes to Bitmap indexes in Oracle\n> we saw a huge performance boost ... 
doesn't that suggest that absence of\n> this\n> feature in PG is a constraint ?\n\nMaybe, but it's hard to speculate since you haven't provided any data. :-)\n\nAre you running PG on the same hardware you used for Oracle? Have you\ntuned postgresql.conf? What is the actual runtime of your query under\nOracle with a btree index, Oracle with a bitmap index, and PostgreSQL\nwith a btree index?\n\nIt's not immediately obvious to me why a bitmap index would be better\nfor a case with so many distinct values. Seems like the bitmap would\ntend to be sparse. But I'm just hand-waving here, since we have no\nactual performance data to look at. Keep in mind that PostgreSQL will\nconstruct an in-memory bitmap from a B-tree index in some situations,\nwhich can be quite fast. That begs the question of what the planner\nis deciding to do now - it would be really helpful if you could post\nsome EXPLAIN ANALYZE results.\n\n> Are there any other clever workarounds to boosting performance involving\n> low queries on low cardinality columns ? i.e avoiding a full table scan ?\n\nHere again, if you post the EXPLAIN ANALYZE results from your queries,\nit might be possible for folks on this list to offer some more\nspecific suggestions.\n\nIf you query mostly on this column, you could try clustering the table\non that column (and re-analyzing).\n\n...Robert\n", "msg_date": "Sat, 17 Oct 2009 13:33:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes on low cardinality columns" }, { "msg_contents": "On Sat, Oct 17, 2009 at 10:02 AM, Vikul Khosla <[email protected]> wrote:\n>\n> Thanks Greg!.\n>\n> Yes, we do need to query on all 3000 values ... potentially. Considering\n> that when we changed the B-Tree indexes to Bitmap indexes in Oracle\n> we saw a huge performance boost ... doesn't that suggest that absence of\n> this\n> feature in PG is a constraint ?\n\nWas the bitmap index in Oracle used all by itself, or was it used in\nconcert with other bitmaps (either native bitmap indexes or a bitmap\nconversion of a non-bitmap index) to produce the speed up?\n\n> Are there any other clever workarounds to boosting performance involving\n> low queries on low cardinality columns ? i.e avoiding a full table scan ?\n\nHave you tired setting enable_seqscan=off to see what plan that\nproduces and whether it is faster or slower? If it is better, then\nlowering random_page_cost or increasing cpu_tuple_cost might help\nmotivate it to make that decision without having to resort to\nenable_seqscan. 
Of course tuning those settings just to focus on one\nquery could backfire rather badly.\n\n\nJeff\n", "msg_date": "Sat, 17 Oct 2009 14:24:40 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes on low cardinality columns" }, { "msg_contents": "If the table can be clustered on that column, I suspect\nit'd be a nice case for the grouped index tuples patch\nhttp://community.enterprisedb.com/git/\n\nActually, simply clustering on that column might give\nmajor speedups anyway.\n\nVikul Khosla wrote:\n> Folks,\n> \n> We have just migrated from Oracle to PG.\n> \n> We have a database that has approx 3 mil rows and one of the columns has\n> a cardinality\n> of only 0.1% (3000 unique values).\n> \n> We have to issue several queries that use this low cardinality column in\n> a WHERE clause\n> as well as see this column participating in JOINS (ouch!).\n> \n> A regular B-Tree index has been created on these columns.\n> \n> In Oracle, we replaced the B-Tree Indexes with Bitmap indexes and saw\n> performance go\n> through the roof. I know Postgres does not have Bitmap indexes,\n> but is there a reasonable alternative to boost performance in situations\n> where low cardinality\n> columns are involved ?\n> \n> I dont have the option of changing schemas - so please dont go there :)\n> \n> TIA,\n> VK\n\n", "msg_date": "Mon, 19 Oct 2009 06:59:43 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes on low cardinality columns" } ]
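A rough sketch pulling together the suggestions from this thread (table, column, and index names are placeholders since the real schema was never posted). CLUSTER physically rewrites the table in index order, takes an exclusive lock while it runs, and is not maintained automatically, so it has to be repeated as the table changes; the enable_seqscan test is a diagnostic only and should not be left set:

-- physically order the table by the low-cardinality column, then refresh statistics
CLUSTER my_table USING my_table_lowcard_idx;
ANALYZE my_table;

-- optionally, a partial index covering one frequently queried value of the column
CREATE INDEX my_table_hot_idx ON my_table (other_col) WHERE lowcard_col = 'hot_value';

-- diagnostic: check whether avoiding the sequential scan is actually faster
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE lowcard_col = 'some_value';
RESET enable_seqscan;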